Dataset fields: pageid (int64), title (string), revid (int64), description (string), categories (list), markdown (string).
pageid: 52,018,264
title: British hydrogen bomb programme
revid: 1,111,861,409
description: British effort to develop hydrogen bombs between 1952 and 1958
categories: [ "Articles containing video clips", "Nuclear history of the United Kingdom", "Nuclear weapon design", "United Kingdom–United States military relations" ]
The British hydrogen bomb programme was the ultimately successful British effort to develop hydrogen bombs between 1952 and 1958. During the early part of the Second World War, Britain had a nuclear weapons project, codenamed Tube Alloys. At the Quebec Conference in August 1943, British prime minister Winston Churchill and United States president Franklin Roosevelt signed the Quebec Agreement, merging Tube Alloys into the American Manhattan Project, in which many of Britain's top scientists participated. The British government trusted that America would share nuclear technology, which it considered to be a joint discovery, but the United States Atomic Energy Act of 1946 (also known as the McMahon Act) ended technical cooperation. Fearing a resurgence of American isolationism, and the loss of Britain's great power status, the British government resumed its own development effort, which was codenamed "High Explosive Research". The successful nuclear test of a British atomic bomb in Operation Hurricane in October 1952 represented an extraordinary scientific and technological achievement. Britain became the world's third nuclear power, reaffirming the country's status as a great power, but hopes that the United States would be sufficiently impressed to restore the nuclear Special Relationship were soon dashed. In November 1952, the United States conducted the first successful test of a true thermonuclear device or hydrogen bomb. Britain was therefore still several years behind in nuclear weapons technology. The Defence Policy Committee, chaired by Churchill and consisting of the senior Cabinet members, considered the political and strategic implications in June 1954, and concluded that "we must maintain and strengthen our position as a world power so that Her Majesty's Government can exercise a powerful influence in the counsels of the world." In July 1954, Cabinet agreed to proceed with the development of thermonuclear weapons. The scientists at the United Kingdom Atomic Energy Authority's Atomic Weapons Establishment at Aldermaston in Berkshire included William Penney, William Cook, Ken Allen, Samuel Curran, Henry Hulme, Bryan Taylor and John Ward. They did not know how to build a hydrogen bomb, but produced three designs: Orange Herald, a large boosted fission weapon; Green Bamboo, an interim thermonuclear design; and Green Granite, a true thermonuclear design. The first series of Operation Grapple tests involved Britain's first airdrop of a thermonuclear bomb. Although hailed as a success at the time, the first test of the Green Granite design was a failure. The second test validated Orange Herald as a usable design of a megaton weapon, but it was not a thermonuclear bomb, and the core boosting did not work. A third test attempted to correct the Green Granite design, but was another failure. In the Grapple X test in November 1957, they successfully tested a thermonuclear design. The Grapple Y test the following April obtained most of its yield from nuclear fusion, and the Grapple Z test series later that year demonstrated a mastery of thermonuclear weapons technology. An international moratorium on nuclear tests commenced on 31 October 1958, and Britain ceased atmospheric testing for good. The successful development of the hydrogen bomb, along with the Sputnik crisis, resulted in the 1958 US–UK Mutual Defence Agreement, in which the nuclear Special Relationship was restored. 
## Background ### Tube Alloys The neutron was discovered by James Chadwick at the Cavendish Laboratory at the University of Cambridge in February 1932, and in April 1932, his Cavendish colleagues John Cockcroft and Ernest Walton split lithium atoms with accelerated protons. In December 1938, Otto Hahn and Fritz Strassmann at Hahn's laboratory in Berlin-Dahlem bombarded uranium with slow neutrons, and discovered that barium had been produced, and therefore that the uranium nucleus had been split. Hahn wrote to his colleague Lise Meitner, who, with her nephew Otto Robert Frisch, developed a theoretical explanation of the process. By analogy with the division of biological cells, they named the process "fission". The discovery of fission raised the possibility that an extremely powerful atomic bomb could be created. Frisch and Rudolf Peierls, both German refugee scientists working in Britain, calculated the critical mass of a metallic sphere of pure uranium-235, and found that instead of tons, as everyone had assumed, as little as 1 to 10 kilograms (2.2 to 22.0 lb) would suffice, and would explode with the power of thousands of tons of dynamite. The MAUD Committee was established to investigate further. It reported that an atomic bomb was technically feasible, and recommended pursuing its development as a matter of urgency. A new directorate known as Tube Alloys was created to coordinate this effort. Sir John Anderson, the Lord President of the Council, became the minister responsible, and Wallace Akers from Imperial Chemical Industries (ICI) was appointed the director of Tube Alloys. ### Manhattan Project In July 1940, Britain offered the United States access to its scientific research, and Cockcroft briefed American scientists on British nuclear weapons developments. He discovered that the American S-1 Project (later renamed the Manhattan Project) was smaller than the British, and not as far advanced. The two projects exchanged information, but did not initially combine their efforts, ostensibly over concerns about American security. Ironically, it was the British project that had already been penetrated by atomic spies for the Soviet Union. The United Kingdom did not have the manpower or resources of the United States, and despite its early and promising start, Tube Alloys fell behind its American counterpart. The British considered producing an atomic bomb without American help, but it would require overwhelming priority, disruption to other wartime projects was inevitable, and it was unlikely to be ready in time to affect the outcome of the war in Europe. At the Quebec Conference in August 1943, the Prime Minister, Winston Churchill, and the President of the United States, Franklin Roosevelt, signed the Quebec Agreement, which merged the two national projects. The Quebec Agreement established the Combined Policy Committee and the Combined Development Trust to coordinate their efforts. The 19 September 1944 Hyde Park Agreement extended both commercial and military cooperation into the post-war period. A British mission led by Akers assisted in the development of gaseous diffusion technology at the SAM Laboratories in New York. Another, headed by Mark Oliphant, assisted with the electromagnetic separation process at the Berkeley Radiation Laboratory. Cockcroft became the director of the joint British-Canadian Montreal Laboratory. A British mission to the Los Alamos Laboratory was led by Chadwick, and later Peierls, which included several of Britain's most eminent scientists. 
As overall head of the British Mission, Chadwick forged a close and successful partnership, and ensured that British participation was complete and wholehearted. ### End of American cooperation With the end of the war the Special Relationship between Britain and the United States became, in the words of Margaret Gowing, "very much less special". The British government had trusted that America would share nuclear technology, which it considered a joint discovery. On 8 August 1945 the Prime Minister, Clement Attlee, sent a message to President Harry Truman in which he referred to both of them as "heads of the Governments which have control of this great force". On 9 November 1945, Attlee and the Prime Minister of Canada, Mackenzie King, went to Washington, D.C., to confer with Truman about future cooperation in nuclear weapons and nuclear power. They signed a Memorandum of Intention that replaced the Quebec Agreement. The three leaders agreed that there would be full and effective cooperation on atomic energy, but British hopes were soon disappointed. The Atomic Energy Act of 1946 (McMahon Act), which was signed into law by Truman on 1 August 1946, ended technical cooperation. Its control of "restricted data" prevented the United States' allies from receiving any information. This partly resulted from the arrest for espionage of British physicist Alan Nunn May, who had worked in the Montreal Laboratory, in February 1946, while the legislation was being debated. It was but the first of a series of spy scandals. The arrest of Klaus Fuchs in January 1950, and the June 1951 defection of Donald Maclean, who had served as a British member of the Combined Policy Committee from January 1947 to August 1948, left Americans with a distrust of British security arrangements. The remaining British scientists working in the United States were denied access to papers that they had written just days before. ### Resumption of independent UK efforts Attlee set up a cabinet sub-committee, the Gen 75 Committee (known informally by Attlee as the "Atomic Bomb Committee"), on 10 August 1945 to examine the feasibility of an independent British nuclear weapons programme. The Chiefs of Staff Committee considered the issue of nuclear weapons in July 1946, and recommended that Britain acquire them. A nuclear reactor and plutonium-processing facility was approved by the Gen 75 Committee on 18 December 1945 "with the highest urgency and importance". The decision to proceed was formally made on 8 January 1947 at a meeting of Gen 163, another cabinet subcommittee, and was publicly announced in the House of Commons on 12 May 1948. D notice No. 25 forbade the publication of details on the design, construction or location of atomic weapons. The project was given the cover name "High Explosive Research". Production facilities were constructed under the direction of Christopher Hinton, who established his headquarters in a former Royal Ordnance Factory at Risley in Lancashire. These included a uranium metal plant at Springfields, nuclear reactors and a plutonium processing plant at Windscale, and a gaseous diffusion uranium enrichment facility at Capenhurst, near Chester. Uranium ore was stockpiled at Springfields. As the American nuclear programme expanded, its requirements became greater than the production of the existing mines.
To gain access to the British stockpile, the Americans reopened negotiations, which resulted in the 1948 Modus Vivendi, which allowed for consultation on the use of nuclear weapons, and limited sharing of technical information. As Chief Superintendent Armament Research (CSAR, pronounced "Caesar"), Penney directed bomb design from Fort Halstead. In 1951 his design group moved to a new site at Aldermaston in Berkshire. The first British atomic bomb was successfully tested in Operation Hurricane on 3 October 1952. Britain thereby became the third country to test nuclear weapons. The first Blue Danube atomic bombs were delivered to Bomber Command in November 1953, although the V bombers to deliver them to their targets were not available until 1955. In the meantime, nuclear deterrence was provided by the United States Strategic Air Command, which had begun operating from British bases in 1949. ## Decision The successful test of an atomic bomb represented an extraordinary scientific and technological achievement. Britain became the world's third nuclear power, reaffirming its status as a great power, but hopes that the United States would be sufficiently impressed to restore the Special Relationship were soon dashed. On 1 November 1952, the United States conducted Ivy Mike, the first successful test of a true thermonuclear device (also known as a hydrogen bomb). Due to its physical size and use of cryogenic liquid deuterium, it was not suitable for use as a deliverable weapon, but the Castle Bravo test on 1 March 1954 used a much smaller device with solid lithium deuteride. Boosted by the nuclear fusion reaction in lithium-7, the yield of 15 megatonnes of TNT (63 PJ) was more than twice what had been expected, and indeed was the largest detonation the Americans would ever carry out. This resulted in widespread radioactive fallout that affected 236 Marshall Islanders, 28 Americans, and the 23 crewmen of a Japanese fishing boat, the Daigo Fukuryū Maru (Lucky Dragon No. 5). Meanwhile, the Soviet Union tested Joe 4, a boosted fission weapon with a yield of 400 kilotonnes of TNT (1,700 TJ) on 12 August 1953. This was followed by Joe 19, a true two-stage thermonuclear weapon, on 22 November 1955. Although the British Hurricane device was more advanced than the American Fat Man bombs of 1946, Britain was still several years behind in nuclear weapons technology, and while British and Soviet advances had taken much of the heat out of American opposition to renewed cooperation with the British, the United States Congress saw little benefit in it for the United States. The McMahon Act was amended by the Atomic Energy Act of 1954 on 30 August, which allowed for greater exchange of information with foreign nations, but it fell far short of what the British government wanted. Churchill, who had replaced Attlee as prime minister, turned to Lord Cherwell for advice on the prospect of producing a British hydrogen bomb. Cherwell reported that "We think we know how to make an H-bomb", but Penney did not agree with this sanguine assessment. A New Weapons Committee was established at Aldermaston on 15 October 1951 to examine improvements to their atomic bombs. John Corner, the head of the theoretical group at Aldermaston, suggested producing a device in the "megaton range"—one with a yield of 500 kilotonnes of TNT (2,100 TJ) or more. In this he was thinking not of a thermonuclear weapon, but of a large fission one. The idea was not pursued at that time, because the RAF wanted more, not bigger, atomic bombs.
Meeting in Bermuda in December 1953 with Dwight D. Eisenhower, who had replaced Truman as president earlier that year, Churchill told him that the RAF had reckoned that fission bombs would be sufficient for most targets, and therefore that Britain had no intention of developing hydrogen bombs. On 12 and 19 March 1954, Penney briefed the Gen 475 Committee meetings, attended by the Chiefs of Staff, senior officials from the Ministry of Defence and Foreign Office, and Sir Edwin Plowden, about recent developments in thermonuclear weapons. Sir Frederick Brundrett, the chairman of the Chiefs of Staff's Working Party on the Operational Use of Atomic Weapons (OAW), then asked Penney on 25 May for a working paper for an OAW meeting on 31 May. In turn, OAW sent a report to the Chiefs of Staff, who recommended that the United Kingdom develop its own thermonuclear weapons. Admiral of the Fleet Sir Rhoderick McGrigor, the First Sea Lord, recalled that: > The United Kingdom, as the recognised leader of the Commonwealth, and as a leading world power, had a position to maintain in world affairs. If our influence were to decline it would be virtually impossible to regain our rightful place as a world power. It was essential that the United Kingdom should have the ability to produce the H-Bomb in order that she could claim membership of the Allied H-Club. Thus, it was hoped that the development of thermonuclear weapons would shore up Britain's great power status and restore the special relationship with the United States, which would give the UK a prospect of influencing American defence policy. There was a third political consideration: the Lucky Dragon incident had touched off a storm of protest, and there were calls from trade unions and the Labour Party for a moratorium on nuclear testing, resulting in an acrimonious debate in the House of Commons on 5 April 1954 in which Churchill blamed Attlee for the McMahon Act. The new Eisenhower administration in the United States looked favourably on the idea of a moratorium, and the Foreign Secretary, Anthony Eden, was sounded out about it by the U.S. Secretary of State, John Foster Dulles. The United States had now finished its Operation Castle series of tests, and such a moratorium would restrict further nuclear weapons development by the Soviet Union; but it would also lock the United Kingdom into a permanent state of inferiority. The Defence Policy Committee, chaired by the Prime Minister and consisting of the senior Cabinet members, considered the political and strategic implications on 1 June, and concluded that "we must maintain and strengthen our position as a world power so that Her Majesty's Government can exercise a powerful influence in the counsels of the world." Churchill informed Cabinet of the decision on 7 July 1954, and they were not happy about not being consulted, particularly the Lord Privy Seal, Harry Crookshank. Cabinet debated the matter that day and the next, before postponing a final decision. On 27 July 1954, the Lord President of the Council, the Marquess of Salisbury, raised the matter, although it was not on the agenda, stressing the need for a decision. This time Cabinet agreed to proceed with the development of thermonuclear weapons. ## Organisation Churchill's return to the prime ministership meant Lord Cherwell's return to the post of Paymaster General. 
He was a strong supporter of the atomic energy programme, but while he agreed with its size and scope, he was critical of its organisation, which he blamed for slower progress than its Soviet counterpart. In particular, the programme had experienced problems with Civil Service pay and conditions, which were below those for comparable workers in industry. The Treasury had agreed to flexibility in exceptional cases, but the procedure was absurdly slow. Hinton in particular was concerned at the low remuneration his senior staff were receiving compared to those with similar responsibilities at ICI. When he attempted to bring Frank Kearton in as his successor, the Treasury refused to adjust the salaries of his other two deputies to match. Rather than ruin his organisation's morale, Hinton had dropped the proposal to appoint Kearton. Nor could any reorganisation be carried out without Treasury approval. Within a month of assuming office, Cherwell had prepared a memorandum proposing that responsibility for the program be transferred from the Ministry of Supply to an Atomic Energy Commission. Cherwell managed to persuade Churchill to propose to Cabinet that a small committee be established to examine the matter. Cabinet agreed at a meeting in November 1952, and the committee was created, chaired by Crookshank. Cabinet accepted its recommendations in April 1953, and another committee was established under Anderson (now Lord Waverley) to make recommendations on the implementation of the new organisation and its structure. The Atomic Energy Authority Act 1954 created the United Kingdom Atomic Energy Authority (UKAEA) on 19 July 1954. Plowden became its first chairman. His fellow board members were Hinton, who was in charge of the Industrial Group at Risley; Cockcroft, who headed the Research Group at Harwell; and Penney, who led the Weapons Group at Aldermaston. The UKAEA initially reported to Salisbury in his capacity as Lord President of the Council; later in the decade the UKAEA would report directly to the Prime Minister. Over 20,000 staff transferred to the UKAEA; by the end of the decade, their numbers had grown to nearly 41,000. Like Hinton, Penney had difficulty recruiting and retaining the highly skilled staff he needed. In particular, he wanted a deputy with a strong scientific background. An approach to Vivian Bowden failed. After Penney repeatedly asked for William Cook, Salisbury managed to persuade McGrigor to release Cook from the Admiralty to be Penney's deputy. Cook commenced work at Aldermaston on 1 September 1954. Henry Hulme joined in 1954. He was too senior to be placed in Corner's theoretical physics division, so he became an assistant to Penney, with special responsibility for the hydrogen bomb programme. Samuel Curran, who had worked on the Manhattan Project in Berkeley, became head of the radiation measurements division. The physicist John Ward was also recruited at this time. ## Development British knowledge of thermonuclear weapons was based on work done at the Los Alamos Laboratory during the war. Two British scientists, Bretscher and Fuchs, had attended the conference there on the Super (as it was then called) in April 1946, and Chadwick had written a secret report on it in May 1946. The Classic Super design was unsuccessful. Fuchs and John von Neumann had produced an ingenious alternative design, for which they filed a patent in May 1946. This was tested in the American Operation Greenhouse George test in May 1951, but was also found to be unworkable. 
There was also some intelligence about Joe 4 derived from its debris, which was provided to Britain under the 1948 Modus Vivendi. Penney established three megaton bomb projects at Aldermaston: Orange Herald, under Bryan Taylor, a large boosted fission weapon; Green Bamboo, an interim thermonuclear design similar to the Soviet Layer Cake used in Joe 4 and the American Alarm Clock; and Green Granite, a true thermonuclear design. Orange Herald would be the first British weapon to incorporate an external neutron initiator. For Green Granite, Penney proposed a design based on radiation implosion and staging. There would be three stages, which he called Tom, Dick and Harry. Tom, the primary stage, would be a fission bomb. It would produce radiation to implode a secondary, Dick, another fission device. In turn, it would implode Harry, a thermonuclear tertiary. Henceforth, the British designers would refer to Tom, Dick and Harry rather than primary, secondary and tertiary. They still had only vague ideas about how a thermonuclear weapon would work, and whether one, two, or three stages would be required. Nor was there much more certainty about the boosted designs, with no agreement on whether the boosting thermonuclear fuel was best placed inside the hollow core, as in Orange Herald, or wrapped around it, as in Green Bamboo. Keith Roberts and John Ward studied the detonation waves in a thermonuclear detonation, but there was an incomplete understanding of radiation implosion. Additional information came from the study of Joe 19. It was found that there was a large amount of residual uranium-233. The Soviet scientists had used this isotope so they could distinguish the behaviour of uranium in different parts of the system. This clarified that it was a two-stage device. Hulme prepared a paper in January 1956. At this point there were still three stages, Dick being a fission device and Harry the thermonuclear component. Two weeks later, Ken Allen produced a paper in which he described the mechanism of thermonuclear burning. He suggested that the Americans had compressed lithium-6 and uranium around a fissile core. In April 1956, the recognisable ancestor of the later devices appeared. There were now only two stages: Tom, the fission primary; and Dick, which was now also a set of concentric spheres, with uranium-235 and lithium-6 deuteride shells. A spherical Dick was chosen in preference to a cylindrical one for ease of calculation; work on a cylindrical Dick was postponed until a new IBM 704 computer arrived from the United States. ## Testing ### Preparations Implicit in the creation of a hydrogen bomb was that it would be tested. Eden, who replaced Churchill as prime minister after the latter's retirement, gave a radio broadcast in which he declared: "You cannot prove a bomb until it has exploded. Nobody can know whether it is effective or not until it has been tested." Testing of boosted designs was carried out by Operation Mosaic in the Monte Bello Islands in Western Australia in May and June 1956. This was a sensitive matter; there was an agreement with Australia that no thermonuclear testing would be carried out there. The Australian Minister for Supply Howard Beale, responding to rumours reported in the newspapers, asserted that "the Federal Government has no intention of allowing any, hydrogen bomb tests to take place in Australia. Nor has it any intention of allowing any experiments connected with hydrogen bomb tests to take place here." 
Since the tests were connected with hydrogen bomb development, this prompted Eden to cable the Prime Minister of Australia, Robert Menzies, detailing the nature and purpose of the tests. He promised that the yield of the second, larger test would not be more than two and a half times that of the Operation Hurricane test, which was 25 kilotonnes of TNT (100 TJ). Menzies cabled his approval of the tests on 20 June 1955. The yield of the second test turned out to be 60 kilotonnes of TNT (250 TJ), which was larger than the limit of 50 kilotonnes of TNT (210 TJ) for tests in Australia. Another test site was therefore required. For safety and security reasons, in light of the Lucky Dragon incident, a large site remote from population centres was required. Various remote islands in the South Pacific and Southern Oceans were considered, along with Antarctica. The Admiralty suggested the Antipodes Islands, which are about 860 kilometres (530 mi) southeast of New Zealand. In May 1955, the Minister for Defence, Selwyn Lloyd, concluded that the Kermadec Islands, which lie about 1,000 kilometres (620 mi) northeast of New Zealand, would be suitable. They were part of New Zealand, so Eden wrote to the Prime Minister of New Zealand, Sidney Holland, to ask for permission to use the islands. Holland refused, fearing an adverse public reaction in forthcoming elections. Despite reassurances and pressure from the British government, Holland remained firm. The search for a location continued, with Malden Island and McKean Island being considered. The former became the frontrunner. Three Avro Shackletons from No. 240 Squadron RAF were sent to conduct an aerial reconnaissance and Holland agreed to send the survey ship to conduct a maritime survey. The test series was given the secret codename "Operation Grapple". Air Commodore Wilfrid Oulton was appointed task force commander, with the acting rank of air vice marshal from 1 March 1956. He had a formidable task ahead of him. Nearby Christmas Island was chosen as a base. It was claimed by both Britain and the United States, but the Americans were willing to let the British use it for the tests. With pressure mounting at home and abroad for a moratorium on testing, 1 April 1957 was set as the target date. Oulton held the first meeting of the Grapple Executive Committee on New Oxford Street in London on 21 February 1956. The RAF and Royal Engineers would improve the airfield to enable it to operate large, heavily loaded aircraft, and the port and facilities would be improved to enable Christmas Island to operate as a base by 1 December 1956. It was estimated that 18,640 measurement tons (21,110 m<sup>3</sup>) of stores would be required for the construction effort alone. The tank landing ship HMS Narvik would reprise the role of control ship it had for Operation Hurricane; but as it was also required for Operation Mosaic, it had very little time to return to the Chatham Dockyard for a refit before heading out to Christmas Island for Operation Grapple. Having decided on a location and date, there still remained the matter of what would be tested. John Challens, whose weapons electronics group would have to produce the assembly, wanted to know the configuration of Green Granite. Cook ruled that it would use a Red Beard Tom, and would fit inside a Blue Danube casing for dropping. The design was frozen in April 1956. There were two versions of Orange Herald, large and small. They had similar cores, but the large version contained more explosive. 
The designs were frozen in July. Green Bamboo was also nominally frozen, but tinkering with the design continued. On 3 September, Corner suggested that Green Granite could be made smaller by moving the Tom and Dick closer together. This design became known as Short Granite. By January 1957, with the tests just months away, a tentative schedule had emerged. Short Granite would be fired first. Green Bamboo would follow if Short Granite was unsuccessful, but be omitted as unnecessary otherwise. Orange Herald (small) would be fired next. Because Short Granite was too large to fit into a missile or guided bomb, this would occur whether or not Short Granite was a success. Finally, Green Granite would be tested. In December 1956, Cook had proposed another design, known as Green Granite II. This was smaller than Green Granite I, and could fit into a Yellow Sun casing that could be used by the Blue Steel guided missile then under development; but it could not be made ready to reach Christmas Island before 26 June 1957, and extending Operation Grapple would have cost another £1.5 million. ### First series The first test of the series was Grapple 1, of Short Granite. This bomb was dropped from a height of 14,000 metres (45,000 ft) by a Vickers Valiant bomber of No. 49 Squadron RAF piloted by Wing Commander Kenneth Hubbard, off the shore of Malden Island at 11:38 local time on 15 May 1957. It was Britain's second airdrop of a nuclear bomb after the Operation Buffalo test at Maralinga on 11 October 1956, and the first of a thermonuclear weapon. The United States had not attempted an airdrop of a hydrogen bomb until the Operation Redwing Cherokee test on 21 May 1956. Their bomb had landed 6.4 kilometres (4 mi) from the target; Hubbard missed by just 382 metres (418 yd). The Short Granite's yield was estimated at 300 kilotonnes of TNT (1,300 TJ), far below its designed capability. Penney cancelled the Green Granite test and substituted a new weapon codenamed Purple Granite. This was identical to Short Granite, but with some minor modification to it; additional uranium-235 was added, and the outer layer was replaced with aluminium. Despite its failure, the test was described as a successful thermonuclear explosion, and the government did not confirm or deny reports that the UK had become a third thermonuclear power. When documents on the series were declassified in the 1990s, the tests were denounced as a hoax, but the reports were unlikely to have fooled the American observers. The next test was Grapple 2, of Orange Herald (small). This bomb was dropped at 10:44 local time on 31 May by another 49 Squadron Valiant, piloted by Squadron Leader Dave Roberts. It exploded with a force of 720 to 800 kilotonnes of TNT (3,000 to 3,300 TJ). The yield was the largest ever achieved by a single stage device, and made it technically a megaton weapon, but it was close to Corner's estimate for an unboosted yield, and Hulme doubted that the lithium-6 deuteride had contributed at all. This was chalked up to Taylor instability, which limited the compression of the light elements in the core. The bomb was hailed as a hydrogen bomb, and the truth that it was actually a large fission bomb was kept secret by the British government until the end of the Cold War. An Operational Requirement (OR1142) had been issued in 1955 for a thermonuclear warhead for a medium-range ballistic missile, which became Blue Streak. This was revised in November 1955, with "megaton" replacing "thermonuclear". 
Orange Herald (small) could then meet the requirement. A version was created as an interim megaton weapon in order to provide the RAF with one at the earliest possible date. Codenamed Green Grass, the unsuccessful fusion boosting was omitted, and it used Green Bamboo's 72-lens implosion system instead of Orange Herald's 32. This allowed the amount of highly enriched uranium to be reduced from 120 kilograms (260 lb) to 75 kilograms (165 lb). Its yield was estimated at 0.5 megatonnes of TNT (2.1 PJ). It was placed in a Blue Danube casing, and this bomb became known as Violet Club. About ten were delivered before Yellow Sun became available. The third and final shot of the series was Grapple 3, the test of Purple Granite. This was dropped by a Valiant piloted by Squadron Leader Arthur Steele on 19 June. The yield was a very disappointing 300 kilotonnes of TNT (1,300 TJ), even less than Short Granite. The changes had not worked. "We haven't got it right", Cook told a flabbergasted Oulton. "We shall have to do it all again, providing we can do so before the ban comes into force; so that means as soon as possible." ### Second series A re-think was required. Cook had the unenviable task of explaining the failure to the government. Henceforth, he would take a tighter grip on the hydrogen bomb programme, gradually superseding Penney. The scientists and politicians considered abandoning Green Granite. The Minister of Defence, Duncan Sandys, queried Cook on the imperative to persist with thermonuclear designs, given that Orange Herald satisfied most military requirements, and the tests were very expensive. Cook replied that megaton-range fission bombs represented an uneconomical use of expensive fissile material, that they could not be built to produce yields of more than a megaton, and that they could not be made small enough to be carried by aircraft smaller than the V-bombers, or on missiles. Sandys was not convinced, but he authorised further tests, as did the Prime Minister, now Harold Macmillan following Eden's resignation in the wake of the Suez crisis. The earliest possible date was November 1957 unless the Operation Antler tests were cancelled, but the Foreign Office warned that a moratorium on nuclear testing might come into effect in late October. The scientists at Aldermaston had created a design incorporating staging, radiation implosion, and compression, but they had not mastered the design of thermonuclear weapons. Knowing that much of the yield of American and Soviet bombs came from fission in the uranium-238 tamper, they had focused on what they called the "lithium-uranium cycle", whereby neutrons from the fission of uranium would trigger fusion, which would produce more neutrons to induce fission in the tamper. However, this is not the most important reaction. Corner and his theoretical physicists at Aldermaston argued that Green Granite could be made to work by increasing compression and reducing Taylor instability. The first step would be achieved with an improved Tom. The Red Beard Tom was given an improved high explosive supercharge, a composite (uranium-235 and plutonium) core, and a beryllium tamper, thereby increasing its yield to 45 kilotonnes of TNT (190 TJ). The Dick was greatly simplified; instead of the 14 layers in Short Granite, it would have just three. This was called Round A; a five-layer version was also discussed, which was called Round B. A third round, Round C, was produced, for diagnostics. 
It had the same three layers as Round A, but an inert layer instead of lithium deuteride. Calculations for Round B were performed on the new IBM 704, while the old Ferranti Mark 1 was used for the simpler Round A. The next trial was known as Grapple X. To save time and money, and as Narvik and the light aircraft carrier HMS Warrior were unavailable, the bomb would be dropped off the southern tip of Christmas Island rather than off Malden Island, just 20 nautical miles (37 km; 23 mi) from the airfield where 3,000 men would be based. This required a major construction effort to improve the facilities on Christmas Island, and those that had been constructed on Malden Island had to now be duplicated on Christmas Island. Works included 26 blast-proof shelters, a control room, and tented accommodation. Components of Rounds A and C were delivered to Christmas Island on 24, 27 and 29 October. Round B would not be available; to get the calculations for Round A completed, the IBM 704 had to be turned over to them, and there was no possibility of completing the Round B calculations on the Ferranti. On inspection, a fault was found in the Round A Tom, and the fissile core was replaced with the one from Round C. Round A was dropped by a Valiant bomber piloted by Squadron Leader Barney Millett at 08:47 on 8 November 1957. This time the yield of 1.8 megatonnes of TNT (7.5 PJ) exceeded expectations; the predicted yield had only been 1 megatonne of TNT (4.2 PJ). But it was still below the 2 megatonnes of TNT (8.4 PJ) safety limit. This was the real hydrogen bomb Britain wanted, but it used a relatively large quantity of expensive highly enriched uranium. Due to the higher-than-expected yield of the explosion, there was some damage to buildings, the fuel storage tanks, and helicopters on the island. The physicists at Aldermaston had plenty of ideas about how to follow up Grapple X. Possibilities were discussed in September 1957. One was to tinker with the width of the shells in the Dick to find an optimal configuration. If they were too thick, they would slow the neutrons generated by the fusion reaction; if they were too thin, they would give rise to Taylor instability. Another was to do away with the shells entirely and use a mixture of uranium-235, uranium-238 and deuterium. Ken Allen had an idea, which Sam Curran supported, of a three-layer Dick that used lithium deuteride that was less enriched in lithium-6 (and therefore had more lithium-7), but more of it, reducing the amount of uranium-235 in the centre of the core. This proposal was the one adopted in October, and it became known as "Dickens" because it used Ken's Dick. The device would otherwise be similar to Round A, but with a larger radiation case. The safety limit was again set to 2 megatonnes of TNT (8.4 PJ). Keith Roberts calculated that the yield could reach 3 megatonnes of TNT (13 PJ), and suggested that this could be reduced by modifying the tamper, but Cook opposed this, fearing that it might cause the test to fail. Because of the possibility of a moratorium on testing, plans for the test, codenamed Grapple Y, were restricted to the Prime Minister, who gave verbal approval, and a handful of officials. Air Vice Marshal John Grandy succeeded Oulton as Task Force commander. The bomb was dropped off Christmas at 10:05 local time on 28 April 1958 by a Valiant piloted by Squadron Leader Bob Bates. It had an explosive yield of about 3 megatonnes of TNT (13 PJ), and remains the largest British nuclear weapon ever tested. 
The design of Grapple Y was successful because much of its yield came from its thermonuclear reaction instead of fission of a heavy uranium-238 tamper, making it a true hydrogen bomb, and because its yield had been closely predicted—indicating that its designers understood what they were doing. On 22 August 1958, Eisenhower announced a moratorium on nuclear testing, effective 31 October 1958. This did not mean an immediate end to testing; on the contrary, the United States, the Soviet Union and the United Kingdom all rushed to perform as much testing as possible before the deadline, which the Soviets did not meet, conducting tests on 1 and 3 November. A new British test series, known as Grapple Z, commenced on 22 August. It explored new technologies such as the use of external neutron initiators, which had first been tried out with Orange Herald. Core boosting using tritium gas and external boosting with layers of lithium deuteride were successfully tested in the Pennant and Burgee tests, allowing a smaller, lighter Tom for two-stage devices. The international moratorium commenced on 31 October 1958, and Britain ceased atmospheric testing for good. ## Renewed American partnership British timing was good. The Soviet Union's launch of Sputnik 1, the world's first artificial satellite, on 4 October 1957, came as a tremendous shock to the American public, who had trusted that American technological superiority ensured their invulnerability. Now, suddenly, there was incontrovertible proof that, in some areas at least, the Soviet Union was actually ahead. In the widespread calls for action in response to the Sputnik crisis, officials in the United States and Britain seized an opportunity to mend the relationship that had been damaged by the Suez Crisis. At the suggestion of Harold Caccia, the British Ambassador to the United States, Macmillan wrote to Eisenhower on 10 October urging that the two countries pool their resources to meet the challenge. To do this, the McMahon Act's restrictions on nuclear cooperation needed to be relaxed. British information security, or the lack thereof, no longer seemed so important now that the Soviet Union was apparently ahead, and the United Kingdom had independently developed the hydrogen bomb. The trenchant opposition from the Joint Committee on Atomic Energy that had derailed previous attempts was absent. Amendments to the Atomic Energy Act of 1954 passed Congress on 30 June 1958, and were signed into law by Eisenhower on 2 July 1958. The 1958 US–UK Mutual Defence Agreement was signed on 3 July, and was approved by Congress on 30 July. Macmillan called this "the Great Prize". The United States Atomic Energy Commission (AEC) invited the British government to send representatives to a series of meetings in Washington, DC, on 27 and 28 August 1958 to work out the details. The U.S. delegation included Willard Libby, AEC deputy chairman; Major General Herbert Loper, the Assistant to the Secretary of Defence for Atomic Energy Affairs; Brigadier General Alfred Starbird, AEC Director of Military Applications; Norris Bradbury, director of the Los Alamos National Laboratory; Edward Teller, director of the Lawrence Livermore Laboratory; and James W. McCrae, president of the Sandia Corporation. The British representatives were Brundrett and J.H.B. Macklen from the Ministry of Defence, and Penney, Cook and E. F. Newly from Aldermaston.
The Americans disclosed the details of nine of their nuclear weapon designs: the Mark 7, Mark 15/39, Mark 19, Mark 25, Mark 27, Mark 28, Mark 31, Mark 33 and Mark 34. In return, the British provided the details of seven of theirs, including Green Grass; Pennant, the boosted device which had been detonated in the Grapple Z test on 22 August; Flagpole, the two-stage device scheduled for 2 September; Burgee, scheduled for 23 September; and the three-stage Halliard 3. The Americans were impressed with the British designs, particularly with Halliard 1, the heavier version of Halliard 3. Cook therefore changed the Grapple Z programme to fire Halliard 1 instead of Halliard 3. Macmillan wrote to Plowden: > I had a very interesting talk with Brundrett, Penney and Cook about their discussions in Washington last week, and I have been very impressed by the results which they have achieved. It is clear that the Americans were amazed to learn how much we already know and this was a major factor in convincing them that we could be trusted with more information than they probably intended originally to give us. I hope that these discussions will be only the first of a series, in which Anglo-American cooperation in this field will become progressively closer. But if we do succeed in gradually persuading the Americans to regard the enterprise as a joint project in which we are entitled to be regarded as equal partners in terms of basic knowledge, it will be because we have got off to a flying start under the bilateral agreement; and the credit for that must go to the team of scientists and technicians who have enabled us, single-handed, to keep virtually abreast of the United States in this complex and intricate business of nuclear weapons development. It is a tremendous achievement, of which they have every right to be very proud. The Anglo-American Special Relationship proved mutually beneficial, although it was never one of equals; the United States was far larger than Britain both militarily and economically. Britain soon became dependent on the United States for its nuclear weapons, as it lacked the resources to produce a range of designs. The British decided to adapt the Mark 28 as a British weapon, which became Red Snow, as a cheaper alternative to developing their own design. Other weapons were supplied through Project E, under which weapons in American custody were supplied for the use of the RAF and British Army. Nuclear material was also acquired from the United States. Under the Mutual Defence Agreement, 5.4 tonnes of UK-produced plutonium was sent to the US in return for 6.7 kilograms (15 lb) of tritium and 7.5 tonnes of highly enriched uranium between 1960 and 1979, replacing Capenhurst production, although much of the highly enriched uranium was used not for weapons, but as fuel for the growing UK fleet of nuclear submarines. The British ultimately acquired entire weapons systems, with the UK Polaris programme and Trident nuclear programme using American missiles with British nuclear warheads.
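The yields in the article above are quoted both in kilotonnes or megatonnes of TNT equivalent and in SI units (terajoules or petajoules). As an illustrative aside that is not part of the source article, the bracketed SI figures follow from the standard convention of 4.184 terajoules per kilotonne of TNT; the short, hypothetical Python sketch below reproduces a few of them.

```python
# Illustrative sketch only: the article's TNT-equivalent yields converted to SI
# units using the standard convention 1 kilotonne of TNT = 4.184 terajoules.
TJ_PER_KILOTONNE = 4.184

def tnt_to_terajoules(kilotonnes: float) -> float:
    """Convert a yield in kilotonnes of TNT to terajoules."""
    return kilotonnes * TJ_PER_KILOTONNE

# A few yields quoted in the article, with the SI figures given there
# (the article rounds to two significant figures).
quoted = [
    ("Operation Hurricane", 25, "100 TJ"),
    ("Grapple X Round A", 1_800, "7.5 PJ"),
    ("Grapple Y", 3_000, "13 PJ"),
    ("Castle Bravo", 15_000, "63 PJ"),
]

for name, kilotonnes, article_figure in quoted:
    terajoules = tnt_to_terajoules(kilotonnes)
    si = f"{terajoules / 1_000:.1f} PJ" if terajoules >= 1_000 else f"{terajoules:.0f} TJ"
    print(f"{name}: {si} (article: {article_figure})")
```

The small discrepancies (for example, 105 TJ against the quoted 100 TJ for Hurricane) are only the article's rounding.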
pageid: 1,721,958
title: Banded stilt
revid: 1,170,340,462
description: Species of Australian bird in the family Recurvirostridae
categories: [ "Birds described in 1816", "Birds of South Australia", "Birds of Western Australia", "Endemic birds of Australia", "Recurvirostridae", "Taxa named by Louis Jean Pierre Vieillot" ]
The banded stilt (Cladorhynchus leucocephalus) is a nomadic wader of the stilt and avocet family, Recurvirostridae, native to Australia. It belongs to the monotypic genus Cladorhynchus. It gets its name from the red-brown breast band found on breeding adults, though this is mottled or entirely absent in non-breeding adults and juveniles. Its remaining plumage is pied and the eyes are dark brown. Nestling banded stilts have white down, unlike any other species of wader. Breeding is triggered by the filling of inland salt lakes by rainfall, creating large shallow lakes rich in tiny shrimp on which the birds feed. Banded stilts migrate to these lakes in large numbers and assemble in large breeding colonies. The female lays three to four brown- or black-splotched whitish eggs on a scrape. If conditions are favourable, a second brood might be laid, though if the lakes dry up prematurely the breeding colonies may be abandoned. The banded stilt is considered to be a species of least concern by the International Union for Conservation of Nature (IUCN). Under the South Australian National Parks and Wildlife Act 1972, however, this bird is considered to be Vulnerable. This is due to the predation of it by silver gulls, which are considered to be a serious threat. Black falcons and wedge-tailed eagles are also predators, taking the banded stilt and its young. ## Taxonomy French ornithologist Louis Jean Pierre Vieillot described the banded stilt in 1816, classifying it in the avocet genus Recurvirostra and giving it the name Recurvirostra leucocephala, "L'avocette a tete blanche" ("white-headed avocet"). He only recorded the species as being found in terres australes, the meaning of which is unclear. Amateur ornithologist Gregory Mathews interpreted this as Victoria, while Erwin Stresemann concluded this was Rottnest Island in Western Australia. The species name is derived from the Ancient Greek words leukos "white", and kephale "head". French naturalist Georges Cuvier described it as Recurvirostra orientalis the same year. Belgian ornithologist Bernard du Bus de Gisignies described it as a new genus and species, Leptorhynchus pectoralis, to the Royal Academy of Belgium in 1835. English zoologist George Robert Gray placed the banded stilt in its own genus Cladorhynchus in 1840, noting that the name Leptorhynchus had been previously used. The genus name is from the Ancient Greek klados "twig" and rhynchos "bill". Likewise, German naturalist Johannes Gistel proposed the name Timeta to replace Leptorhynchus in 1848. John Gould had described the banded stilt as Himantopus palmatus in 1837, but recorded it as Cladorhynchus pectoralis in his 1865 work Handbook to the Birds of Australia. Gould also wrote that its distribution was unclear after it was first recorded at Rottnest Island though not elsewhere in Western Australia, and later in South Australia, until large numbers were seen by the British explorer Charles Sturt at Lepson's Lake north of Cooper Creek in what is now western Queensland. German naturalist Ludwig Reichenbach placed it in a new genus, naming it Xiphidiorhynchus pectoralis in 1845. Australian ornithologist Fred Lawson gave it the name Cladorhynchus australis in 1904. Gregory Mathews in his 1913 List of the Birds of Australia synonymised all subsequent genus and species names, using Cladorhynchus australis. He listed his subspecies rottnesti from 1913, though this has not been recognised since. Both Joseph G. Strauch in a 1978 study and Philip C. 
Chu in a 1995 re-analysis of bone and muscle characters found that the banded stilt was sister taxon to the avocets, with the stilts of the genus Himantopus an earlier offshoot. A 2004 study combining genetics and morphology reinforced its position as sister to the avocet lineage. English naturalist John Latham gave the bird the name "oriental avocet" in 1824, after Cuvier's description. "Banded stilt" has been designated the official name by the International Ornithological Committee (IOC). Other common names include "Rottnest snipe" and "bishop snipe". The Ngarrindjeri people of the Lower Murray River region in South Australia knew it as nilkani. ## Description The banded stilt is 45–53 cm (18–21 in) long and weighs 220–260 g (7.8–9.2 oz), with a wingspan of 55–68 cm (22–27 in). Adults in breeding plumage are predominantly white with black wings and a broad well-demarcated u-shaped chestnut band across the breast. The central part of the base of the upper tail is tinted a pale grey-brown. The slender bill is black, relatively straight, and twice as long as the head. The irises are dark brown and the legs and feet are a dark red-pink. The wings are long and slim and have eleven primary flight feathers, with the tenth being the longest. In flight, the wings are mostly black when seen from above, but have a white trailing edge from the tips of the inner primaries. From underneath, the wings are predominantly white with dark tips. White feathers on the head and neck have pale grey bases, which are normally hidden. Non-breeding plumage is similar, but the chest band is less distinct and often diluted to an ashy brown or mottled with white. The legs are a paler- or orange-pink. There is no difference in plumage between the sexes, nor has any geographic variation been recorded. Juvenile birds resemble adults but have a greyish forehead and lores, duller black wings, and lack the characteristic breast band. Adult plumage is attained in the second year. Their legs and feet are grey, becoming more blotched with pink until adulthood. Nestling banded stilts are covered in white down. A distinctive bird, the banded stilt is hard to confuse with any other species; the white-headed stilt lacks the breast band, and the red-necked avocet has a chestnut head and neck, and a distinctly upcurved bill. Adults make a barking call that has been written as cow or chowk, sometimes with two syllables as chowk-uk or chuk-uk. Birds also chatter softly and tunefully while nesting. ## Distribution and habitat The banded stilt is generally found in southern Australia. In Western Australia, it is found predominantly in the southwestern corner, though can be as far north as the saltworks in Port Hedland. Breeding took place at Lake Ballard in the Goldfields-Esperance after heavy rainfall from Cyclone Bobby in 1995, and then again after flooding in 2014. In 1933 a large colony had been recorded at Lake Grace, but had succumbed to attack presumably by foxes. The banded stilt has been recorded in southeastern South Australia, as well as the drainage of the Lake Eyre system, and in Victoria west of Port Phillip and the Wimmera. In July 2010 Lake Torrens filled with water, resulting in the influx of around 150,000 banded stilts. The Natimuk-Douglas Wetlands in western Victoria are an important nesting ground for the species, though lower numbers come here if there is flooding elsewhere in southeastern Australia. 
In New South Wales, it is most commonly found in the Riverina and western parts of the state, and has reached southern Queensland and the Northern Territory, where it has been found at the sewage ponds at Alice Springs and Erldunda. It has been recorded as a vagrant to Tasmania, with significant numbers recorded in 1981. The preferred habitats are large, shallow saline or hypersaline lakes, either inland or near the coast, including ephemeral salt lakes, salt works, lagoons, salt- or claypans and intertidal flats. The species is occasionally found in brackish or fresh water, including farm dams and sewage ponds. The banded stilt is highly nomadic, having adapted to the unpredictable climate of Australia's arid interior. Sudden rainfall results in the influx of water to and filling of dry inland salt lakes. The stilts respond by travelling to these areas and breeding, dispersing and returning to the coast once the lakes begin to dry up. How banded stilts on the coast become aware of inland rainfall is unknown. The distances travelled can be large; two birds have been tracked travelling from a drying Lake Eyre in South Australia to a newly filled lake system in Western Australia over 1,500 km (930 mi) away. One of these birds veered northwest over the Gibson Desert, travelling a minimum of 2,263 km (1,406 mi) in 55.9 hours. ## Behaviour The banded stilt is gregarious; birds are almost always encountered in groups, from small troops of tens of birds, to huge flocks numbering in the tens of thousands. ### Breeding The breeding habits of the banded stilt were unknown until 1930, when a colony was discovered at Lake Grace. Even then they remained poorly known until 2012, when researcher Reece Pedler and colleagues attached tracking devices to 21 birds to gain an insight into the species' movements. They discovered that the birds travel large distances inland and gather at the recently filled bodies of water. The majority of observed breeding events have occurred at inland salt lakes in South Australia and Western Australia immediately following freshwater inflows. An exception to this exists where some breeding was attempted at The Coorong during a time in which salinity in the Lower Lakes was significantly elevated due to reduced environmental flows down the Murray River. Breeding events are initiated by the filling of shallow inland lakes after rainfall and resultant explosion in numbers of food animals such as Parartemia brine shrimp. This can happen at any time of year. Breeding sites are generally on low islands, of 1–1.5 m (3–5 ft) elevation, or spits on or alongside large lakes, on clay or gravel and generally with sparse or no vegetation. The nests themselves are scrapes in the soil, up to 15 cm (5.9 in) across and 3 cm (1 in) deep, with or without some dead vegetation as lining. Birds on stony soils generally gather vegetation instead of digging scrapes. Egg clutches number three to four (or rarely five) oval eggs, which vary from fawn to white marked with brown to black splotches. They can be 49–58 mm (1.9–2.3 in) long and 35–48 mm (1.4–1.9 in) wide. Incubation takes 19 to 21 days, with both sexes sharing duties, although the male takes over as sole incubator as the eggs hatch and immediately afterwards. This is thought to allow the females to lay and incubate a second brood if the water and food in the lake persists. Parent birds incubate for one to six days before swapping with the other parent. 
These intervals are much longer than those of other waders, and are thought to be due to the remoteness of food—either the prey are blown to remote corners of the lakes by the wind or the lakes themselves have receded as they have dried. Parents almost always change over incubation duties at night, generally within two hours of nightfall, presumably to avoid predators. On hot days with temperatures over 40 °C (104 °F), incubating birds may leave briefly to wet their brood patches, presumably to cool the eggs or young. Birds on nests always face into the wind. The nestlings are born covered in white down—unlike any other waders—and are mobile with open eyes (precocial), leaving the nest soon after hatching (nidifugous). Adults lead the young birds to the water by the time they are two days old. Once in the water, they begin to feed on tiny crustaceans. ### Nest predators and hazards Banded stilt colonies suffer greatly from predation by silver gulls (Chroicocephalus novaehollandiae), while wedge-tailed eagles (Aquila audax), white-bellied sea eagles (Haliaeetus leucogaster) and black falcons (Falco subniger) also take stilts and young. Premature drying of the lakes leads to parents abandoning their eggs or nestlings, resulting in the deaths of many thousands of young. ### Feeding The banded stilt forages by walking or swimming in shallow water, pecking, probing or scything into the water or mud. The bulk of its diet is made up of tiny crustaceans, including branchiopods, ostracods (seed shrimp), anostraca (fairy shrimp) such as Artemia salina and members of the genus Parartemia, both genera of notostraca (tadpole shrimp), and isopods such as the genera Deto and Haloniscus. They also eat molluscs, including both gastropods such as the land snail Salinator fragilis and members of the genus Coxiella, and bivalves of the genus Sphaericum, insects (such as bugs, beetles, flies and flying ants, which they glean from the water surface), and plants such as Ruppia. Small fish such as hardyheads (Craterocephalus spp.) have also reportedly been eaten. ## Status and conservation In 2016, the banded stilt was rated as least concern on the IUCN Red List of Threatened Species. This was on the basis of its large range—greater than 20,000 km<sup>2</sup> (7700 mi<sup>2</sup>)—and fluctuating rather than declining population. However, it is listed as Vulnerable under the South Australian National Parks and Wildlife Act 1972. The listing was made after breeding attempts observed at Lake Eyre revealed heavy predation from silver gulls. The Department of Environment, Water and Natural Resources has developed a strategy for managing silver gull predation at chosen banded stilt breeding sites by applying site-specific culling measures. Breeding events observed at ephemeral lakes in Western Australia have proven to be more successful without the need for intervention due to their remoteness.
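As a small worked example, again not part of the source text, the tracked flight described in the Distribution and habitat section above—a minimum of 2,263 km covered in 55.9 hours—implies a sustained average ground speed of roughly 40 km/h.

```python
# Illustrative arithmetic only: minimum average ground speed implied by the
# tracked banded stilt flight described above (2,263 km in 55.9 hours).
distance_km = 2_263
duration_hours = 55.9

speed_kmh = distance_km / duration_hours
print(f"{speed_kmh:.1f} km/h ({speed_kmh / 1.609344:.1f} mph)")  # ~40.5 km/h (~25.2 mph)
```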
439,139
HMS Illustrious (87)
1,158,769,280
1940 Illustrious-class aircraft carrier of the Royal Navy
[ "1939 ships", "Cold War aircraft carriers of the United Kingdom", "Illustrious-class aircraft carriers", "Maritime incidents in December 1941", "Maritime incidents in January 1941", "Ships built in Barrow-in-Furness", "World War II aircraft carriers of the United Kingdom" ]
HMS Illustrious was the lead ship of her class of aircraft carriers built for the Royal Navy before World War II. Her first assignment after completion and working up was with the Mediterranean Fleet, in which her aircraft's most notable achievement was sinking one Italian battleship and badly damaging two others during the Battle of Taranto in late 1940. Two months later the carrier was crippled by German dive bombers and was repaired in the United States. After sustaining damage in a collision with her sister ship Formidable on the voyage home in late 1941, Illustrious was sent to the Indian Ocean in early 1942 to support the invasion of Vichy French Madagascar (Operation Ironclad). After returning home in early 1943, the ship was given a lengthy refit and briefly assigned to the Home Fleet. She was transferred to Force H for the Battle of Salerno in mid-1943 and then rejoined the Eastern Fleet in the Indian Ocean at the beginning of 1944. Her aircraft attacked several targets in the Japanese-occupied Dutch East Indies over the following year before Illustrious was transferred to the newly formed British Pacific Fleet (BPF). The carrier participated in the early stages of the Battle of Okinawa until mechanical defects arising from accumulated battle damage became so severe she was ordered home early for repairs in May 1945. The war ended while she was in the dockyard and the Admiralty decided to modify her for use as the Home Fleet's trials and training carrier. In this role she conducted the deck-landing trials for most of the British post-war naval aircraft in the early 1950s. She was occasionally used to ferry troops and aircraft to and from foreign deployments, and also participated in exercises. In 1951, she helped to transport troops to Cyprus in response to rioting in Egypt that followed that country's abrogation of the Anglo-Egyptian treaty of 1936. She was paid off in early 1955 and sold for scrap in late 1956. ## Background and description The Royal Navy's 1936 Naval Programme authorised the construction of two aircraft carriers. Admiral Sir Reginald Henderson, Third Sea Lord and Controller of the Navy, was determined not to simply modify the previous unarmoured Ark Royal design. He believed that carriers could not be successfully defended by their own aircraft without some form of early-warning system. Lacking that, there was nothing to prevent land-based aircraft from attacking them, especially in confined waters like the North Sea and Mediterranean. This meant that the ship had to be capable of remaining in action after sustaining damage and that her fragile aircraft had to be protected entirely from damage. The only way to do this was to completely armour the hangar in which the aircraft would shelter, but putting so much weight high in the ship allowed only a single-storey hangar due to stability concerns. This halved the aircraft capacity compared with the older unarmoured carriers, exchanging offensive potential for defensive survivability. Illustrious was 740 feet (225.6 m) in length overall and 710 feet (216.4 m) at the waterline. Her beam was 95 feet 9 inches (29.2 m) at the waterline and she had a draught of 28 feet 10 inches (8.8 m) at deep load. She displaced 23,000 long tons (23,369 t) at standard load as completed. Her complement was approximately 1,299 officers and enlisted men upon completion in 1940. By 1944, she was severely overcrowded with a total crew of 1,997. After post-war modifications to convert her into a trials carrier, her complement was reduced to 1,090 officers and enlisted men. 
The ship had three Parsons geared steam turbines, each driving one shaft, using steam supplied by six Admiralty 3-drum boilers. The turbines were designed to produce a total of 111,000 shp (83,000 kW), enough to give a maximum speed of 30 knots (56 km/h; 35 mph) at deep load. On 24 May 1940 Illustrious ran her sea trials and her engines reached 113,700 shp (84,800 kW). Her exact speeds were not recorded as she had her paravanes streamed, but it was estimated that she could have made about 31 knots (57 km/h; 36 mph) under full power. She carried a maximum of 4,850 long tons (4,930 t) of fuel oil which gave her a range of 10,700 nautical miles (19,800 km; 12,300 mi) at 10 knots (19 km/h; 12 mph) or 10,400 nmi (19,300 km; 12,000 mi) at 16 knots (30 km/h; 18 mph) or 6,300 nmi (11,700 km; 7,200 mi) at 25 knots (46 km/h; 29 mph). The 753-foot (229.5 m) armoured flight deck had a usable length of 620 feet (189.0 m), due to prominent "round-downs" at each end designed to reduce the effects of air turbulence caused by the carrier's structure on aircraft taking-off and landing, and a maximum width of 95 feet (29.0 m). A single hydraulic aircraft catapult was fitted on the forward part of the flight deck. The ship was equipped with two unarmoured lifts on the centreline, each of which measured 45 by 22 feet (13.7 by 6.7 m). The hangar was 456 feet (139.0 m) long and had a maximum width of 62 feet (18.9 m). It had a height of 16 feet (4.9 m) which allowed storage of Lend-Lease Vought F4U Corsair fighters once their wingtips were clipped. The hangar was designed to accommodate 36 aircraft, for which 50,650 imperial gallons (230,300 L; 60,830 US gal) of aviation spirit was provided. ### Armament, electronics and protection The main armament of the Illustrious class consisted of sixteen quick-firing (QF) 4.5-inch (110 mm) dual-purpose guns in eight twin-gun turrets, four in sponsons on each side of the hull. The roofs of the gun turrets protruded above the level of the flight deck to allow them to fire across the deck at high elevations. Her light antiaircraft defences included six octuple mounts for QF 2-pounder ("pom-pom") antiaircraft guns, two each fore and aft of her island, and two in sponsons on the port side of the hull. The completion of Illustrious was delayed two months to fit her with a Type 79Z early-warning radar; she was the first aircraft carrier in the world to be fitted with radar before completion. This version of the radar had separate transmitting and receiving antennas which required a new mainmast to be added to the aft end of the island to mount the transmitter. The Illustrious-class ships had a flight deck protected by 3 inches (76 mm) of armour and the internal sides and ends of the hangars were 4.5 inches (114 mm) thick. The hangar deck itself was 2.5 inches (64 mm) thick and extended the full width of the ship to meet the top of the 4.5-inch waterline armour belt. The underwater defence system was a layered system of liquid- and air-filled compartments backed by a 1.5-inch (38 mm) splinter bulkhead. ### Wartime modifications While under repair in 1941, Illustrious's rear "round-down" was flattened to increase the usable length of the flight deck to 670 feet (204.2 m). This increased her aircraft complement to 47 aircraft by use of a permanent deck park of 6 aircraft. Her light AA armament was also augmented by the addition of 10 Oerlikon 20 mm autocannon in single mounts. In addition the two steel fire curtains in the hangar were replaced by asbestos ones. 
After her return to the UK later that year, her Type 79Z radar was replaced by a Type 281 system and a Type 285 gunnery radar was mounted on one of the main fire-control directors. The additional crewmen, maintenance personnel and facilities needed to support these aircraft, weapons and sensors increased her complement to 1,326. During her 1943 refits, the flight deck was modified to extend its usable length to 740 feet (225.6 m), and "outriggers" were probably added at this time. These were 'U'-shaped beams that extended from the side of the flight deck into which aircraft tailwheels were placed. The aircraft were pushed back until the main wheels were near the edge of the flight deck to allow more aircraft to be stored on the deck. Twin Oerlikon mounts replaced most of the single mounts. Other twin mounts were added so that by May she had a total of eighteen twin and two single mounts. The Type 281 radar was replaced by an upgraded Type 281M, and a single-antenna Type 79M was added. Type 282 gunnery radars were added for each of the "pom-pom" directors, and the rest of the main directors were fitted with Type 285 radars. A Type 272 target-indicator radar was mounted above her bridge. These changes increased her aircraft capacity to 57 and caused her crew to grow to 1,831. A year later, in preparation for her service against the Japanese in the Pacific, one starboard octuple "pom-pom" mount, directly abaft the island, was replaced by two 40 mm Bofors AA guns. Two more twin Oerlikon mounts were added, and her boilers were retubed. At this time her complement was 1,997 officers and enlisted men. By 1945, accumulated wear-and-tear as well as undiagnosed shock damage to Illustrious's machinery caused severe vibrations in her centre propeller shaft at high speeds. In an effort to cure the problem, the propeller was removed, and the shaft was locked in place in February; these radical measures succeeded in reducing, but not eliminating, the vibrations and reduced the ship's speed to about 24 knots (44 km/h; 28 mph). ### Post-war modifications Illustrious had been badly damaged underwater by a bomb in April 1945, and was ordered home for repairs the following month. She began permanent repairs in June that were scheduled to last four months. The RN planned to fit her out as a flagship, remove her aft 4.5-inch guns in exchange for increased accommodation, and replace some of her Oerlikons with single two-pounder AA guns, but the end of the war in August caused the RN to reassess its needs. In September, it decided that the Illustrious would become the trials and training carrier for the Home Fleet and her repairs were changed into a lengthy refit that lasted until June 1946. Her complement was sharply reduced by her change in role and she retained her aft 4.5-inch guns. Her light AA armament now consisted of six octuple "pom-pom" mountings, eighteen single Oerlikons, and seventeen single and two twin Bofors mounts. The flight deck was extended forward, which increased her overall length to 748 feet 6 inches (228.1 m). The high-angle director atop the island was replaced with an American SM-1 fighter-direction radar, a Type 293M target-indication system was added, and the Type 281M was replaced with a prototype Type 960 early-warning radar. The sum total of the changes since her commissioning increased her full-load displacement by 2,520 long tons (2,560 t). In 1947 she carried five 8-barrel pom-poms, 17 Bofors and 16 Oerlikons. 
A five-bladed propeller was installed on her centre shaft although the increasing wear on her outer shafts later partially negated the reduction in vibration. While running trials in 1948, after another refit, she reached a maximum speed of 29 knots (54 km/h; 33 mph) from 110,680 shp (82,530 kW). Two years later, she made 29.2 knots from 111,450 shp (83,110 kW). At some point after 1948, the ship's light AA armament was reduced to two twin and nineteen single 40 mm guns and six Oerlikons. ## Construction and service Illustrious, the fourth ship of her name, was ordered as part of the 1936 Naval Programme from Vickers-Armstrongs on 13 April 1937. Construction was delayed by slow deliveries of her armour plates because the industry had been crippled by a lack of orders over the last 15 years as a result of the Washington Naval Treaty. As a consequence, her flight-deck armour had to be ordered from Vítkovice Mining and Iron Corporation in Czechoslovakia. She was laid down at their Barrow-in-Furness shipyard two weeks later as yard number 732 and launched on 5 April 1939. She was christened by Lady Henderson, wife of the recently retired Third Sea Lord. Illustrious was then towed to Buccleuch Dock for fitting out and Captain Denis Boyd was appointed to command her on 29 January 1940. She was commissioned on 16 April 1940 and, excluding her armament, she cost £2,295,000 to build. While Illustrious was being moved in preparation for her acceptance trials on 24 April, the tugboat Poolgarth capsized with the loss of three crewmen. The carrier conducted preliminary flying trials in the Firth of Clyde with six Fairey Swordfish torpedo bombers that had been craned aboard earlier. In early June, she loaded the personnel from 806, 815, and 819 Squadrons at Devonport Royal Dockyard; 806 Squadron was equipped with Blackburn Skua dive bombers and Fairey Fulmar fighters, and the latter two squadrons were equipped with Swordfish. She began working up off Plymouth, but the German conquest of France made this too risky, and Illustrious sailed for Bermuda later in the month to continue working up. This was complete by 23 July, when she arrived in the Clyde and flew off her aircraft. The ship was docked in Clydeside for a minor refit the following day; she arrived in Scapa Flow on 15 August, and became the flagship of Rear Admiral Lumley Lyster. Her squadrons flew back aboard, and she sailed for the Mediterranean on 22 August with 15 Fulmars and 18 Swordfish aboard. After refuelling in Gibraltar, Illustrious and the battleship Valiant were escorted into the Mediterranean by Force H as part of Operation Hats, during which her Fulmars shot down five Italian bombers and her AA guns shot down two more. Now escorted by the bulk of the Mediterranean Fleet, eight of her Swordfish, together with some from the carrier Eagle, attacked the Italian seaplane base at Rhodes on the morning of 3 September. A few days after the Italian invasion of Egypt, Illustrious flew off 15 Swordfish during the moonlit night of 16/17 September to attack the port of Benghazi. Aircraft from 819 Squadron laid six mines in the harbour entrance while those from 815 Squadron sank the destroyer Borea and two freighters totalling 10,192 gross register tons (GRT). The destroyer Aquilone later struck one of the mines and sank. During the return voyage to Alexandria, the Italian submarine Corallo made an unsuccessful attack on the British ships. 
While escorting a convoy to Malta on 29 September, the carrier's Fulmars broke up attacks by Italian high-level and torpedo bombers, shooting down one for the loss of one fighter. While returning from another convoy escort mission, the Swordfish of Illustrious and Eagle attacked the Italian airfield on the island of Leros on the evening of 13/14 October. ### Battle of Taranto Upon his arrival in the Mediterranean, Lyster proposed a carrier airstrike on the Italian fleet at its base in Taranto, as the Royal Navy had been planning since the Abyssinia Crisis of 1935, and Admiral Andrew Cunningham, commander of the Mediterranean Fleet, approved the idea by 22 September 1940. The attack, with both available carriers, was originally planned for 21 October, the anniversary of the Battle of Trafalgar, but a hangar fire aboard Illustrious on 18 October forced its postponement until 11 November when the next favourable phase of the moon occurred. The fire destroyed three Swordfish and heavily damaged two others, but they were replaced by aircraft from Eagle, whose contaminated fuel tanks prevented her from participating in the attack. Repairs were completed before the end of the month, and she escorted a convoy to Greece, during which her Fulmars shot down one shadowing CANT Z.506B floatplane. She sailed from Alexandria on 6 November, escorted by the battleships Warspite, Malaya, and Valiant, two light cruisers, and 13 destroyers, to provide air cover for another convoy to Malta. At this time her air group was reinforced by several of Eagle's Gloster Sea Gladiators supplementing the fighters of 806 Squadron as well as torpedo bombers from 813 and 824 Squadrons. The former aircraft were carried "...as a permanent deck park..." and they shot down a CANT Z.501 seaplane two days later. Later that day seven Savoia-Marchetti SM.79 medium bombers were intercepted by three Fulmars, which claimed to have shot down one bomber and damaged another. In reality, they heavily damaged three of the Italian aircraft. A Z.501 searching for the fleet was shot down on 10 November by a Fulmar and another on the 11th. A flight of nine SM.79s was intercepted later that day and the Fulmars claimed to have damaged one of the bombers, although it actually failed to return to base. Three additional Fulmars had been flown aboard from Ark Royal a few days earlier, when both carriers were near Malta; that brought its strength up to 15 Fulmars, 24 Swordfish, and two to four Sea Gladiators. Three Swordfish crashed shortly after take-off on 10 and 11 November, probably due to fuel contamination, and the maintenance crewmen spent all day laboriously draining all the fuel tanks and refilling them with clean petrol. This left only 21 aircraft available for the attack. Now augmented by reinforcements from the UK, the Mediterranean Fleet detached Illustrious, four cruisers, and four destroyers to a point 170 miles (270 km) south-east of Taranto. The first wave of a dozen aircraft, all that the ship could launch at one time, flew off by 20:40 and the second wave of nine by 21:34. Six aircraft in each airstrike were armed with torpedoes and the remainder with bombs or flares or both to supplement the three-quarter moon. The Royal Air Force (RAF) had positioned a Short Sunderland flying boat off the harbour to search for any movement to or from the port and this was detected at 17:55 by acoustic locators and again at 20:40, alerting the defenders. 
The noise of the on-coming first airstrike was heard at 22:25 and the anti-aircraft guns defending the port opened fire shortly afterwards, as did those on the ships in the harbour. The torpedo-carrying aircraft of the first wave scored one hit on the battleship Conte di Cavour and two on the recently completed battleship Littorio while the two flare droppers bombed the oil storage depot with little effect. The four aircraft loaded with bombs set one hangar in the seaplane base on fire and hit the destroyer Libeccio with one bomb that failed to detonate. The destroyer Fulmine, or Conte di Cavour, shot down the aircraft that put a torpedo into the latter ship, but the remaining aircraft returned to Illustrious. One torpedo-carrying aircraft of the second wave was forced to return when its long-range external fuel tank fell off, but the others hit the Littorio once more and the Duilio was hit once when they attacked beginning at 23:55. The two flare droppers also bombed the oil storage depot with minimal effect, and one bomb penetrated through the hull of the heavy cruiser Trento without detonating. One torpedo bomber was shot down, but the other aircraft returned. A follow on airstrike was planned for the next night based on the pessimistic assessments of the aircrews, but it was cancelled due to bad weather. Reconnaissance photos taken by the R.A.F. showed three battleships with their decks awash and surrounded by pools of oil. The two airstrikes had changed the balance of power in the Mediterranean by rendering the Conte di Cavour unavailable for the rest of the war, and badly damaging the Littorio and the Duilio. ### Subsequent operations in the Mediterranean While en route to Alexandria the ship's Fulmars engaged four CANT Z.506Bs, claiming three shot down and the fourth damaged, although Italian records indicate the loss of only two aircraft on 12 November. Two weeks later, 15 Swordfish attacked Italian positions on Leros, losing one Swordfish. While off Malta two days later, six of the carrier's fighters engaged an equal number of Fiat CR.42 Falco biplane fighters, shooting down one and damaging two others. One Fulmar was lightly damaged during the battle. On the night of 16/17 December, 11 Swordfish bombed Rhodes and the island of Stampalia with little effect. Four days later Illustrious's aircraft attacked two convoys near the Kerkennah Islands and sank two merchant ships totalling 7,437 GRT. On the morning of 22 December, 13 Swordfish attacked Tripoli harbour, starting fires and hitting warehouses multiple times. The ship arrived back at Alexandria two days later. On 7 January 1941, Illustrious set sail to provide air cover for convoys to Piraeus, Greece and Malta as part of Operation Excess. For this operation, her fighters were reinforced by a detachment of three Fulmars from 805 Squadron. During the morning of 10 January, her Swordfish attacked an Italian convoy without significant effect. Later that morning three of the five Fulmars on Combat Air Patrol (CAP) engaged three SM.79s at low altitude, claiming one shot down. One Fulmar was damaged and forced to return to the carrier, while the other two exhausted their ammunition and fuel during the combat and landed at Hal Far airfield on Malta. The remaining pair engaged a pair of torpedo-carrying SM.79s, damaging one badly enough that it crashed upon landing. They were low on ammunition and out of position, as they chased the Italian aircraft over 50 miles (80 km) from Illustrious. 
The carrier launched four replacements at 12:35, just when 24–36 Junkers Ju 87 Stuka dive bombers of the First Group/Dive Bomber Wing 1 (I. Gruppe/Sturzkampfgeschwader (StG) 1) and the Second Group/Dive Bomber Wing 2 (II. Gruppe/StG 2) began their attack, led by Paul-Werner Hozzel. Another pair were attempting to take off when the first 250-or-500-kilogram (550 or 1,100 lb) bomb struck just forward of the aft lift, destroying the Fulmar whose engine had failed to start and detonating high in the lift well; the other aircraft took off and engaged the Stukas as they pulled out of their dive. The ship was hit five more times in this attack; one bomb penetrated the un-armoured aft lift and detonated beneath it, destroying it and the surrounding structure. One bomb struck and destroyed the starboard forward "pom-pom" mount closest to the island, while another passed through the forwardmost port "pom-pom" mount and failed to detonate, although it did start a fire. One bomb penetrated the outer edge of the forward port flight deck and detonated about 10 feet (3.0 m) above the water, riddling the adjacent hull structure with holes which caused flooding in some compartments and starting a fire. The most damaging hit was a large bomb that penetrated through the deck armour forward of the aft lift and detonated 10 feet above the hangar deck. The explosion started a severe fire, destroyed the rear fire sprinkler system, bent the forward lift like a hoop and shredded the fire curtains into lethal splinters. It also blew a hole in the hangar deck, damaging areas three decks below. The Stukas also near-missed Illustrious with two bombs, which caused minor damage and flooding. The multiple hits at the aft end of the carrier knocked out her steering gear, although it was soon repaired. Another attack by 13 Ju 87s at 13:20 hit the ship once more in the aft lift well, which again knocked out her steering and reduced her speed to 15 knots (28 km/h; 17 mph). This attack was intercepted by six of the ship's Fulmars, which had rearmed and refuelled ashore, but not until after the Stukas had dropped their bombs; only two of the dive bombers were damaged before the Fulmars ran out of ammunition. The carrier, steering only by using her engines, was attacked several more times before she passed Grand Harbour's breakwater at 21:04, still on fire. The attacks killed 126 officers and men and wounded 91. Nine Swordfish and five Fulmars were destroyed during the attack. One additional Swordfish, piloted by Lieutenant Charles Lamb, was attempting to land when the bombs began to strike and was forced to ditch when it ran out of fuel; the crew was rescued by the destroyer Juno. The British fighters claimed to have shot down five Ju 87s, with the fleet's anti-aircraft fire claiming three others. German records show the loss of three Stukas, with another forced to make an emergency landing. While her steering was being repaired in Malta, the Illustrious was bombed again on 16 January by 17 Junkers Ju 88 medium bombers and 44 Stukas. The pilots of 806 Squadron claimed to have shot down two of the former and possibly damaged another pair, but a 500 kg bomb penetrated her flight deck aft of the rear lift and detonated in the captain's day cabin; several other bombs nearly hit the ship but only caused minor damage. Two days later, one of three Fulmars that intercepted an Axis air raid on the Maltese airfields was shot down with no survivors. 
Only one Fulmar was serviceable on 19 January, when the carrier was attacked several times, and that aircraft was shot down. Illustrious was not struck during these attacks but was near-missed several times and the resulting shock waves from their detonations dislodged enough hull plating to cause an immediate 5-degree list, cracked the cast-iron foundations of her port turbine, and damaged other machinery. The naval historian J. D. Brown noted that "There is no doubt that the armoured deck saved her from destruction; no other carrier took anything like this level of punishment and survived." Without aircraft aboard, she sailed to Alexandria on 23 January escorted by four destroyers, for temporary repairs that lasted until 10 March. Boyd was promoted to rear admiral on 18 February and relieved Lyster as Rear Admiral Aircraft Carriers. He transferred his flag to Formidable when she arrived at Alexandria on 10 March, just before Illustrious sailed for Port Said to begin her transit of the Suez Canal. The Germans had laid mines in the canal earlier. Clearing the mines and the ships sunk by them was a slow process and Illustrious did not reach Suez Bay until 20 March. The ship then sailed for Durban, South Africa, to have the extent of her underwater damage assessed in the drydock there. She reached Durban on 4 April and remained there for two weeks. The ship ultimately arrived at the Norfolk Navy Yard in the United States on 12 May for permanent repairs. Some important modifications were made to her flight deck arrangements, including the installation of a new aft lift and modification of the catapult for use by American-built aircraft. Her light antiaircraft armament was also augmented during the refit. Captain Lord Louis Mountbatten relieved her acting captain on 12 August, although he did not arrive aboard her until 28 August. He was almost immediately sent on a speaking tour to influence American public opinion, until he was recalled home and relieved by Captain A. G. Talbot on 1 October. The work was completed in November and Illustrious departed on 25 November for trials off Jamaica and to load the dozen Swordfish of 810 and 829 Squadrons. She returned to Norfolk on 9 December, to rendezvous with Formidable, which had also been repaired there, and the carriers sailed for home three days later. On the night of 15/16 December, Illustrious collided with Formidable in a moderate storm. Neither ship was seriously damaged, but Illustrious had to reduce speed to shore up sprung bulkheads in the bow and conduct temporary repairs to the forward flight deck. She arrived at Greenock on 21 December and permanent repairs were made from 30 December to late February 1942 at Cammell Laird's shipyard in Birkenhead. While working up her air group in March, reinforced by the Grumman Martlet fighters (the British name of the F4F Wildcat) of 881 and 882 Squadrons, she conducted trials of a "hooked" Supermarine Spitfire fighter, the prototype of the Seafire. ### In the Indian Ocean The conquest of British Malaya and the Dutch East Indies in early 1942 opened the door for Japanese advances into the Indian Ocean. The Vichy French-controlled island of Madagascar stood astride the line of communication between India and the UK, and the British were worried that the French would accede to a Japanese occupation of the island as they had to the Japanese occupation of French Indochina in 1940. Preventing this required a preemptive invasion of Diego Suarez scheduled for May 1942. 
Illustrious had her work up cut short on 19 March to prepare to join the Eastern Fleet in the Indian Ocean and participate in the attack. She sailed four days later, having embarked twenty-one Swordfish, nine Martlet IIs of 881 Squadron and six Martlet Is of 882 Squadron, and two Fulmar fighters prior to escorting a troop convoy carrying some of the men allocated for the assault. A hangar fire broke out on 2 April that destroyed 11 aircraft and killed one crewman, but failed to cause any serious damage to the ship. Repairs were made in Freetown, Sierra Leone, where her destroyed aircraft were replaced and augmented by twelve additional Martlet II fighters from HMS Archer, while two Martlet I aircraft were, in turn, transferred to Archer, bringing Illustrious's total aircraft complement to 47. After her stay at Freetown, Illustrious proceeded to Durban; during the voyage her staff also fitted ASV radar to the replacement Swordfish. One Martlet I was fitted with folding wings. Illustrious's aircraft were tasked to attack French naval units and shipping and to defend the invasion fleet, while her half-sister Indomitable provided air support for the ground forces. For the operation the carrier's air group numbered 25 Martlets, 1 night-fighting Fulmar and 21 Swordfish, and was consequently forced to have a permanent deck park of 5 Martlets and one Swordfish. Before dawn on 5 May, she launched 18 Swordfish together with 8 Martlets. The first flight of 6 Swordfish, carrying torpedoes, unsuccessfully attacked the aviso D'Entrecasteaux, but sank the armed merchant cruiser Bougainville. The second flight, carrying depth charges, sank the submarine Bévéziers while the third flight dropped leaflets over the defenders before attacking an artillery battery and D'Entrecasteaux. One aircraft of the third flight was forced to make an emergency landing and its crew was captured by the French. Later in the day, D'Entrecasteaux attempted to put to sea, but she was successfully bombed by an 829 Squadron Swordfish and deliberately run aground to avoid sinking. Three other Swordfish completed her destruction. The next morning, Martlets from 881 Squadron intercepted three Potez 63.11 reconnaissance bombers, shooting down two and forcing the other to retreat, while Swordfish dropped dummy parachutists as a diversion. One patrolling Swordfish sank the submarine Le Héros and another spotted for ships bombarding French defences. On the morning of 7 May, Martlets from 881 Squadron intercepted three Morane-Saulnier M.S.406 fighters on a reconnaissance mission. All three were shot down for the loss of one Martlet. In addition to the other losses enumerated, 882 Squadron's Fulmar was shot down while providing ground support. Illustrious's aircraft flew 209 sorties and suffered six deck landing crashes, including four by Martlets. She was then formally assigned to the Eastern Fleet and, after a short refit in Durban, sailed to Colombo, Ceylon, and became the flagship of the Rear Admiral Aircraft Carriers, Eastern Fleet, Denis Boyd, her former captain. At the beginning of August, the ship participated in Operation Stab, a decoy invasion of the Andaman Islands to distract the Japanese while the Americans were invading the island of Guadalcanal in the South Pacific. Captain Robert Cunliffe relieved Talbot on 22 August. 
On 10 September the carrier covered the amphibious landing that opened Operation Streamline Jane, the occupation of the remainder of Madagascar, and the landing at Tamatave eight days later, but no significant resistance was encountered and her aircraft were not needed. For this operation she had aboard six Fulmars of 806 Squadron, 23 Martlets of 881 Squadron and 18 Swordfish of 810 and 829 Squadrons. ### European waters After a farewell visit from the Eastern Fleet commander, Admiral Sir James Somerville, on 12 January 1943, Illustrious sailed for home the next day. She flew off her aircraft to Gibraltar on 31 January and continued on to the Clyde where she arrived five days later. She conducted deck-landing trials for prototypes of the Blackburn Firebrand and Fairey Firefly fighters, as well as the Fairey Barracuda dive/torpedo bomber, from 8 to 10 February. On 26 February she began a refit at Birkenhead that lasted until 7 June, during which her flight deck was extended, new radars were installed, her light anti-aircraft armament was augmented, and two new arrestor wires were fitted aft of the rear lift which increased her effective landing area. While conducting her post-refit trials, she also conducted flying trials for Martlet Vs and Barracudas. Both sets of trials were completed by 18 July, by which time the Illustrious had joined the Home Fleet. On 26 July, she sortied for the Norwegian Sea as part of Operation Governor, together with the battleship Anson, the American battleship Alabama, and the light carrier Unicorn, an attempt to fool the Germans into thinking that Sicily was not the only objective for an Allied invasion. 810 Squadron was the only unit retained from her previous air group and it had been re-equipped with Barracudas during her refit. Her fighter complement was augmented by 878 and 890 Squadrons, each with 10 Martlet Vs, and 894 Squadron with 10 Seafire IICs. These latter aircraft lacked folding wings and could not fit on the lifts. The British ships were spotted by Blohm & Voss BV 138 flying boats and 890 Squadron shot down two of them before the fleet returned to Scapa Flow on 29 July. She transferred to Greenock at the end of the month and sailed on 5 August to provide air cover for the ocean liner Queen Mary as she conveyed Prime Minister Winston Churchill to the Quebec Conference. Once the convoy was out of range of German aircraft, the Illustrious left it and arrived back at Greenock on 8 August. Together with the Unicorn, she sailed for the Mediterranean on 13 August to prepare for the landings at Salerno (Operation Avalanche), reaching Malta a week later. Her air group was reinforced at this time by four more Martlets each for 878 and 890 Squadrons. She was assigned to Force H for the operation, which was tasked to protect the amphibious force from attack by the Italian Fleet and provide air cover for the carriers supporting the assault force. The Italians made no effort to attack the Allied forces, and the most noteworthy action by any of her aircraft came when one of 890 Squadron's Martlets escorted a surrendering Italian aircraft to Sicily. Before the Illustrious steamed for Malta she transferred six Seafires to the Unicorn to replace some of the latter's aircraft wrecked in deck-landing accidents. Four of these then flew ashore to conduct operations until they rejoined Illustrious on 14 September at Malta. 
She then returned to Britain on 18 October for a quick refit at Birkenhead that included further improvements to the flight deck and the reinforcement of her light anti-aircraft armament. She embarked the Barracudas of 810 and 847 Squadrons of No. 21 Naval Torpedo-Bomber Reconnaissance Wing on 27 November before beginning her work up three days later. No. 15 Naval Fighter Wing with the Vought Corsairs of 1830 and 1833 Squadrons were still training ashore and flew aboard before the work up was finished on 27 December. ### Return to the Indian Ocean Illustrious departed Britain on 30 December and arrived in Trincomalee, Ceylon, on 28 January 1944. She spent most of the next several months training although she participated in several sorties with the Eastern Fleet searching for Japanese warships in the Bay of Bengal and near the coast of Sumatra. The fleet departed Trincomalee on 21 March to rendezvous with the American carrier Saratoga in preparation for combined operations against the Japanese facilities in the Dutch East Indies and the Andaman Islands. The first operation carried out by both carriers was an airstrike on the small naval base at Sabang at the northern tip of Sumatra (Operation Cockpit). The carrier's air group consisted of 21 Barracudas and 28 Corsairs for the operation; Illustrious launched 17 of the former escorted by 13 of the latter on the morning of 19 April. The American bombers attacked the shipping in the harbour while the British aircraft attacked the shore installations. The oil storage tanks were destroyed and the port facilities badly damaged by the Barracudas. There was no aerial opposition and the fighters claimed to have destroyed 24 aircraft on the ground. All British aircraft returned safely although one American fighter was forced to ditch during the return home. The Saratoga was ordered to depart for home for a refit by 19 May and Somerville wanted to mount one more attack as she was leaving the Indian Ocean. He chose the naval base and oil refinery at Surabaya, Java (Operation Transom), and the distance from the newly renamed East Indies Fleet's base at Ceylon required refuelling at Exmouth Gulf on the western coast of Australia before the attack. The necessity to attack from the south, across the full width of Java, meant that the target was outside the Barracuda's range and 810 and 847 Squadrons were replaced by the 18 Grumman Avengers of 832 and 845 Squadrons for the mission. Early on the morning of 17 May, the ship launched all 18 Avengers, escorted by 16 Corsairs. One Avenger crashed on take-off and an American Avenger was shot down over the target; only one small ship was sunk, and little damage was done to the refinery. The Saratoga and her escorts separated after refuelling again in Exmouth Gulf and the East Indies Fleet was back in Trincomalee on 27 May where No. 21 Wing reembarked. On 10 June, the Illustrious and the escort carrier Atheling put to sea to simulate another airstrike on Sabang as a means of distracting the Japanese while the Americans were attacking airfields in the Mariana Islands and preparing to invade the island of Saipan. For the planned attack on Port Blair in the Andaman Islands in mid-June her air group was reinforced by the 14 Corsairs of 1837 Squadron; six Barracudas from No. 21 TBR Wing were landed to make room for the additional fighters. On 21 June, the ship launched 15 Barracudas and 23 Corsairs against the airfield and harbour of Port Blair. 
Two of the Barracudas were forced to return with engine trouble before the attack began and another was shot down over the target. In addition, one Corsair was forced to ditch; the pilot was rescued by a destroyer. Bad weather degraded the accuracy of the Barracudas and little damage was inflicted aside from a few aircraft destroyed on the ground and a few small craft sunk in the harbour. With over 50 aircraft airborne at one point, the British realised that a single deck accident might result in the loss of every aircraft in the air because there was no other carrier available to land aboard. The carrier and her escorts arrived back at Trincomalee on 23 June, where 847 Squadron was merged into 810 Squadron a week later. Her sister ships, the Indomitable and the Victorious, arrived at the end of June, although only the latter's pilots were combat-ready. Captain Charles Lambe was appointed as the new captain of the Illustrious on 21 May, but he could not join his new ship until 9 July. Somerville decided to attack Sabang again (Operation Crimson), although this time the ships of the East Indies Fleet would bombard the port while the fighters from the Illustrious and the Victorious spotted for them and protected the fleet. As the Barracudas were needed only for anti-submarine patrols, the former embarked only nine while the latter ship flew off all her Barracudas. On the early morning of 25 July, Illustrious launched 22 Corsairs for CAP and to observe the naval gunfire and take photos for post-attack damage assessments. The bombardment was very effective, sinking two small freighters and severely damaging the oil storage and port facilities. One Corsair was shot down by Japanese flak although the pilot was rescued after ditching. As the fleet was withdrawing, Illustrious's CAP intercepted and shot down a Nakajima Ki-43 (codenamed "Oscar") fighter and a Mitsubishi Ki-21 "Sally" medium bomber on reconnaissance missions. Later in the day her Corsairs intercepted 10 Ki-43s and shot down two of them while driving off the remainder. After arriving in Trincomalee, 1837 Squadron was transferred to the Victorious. On 30 July, she sailed for Durban to begin a refit that lasted from 15 August to 10 October and arrived back at Trincomalee on 1 November. 810 Squadron and its Barracudas were transferred off the ship the next day and were later replaced by the Avengers of 854 Squadron. For the next six weeks she carried out an intensive flying regime in preparation for the next operations against the Japanese together with the other carriers of the fleet. On 22 November she was assigned to the newly formed British Pacific Fleet (BPF), commanded by Admiral Sir Bruce Fraser. She was assigned to the 1st Aircraft Carrier Squadron (1st ACS), commanded by Rear Admiral Sir Philip Vian when he arrived at Colombo aboard the carrier Indefatigable. A week later, Illustrious and Indomitable sortied to attack an oil refinery at Pangkalan Brandan, Sumatra (Operation Outflank); the former's air group now consisted of 36 Corsairs of 1830 and 1833 Squadrons and 21 Avengers of 854 Squadron. When the aircraft approached the target on the morning of 20 December, it was obscured by clouds so they diverted to the secondary target of the port at Belawan Deli. It was partially obscured by clouds and heavy squalls so the attacking aircraft had only moderate success, setting some structures on fire and destroying several aircraft on the ground. On 16 January 1945 the BPF sailed for its primary base in the Pacific Ocean, Sydney, Australia. 
En route, the carriers of the 1st ACS attacked Palembang on 24 January and 29 January (Operation Meridian). Illustrious's air group consisted of 32 Corsairs and 21 Avengers by now and she contributed 12 of her Avengers and 16 Corsairs to the first attack, which destroyed most of the oil storage tanks and cut the refinery's output by half for three months. Five days later, the BPF attacked a different refinery and the ship launched 12 Avengers and 12 Corsairs. The attack was very successful at heavy cost; between the two air operations, her squadrons lost five Corsairs to enemy flak or fighters and one due to a mechanical problem on take-off as well as three Avengers to enemy action. Her Corsairs claimed four enemy aircraft shot down as did one Avenger pilot who claimed victory over a Nakajima Ki-44 "Tojo" fighter. The fleet's fire discipline was poor when it was attacked by seven Japanese bombers shortly after the strike aircraft began landing. The attackers were all shot down, but two shells fired by either Indomitable or the battleship King George V struck Illustrious, killing 12 and wounding 21 men. ### Service in the Pacific Ocean She arrived on 10 February and repairs began when she entered the Captain Cook Dock in the Garden Island Dockyard the next day, well before it was officially opened by the Duke of Gloucester, the Governor-General of Australia on 24 March. By this time the vibration problems with her centre propeller shaft, which had never been properly repaired after she was bombed at Malta, were so bad that the propeller was removed and the shaft locked in place, reducing her maximum speed to 24 knots. On 6 March she sailed to the BPF's advance base at Manus Island and, after her arrival a week later, Illustrious and her sisters Indomitable and Victorious, as well as the carrier Indefatigable, exercised together before sailing for Ulithi on 18 March. The BPF joined the American Fifth Fleet there two days later, under the designation Task Force 57 (TF 57), to participate in the preliminary operations for the invasion of Okinawa (Operation Iceberg). The British role during the operation was to neutralise airfields on the Sakishima Islands, between Okinawa and Formosa, beginning on 26 March. Her air group now consisted of 36 Corsairs, 16 Avengers and two Supermarine Walrus flying boats for rescue work. From 26 March to 9 April, the BPF attacked the airfields with each two-day period of flying operations followed by two or three days required to replenish fuel, ammunition and other supplies. While the precise details on activities of the carrier's squadrons are not readily available, it is known that the commanding officer of 854 Squadron was forced to ditch his Avenger on the morning of 27 March with the loss of both his crewmen; he was ultimately rescued that evening by an American submarine. On the afternoon of 6 April, four kamikaze aircraft evaded detection and interception by the CAP, and one, a Yokosuka D4Y3 "Judy" dive bomber, attacked Illustrious in a steep dive. The light AA guns managed to sever its port wing so that it missed the ship, although its starboard wingtip shattered the Type 272's radome mounted on the front of the bridge. When the 1,000-kilogram (2,200 lb) bomb that it was carrying detonated in the water only 50 feet (15.2 m) from the side of the ship, the resulting shock wave badly damaged two Corsairs parked on the deck and severely shook the ship. 
The initial damage assessment was that little harm had been done, although vibrations had worsened, but this was incorrect as the damage to the hull structure and plating proved to be extensive. Vice Admiral Sir Bernard Rawlings, commander of Task Force 57, ordered the recently arrived Formidable to join the task force to replace Illustrious on 8 April. In the meantime, she continued to conduct operations with the rest of the fleet. On 12 and 13 April, the BPF switched targets to airfields in northern Formosa and her sister joined the task force on 14 April. Since the beginning of the operation, her aircraft had flown 234 offensive and 209 defensive sorties, claiming at least two aircraft shot down. Her own losses were two Avengers and three Corsairs lost in action and one Avenger and six Corsairs due to non-combat causes. Formidable's arrival allowed Rawlings to order Illustrious to the advance base in San Pedro Bay, in the Philippines, for a more thorough inspection. She arrived on 16 April and the examination by divers revealed that some of her outer plating was split and that some transverse frames were cracked. The facilities there could provide only emergency repairs, enough to allow her to reach the bigger dockyard in Sydney. Task Force 57 arrived in San Pedro Bay on 23 April for a more thorough replenishment period and Illustrious transferred aircraft, spares, stores, and newly arrived pilots to the other carriers before sailing for Sydney on 3 May. She arrived on 14 May and departed 10 days later, bound for Rosyth for permanent repairs. 854 Squadron was disembarked while at Sydney, but the carrier kept her two Corsair squadrons until after arriving in the UK on 27 June. ### Post-war career On 31 July Captain W. D. Stephens relieved Lambe. The end of the war several weeks later meant that there was no longer any urgency in refitting the Illustrious in time to participate in the invasion of the Japanese Home Islands and the Admiralty decided that she would become the Home Fleet training and trials carrier. Her catapult was upgraded to handle heavier aircraft, her flight deck was further improved, and her radar suite was modernized. She began her postrefit trials on 24 June 1946 and flying trials the following month. She relieved Triumph as the trials carrier in August and conducted trials on Firefly FR.4s, Firebrand TF.4s, de Havilland Sea Mosquitoes and de Havilland Sea Vampires over the next several months. Captain Ralph Edwards relieved Stephens on 7 January 1947. On 1 February, she joined the other ships of the Home Fleet as they rendezvoused with the battleship Vanguard, which was serving as the royal yacht to escort King George VI as he set out for the first royal tour of South Africa. Over the next several months she conducted deck-landing practice for Avenger and Seafire pilots before starting a short refit on 2 April. After the tour's conclusion on 12 May, she sailed for Scottish waters for more deck-landing practice with the destroyer Rocket as her planeguard. On 18 July she rendezvoused with the Home Fleet to participate in manoeuvres before George VI reviewed the fleet on 22–23 July. The King and Queen inspected Illustrious and her crew, as did Prime Minister Clement Attlee and his wife. Afterwards, she was opened for visits by the public before returning to Portsmouth. En route she served as the centrepiece of a convoy-defence exercise as the RAF successfully "attacked" the convoy. 
After summer leave for her crew, she resumed deck-landing trials in September and October, including the initial trials of the prototype Supermarine Attacker jet-powered fighter in the latter month. In November the government accelerated the demobilisation of some National Servicemen and almost 2,000 men serving in the Mediterranean became eligible for release. They had to be replaced by men from the UK so Illustrious ferried the replacements to Malta, sailing on 21 November and returning to Portsmouth on 11 December. She was refitted and modernised from January to August 1948. Captain John Hughes-Hallett relieved Edwards on 14 June. The ship was recommissioned in early September. While at anchor in Portland Harbour on 17 October, one of her boats foundered 50 yards (46 m) short of the ship in heavy weather; 29 men lost their lives. Illustrious resumed her duties in early 1949 and conducted trials and training for Avengers, Fireflies, Gloster Meteors, de Havilland Sea Hornets, Vampires and Seafires. On 10 June Hughes-Hallett was relieved by Captain Eric Clifford. During a severe gale in late October, the ship aided the small coastal steamer SS Yewpark, which had lost power. The weather was too bad for Illustrious to rescue the steamer's crew, but she pumped fuel oil overboard to flatten the seas until a tug arrived to rescue the ship on 27 October. On 2 May 1950, she arrived at Birkenhead to commemorate the launch of the new carrier Ark Royal the following day with the First Lord of the Admiralty, George Hall, 1st Viscount Hall, aboard. A Hawker Sea Fury crashed while landing on 15 May, killing the pilot and two members of the deck crew. The prototype of the turboprop-powered Fairey Gannet anti-submarine aircraft made its first carrier landing aboard on 16 June. This event was also the first landing of any turboprop aircraft aboard an aircraft carrier. Captain S. H. Carlill assumed command on 24 June and the Illustrious resumed deck-landing training. On 8 and 9 November the Supermarine 510 research aircraft made the first ever landings by a swept-wing aircraft aboard a carrier. This aircraft was one of the ancestors of the Supermarine Swift fighter. A month later the ship began a four-month refit; she hosted the first carrier landing of the de Havilland Sea Venom on 9 July 1951. Later in the month she hosted the Sea Furies of 802 and the Fireflies of 814 Squadrons for Exercise Winged Fleet. Captain C. T. Jellicoe relieved Carlill on 27 August and the ship ferried 10 Fireflies of 814 Squadron to Malta beginning on 1 October. She exchanged them for Firebrands for the return voyage. On 3 November she began loading the 39th Infantry Brigade of the 3rd Infantry Division in response to the riots in Egypt that broke out when that country abrogated the Anglo-Egyptian treaty of 1936. She set sail two days later and arrived at Famagusta on 11 November. She returned to Portsmouth on 19 November and began loading the 45th Field Regiment, Royal Artillery, the 1st Battalion, Coldstream Guards, and the Bedfordshire and Hertfordshire Regiment two days later. Illustrious set sail on 23 November and reached Famagusta on 29 November. She returned to Portsmouth on 7 December and did not leave harbour until 30 January 1952, when she resumed her customary role as a training ship. After a brief refit in early 1952, she participated in Exercise Castanets off the Scottish coast in June and hosted 22,000 visitors during Navy Days at Devonport Royal Dockyard in August. On 1 September she hosted No. 4 Squadron and No. 
860 Squadrons, Royal Netherlands Naval Aviation Service (RNNAS), for training, as well as 824 Squadron. Between the three squadrons they had 20 Fireflies and 8 Sea Furies when they participated in the major NATO exercise Main Brace later in the month. Jellicoe was relieved by Captain R. D. Watson on 26 September and Illustrious resumed training until 9 December, when her crew was granted leave and the ship began a refit. She next put to sea on 24 April 1953 for trials and did not resume training pilots until the following month. She was reunited with the four other carriers that had served with the BPF for the first time since the war for the Coronation Fleet Review of Queen Elizabeth II on 15 June at Spithead. The following day, the Fireflies of No. 4 Squadron, RNNAS and 824 Squadron landed aboard for more deck-landing training. During September she participated in Exercise Mariner with three British squadrons of Fireflies and Sea Furies and a Dutch squadron of Avengers. The ship resumed flying training off the north coast of Scotland in October for two weeks, but she spent most of the rest of the year on trials of the new mirror landing aid that helped pilots judge the correct approach path when landing aboard. Illustrious began her final maintenance period at Devonport Royal Dockyard on 10 December. Captain K. A. Short relieved Watson on 28 December. The refit was completed by the end of January 1954 and she resumed her normal role. The ship completed 1,051 deck landings and steamed 4,037 nautical miles (7,477 km; 4,646 mi) by April. She made her first foreign port visit in many years at Le Havre, France, on 20–22 March, where 13,000 people came aboard. After another round of flying operations, she visited Trondheim, Norway, on 19 June. During 12 days of training in September, she completed 950 daytime and nighttime arrested landings and 210 helicopter landings. She conducted her last landings on 3 December and arrived at Devonport four days later to begin decommissioning. Illustrious was paid off at the end of February 1955 and she was towed to Gareloch and placed in reserve. She was sold on 3 November 1956 and broken up in early 1957. A model of HMS Illustrious as she appeared in 1940 is on display at the Monaco Naval Museum. Another model of Illustrious in her late-war appearance is at the Fleet Air Arm Museum at RNAS Yeovilton. ## Squadrons embarked
420,806
Ernie Fletcher
1,173,034,174
American physician and politician (born 1952)
[ "1952 births", "20th-century American politicians", "21st-century American politicians", "Baptist ministers from the United States", "Baptists from Kentucky", "Intelligent design advocates", "Living people", "Military personnel from Kentucky", "People from Mount Sterling, Kentucky", "Physicians from Kentucky", "Politicians from Lexington, Kentucky", "Republican Party governors of Kentucky", "Republican Party members of the Kentucky House of Representatives", "Republican Party members of the United States House of Representatives from Kentucky", "United States Air Force officers", "University of Kentucky College of Medicine alumni" ]
Ernest Lee Fletcher (born November 12, 1952) is an American physician and politician who was the 60th governor of Kentucky from 2003 to 2007. He previously served three consecutive terms in the United States House of Representatives before resigning after being elected governor. A member of the Republican Party, Fletcher was a family practice physician and a Baptist lay minister, and is the second physician to be elected Governor of Kentucky; the first was Luke P. Blackburn in 1879. He was also the first Republican governor of Kentucky since Louie Nunn left office in 1971. Fletcher graduated from the University of Kentucky and joined the United States Air Force to pursue his dream of becoming an astronaut. He left the Air Force after budget cuts reduced his squadron's flying time and earned a degree in medicine, hoping to earn a spot as a civilian on a space mission. Deteriorating eyesight eventually ended those hopes, and he entered private practice as a physician and conducted services as a Baptist lay minister. He became active in politics and was elected to the Kentucky House of Representatives in 1994. Two years later he ran for a seat in the U.S. House of Representatives, but lost to incumbent Scotty Baesler. When Baesler retired to run for a seat in the U.S. Senate, Fletcher again ran for the congressional seat and defeated Democratic state senator Ernesto Scorsone. He soon became one of the House Republican caucus' top advisors regarding health care legislation, particularly the Patients' Bill of Rights. Fletcher was elected governor in 2003 over state Attorney General Ben Chandler. Early in his term, Fletcher achieved some savings for the state by reorganizing the executive branch. He proposed an overhaul of the state tax code in 2004, but was unable to get it passed through the General Assembly. When Republicans in the state senate insisted on tying the reforms to the state budget, the legislature adjourned without passing either, and the state operated under an executive spending plan drafted by Fletcher until 2005, when both the budget and the reforms were passed. Later in 2005, Attorney General Greg Stumbo, the state's highest-ranking Democrat, launched an investigation into whether the Fletcher administration's hiring practices violated the state's merit system. A grand jury returned several indictments against members of Fletcher's staff, and eventually against Fletcher himself. Fletcher issued pardons for anyone on his staff implicated in the investigation, but did not pardon himself. Though the investigation was ended by an agreement between Fletcher and Stumbo in late 2006, it continued to overshadow Fletcher's re-election bid in 2007. After turning back a challenge in the Republican primary by former Congresswoman Anne Northup, Fletcher lost the general election to Democrat Steve Beshear. After his term as governor, he returned to the medical field as founder and CEO of Alton Healthcare. He is married and has two adult children. ## Early life Ernest Lee Fletcher was born in Mount Sterling, Kentucky, on November 12, 1952. He was the third of four children born to Harold Fletcher, Sr. and his wife, Marie. The family owned a farm and operated a general store near the community of Means. Harold Fletcher also worked for Columbia Gas. When Ernie was three weeks old, Harold was transferred to Huntington, West Virginia. Two years later, the Fletchers returned to Robertson County, Kentucky, where they lived until Ernie Fletcher began the first grade. 
The family moved once more and finally settled in Lexington. Fletcher attended Lafayette High School in Lexington, where he was a member of the National Beta Club. During his senior year, he was an all-state saxophone player and was elected prom king. After graduating in 1970, he enrolled at the University of Kentucky. He pledged and became a member of the Delta Tau Delta fraternity. After his freshman year, he married his high school sweetheart, Glenna Foster. The couple had two children, Rachel and Ben, and four grandchildren. Fletcher aspired to become an astronaut, and joined the Air Force Reserve Officer Training Corps. In 1974, he earned a Bachelor of Science degree in mechanical engineering, graduating with top honors. After graduation, he joined the U.S. Air Force. After flight training in Oklahoma, he was stationed in Alaska, where he served as an F-4E aircraft commander and NORAD Alert Force commander. During the Cold War, his duties included commanding squadrons to intercept Soviet military aircraft. In 1980, as budget cutbacks were reducing his squadron's flying time, Fletcher turned down a regular commission in the Air Force. He left the Air Force with the rank of captain, having received the Air Force Commendation Medal and the Outstanding Unit Award. Fletcher enrolled in the University of Kentucky College of Medicine, hoping that a medical degree, along with a military background, would earn him a civilian spot on a space mission. In 1984, he graduated from medical school with a Doctor of Medicine degree, but his deteriorating eyesight forced him to abandon his dreams of becoming an astronaut. In 1983, the Lexington Primitive Baptist church that Fletcher attended ordained him as a lay minister. In 1984, he opened a family medical practice in Lexington. Along with former classmate Dr. James D. B. George, he co-founded the South Lexington Family Physicians in 1987. For two years, he concurrently held the title of chief executive officer of the Saint Joseph Medical Foundation, an organization that solicits private gifts to Saint Joseph Regional Medical Center in Lexington. In 1989, Fletcher's church called him to become its unpaid pastor, but over the years, he grew to question some of the church's doctrines, desiring it to become more evangelistic. Consequently, he left the Primitive Baptist denomination in 1994 and joined the Porter Memorial Baptist Church, a Southern Baptist congregation. ## Legislative career Through his church ministry, Fletcher became acquainted with a group of social conservatives that gained control of the Fayette County Republican Party in 1990. (Fayette County and the city of Lexington operate under the merged Lexington-Fayette Urban County Government). Fletcher accepted an invitation to become a member of the county Republican committee. In 1994, he was elected to the Kentucky House of Representatives, defeating incumbent Democrat Leslie Trapp. He represented Kentucky's 78th District and served on the Kentucky Commission on Poverty and the Task Force on Higher Education. He was also chosen by Governor Paul E. Patton to assist with reforming the state's health-care system. As a result of legislative redistricting in 1996, Fletcher's district was consolidated with the one represented by fellow Republican Stan Cave. Rather than challenge a member of his own party, Fletcher decided to run for a seat representing Kentucky's 6th District in the U.S. House of Representatives later that year.
After winning a three-way Republican primary by 4 votes over his closest opponent, he was defeated by incumbent Democrat Scotty Baesler by just over 25,000 votes. In 1998, Baesler gave up his seat to run for the U.S. Senate seat vacated due to the retirement of Senator Wendell H. Ford. Fletcher won the Republican primary for Baesler's seat by a wide margin. In the general election, Fletcher faced Democrat Ernesto Scorsone. The Lexington Herald-Leader billed the race as "a classic joust between the left and the right". Fletcher was strongly opposed to abortion, advocated a "flatter, fairer, simpler" tax system, and called for returning most federal education funding to local communities. Scorsone supported abortion rights, called a flat tax "too regressive", and favored national educational testing and standards. Fletcher defeated Scorsone by a vote of 104,046 to 90,033, with third-party candidate W. S. Krogdahl garnering 1,839 votes. Within months of arriving in Washington, D.C., Fletcher was selected as the leadership liaison for the 17-member freshman class of Republican legislators. He was appointed to the Committee on Education and Workforce, and John Boehner, chair of the committee's employer/employee relations subcommittee, chose Fletcher as his vice-chair. The committee's purpose is to oversee the rules for employer-paid health plans, among other issues. Although it is rare for a freshman legislator to attain a committee leadership post, Boehner cited Fletcher's experience in the medical field and work on reforming the Kentucky health care system as reasons for the appointment. Fletcher also served as a member of the House Committees on the Budget and Agriculture. In June 1999, he sponsored an amendment to a youth violence bill that allowed school districts to use federal funds to develop curricula which included elements designed to promote and enhance students' moral character; the amendment passed 422–1. Later, Fletcher was assigned to the Committee on Energy and Commerce and was selected as chairman of the Policy Subcommittee on Health. During the debate over the proposed Patients' Bill of Rights legislation, Fletcher opposed a Democratic proposal that would have allowed individuals to sue their health maintenance organizations (HMOs), favoring instead a more limited bill drafted by Republican leadership that expanded the patient's ability to appeal HMO decisions. Many doctors in the Republican legislative caucus felt their party's bill did not go far enough; Fletcher and Tennessee Senator Bill Frist were notable exceptions. Fletcher's position cost him the support of the Kentucky Medical Association (KMA). After contributing to his campaign against Scorsone in 1998, the KMA backed Scotty Baesler's bid to regain his old seat from Fletcher in 2000. However, Baesler only captured 35 percent of the vote to Fletcher's 53 percent. The remaining 12 percent went to third-party candidate Gatewood Galbraith. After the 2000 election, Fletcher crafted a compromise bill that allowed patients to sue their HMOs in federal court, capped pain and suffering awards at \$500,000, and eliminated punitive damage awards. Despite an eventual compromise allowing patient lawsuits to go to state courts under certain circumstances and heavy lobbying in favor of Fletcher's bill by President George W. Bush, the House refused to pass it, favoring an alternative proposal by Georgia's Charlie Norwood that was less restrictive on patient lawsuits.
Fletcher faced no major-party opposition in his re-election bid in 2002 after the only Democrat in the race, 24-year-old Roy Miller Cornett Jr., withdrew his candidacy. Independent Gatewood Galbraith again made the race; Libertarian Mark Gailey also mounted a challenge. In the final vote tally, Fletcher received 115,522 votes to Galbraith's 41,853 and Gailey's 3,313. ## 2003 gubernatorial election In 2002, Fletcher was encouraged by Senator Mitch McConnell, the leader of Kentucky's Republican Party, to run for governor and formed an exploratory committee the same year. On December 2, 2002, he announced that he would run on a ticket with McConnell aide Hunter Bates. Early in 2003, a Republican college student named Curtis Shain challenged Bates' candidacy on grounds that he did not meet the residency requirements set forth for the lieutenant governor in the state constitution. Under the constitution, candidates for both governor and lieutenant governor must be citizens of the state for at least six years prior to the election. From August 1995 to February 2002, Bates and his wife rented an apartment in Alexandria, Virginia while Bates was working for a law firm in Washington, D.C., and later, as McConnell's chief of staff. Bob Heleringer, a former state representative from suburban Louisville and the running mate of Republican gubernatorial candidate Steve Nunn, joined the suit as a plaintiff. In March 2003, an Oldham County judge ruled that Bates had not established residency in Kentucky. He cited the fact that from 1995 to 2002, Bates held a Virginia driver's license, paid Virginia income taxes, and "regularly" slept in his apartment in Virginia. Bates did not appeal the ruling because by allowing the judge to declare a vacancy on the ballot, Fletcher was able to name a replacement running mate, an option that would not have been afforded him had Bates withdrawn. Fletcher chose Steve Pence, United States Attorney for the Western District of Kentucky, as his new running mate. Heleringer continued his legal challenge, first claiming that Bates' ineligibility should have invalidated the entire Fletcher/Bates ticket and then that Fletcher should not have been allowed to name a replacement for an unqualified candidate. The Kentucky Supreme Court rejected that argument on May 7, 2003, though the justices' reasons for doing so varied and the final opinion conceded that "[t]his is a close case on the law, and Heleringer has presented legal issues worthy of this court's time and attention". The state Board of Elections instructed all county clerks to count absentee ballots cast for Fletcher and Bates as votes for Fletcher and Pence. In the Republican primary, Fletcher received 53 percent of the vote, besting Nunn, Jefferson County judge/executive Rebecca Jackson, and state senator Virgil Moore. In the Democratic primary, Attorney General Ben Chandler defeated Speaker of the House Jody Richards. Chandler, the grandson of former governor A. B. "Happy" Chandler, was hurt in the closing days of the campaign when a third challenger, businessman Bruce Lunsford dropped out of the race and endorsed Richards. Chandler won the Democratic primary by just 3.7 percentage points and was forced to reorganize his campaign. Consequently, Fletcher entered the general election as the favorite. Due to the funding from the Republican Governors Association, Fletcher held a two-to-one fundraising advantage over Chandler. 
A sex-for-favors scandal that ensnared sitting Democratic governor Paul Patton, as well as a predicted \$710 million shortfall in the upcoming budget, damaged the entire Democratic slate of candidates' chances for election. Fletcher capitalized on these issues, promising to "clean up the mess" in Frankfort, and won the election by a vote of 596,284 to 487,159. In all, Republicans captured four of the seven statewide constitutional offices in 2003; Trey Grayson was elected Secretary of State and Richie Farmer was elected Commissioner of Agriculture. Fletcher resigned his seat in the House on December 8, 2003, and assumed the governorship the following day. Fletcher's victory made him the first Republican elected governor of Kentucky since 1971, and his margin of victory was the largest ever for a Republican in a Kentucky gubernatorial election. ## Governor of Kentucky Fletcher made economic development a priority, and Kentucky ranked fourth among all U.S. states in number of jobs created during his administration. One of his first actions as governor was to reorganize the executive branch, condensing the number of cabinet positions from fourteen to nine. He dissolved the former Kentucky Horse Racing Commission and instead created the Kentucky Horse Racing Authority to promote and regulate the state's horse racing industry. To improve the state's management of Medicaid, he rolled back some of the program's requirements and unveiled a plan to focus on improvements in care, benefit management, and technology. Fletcher also launched "Get Healthy Kentucky!," an initiative to promote healthier lifestyles for Kentuckians. ### 2004 state budget dispute Throughout Fletcher's term, the Kentucky Senate was controlled by Republicans, while Democrats held a majority in the state House of Representatives. Consequently, Fletcher had difficulty getting legislation enacted in the General Assembly. Early in the 2004 legislative session, he presented a plan for tax reform that he claimed was "revenue neutral" and would "modernize" the state tax code. The plan was drafted with input from seven Democratic legislators in the House, none of them in leadership roles, leading to claims that Fletcher was trying to circumvent House leadership. As the session wore on, Republicans insisted on tying the tax reform package to the proposed state budget, while Democrats wanted to vote on the measures separately. Despite last minute attempts at a compromise as the session drew to a close, the Assembly passed neither the tax reform package nor a state budget. The contentious session ended with only a few accomplishments, including passage of a fetal homicide law, an anti-price gouging measure, and a law barring the state public service commission from regulating broadband Internet providers beyond what restrictions were put in place by the Federal Communications Commission. The 2004 session marked the second consecutive session in which the General Assembly had failed to pass a biennial budget; the first occurred in 2002 under Governor Patton. When the fiscal year ended without a budget in place, responsibility for state expenditures fell to Fletcher. As it had been in 2002, spending was governed by an executive spending plan created by the governor. Democratic Attorney General Greg Stumbo filed suit asking for a determination on the extent of Fletcher's ability to spend without legislative approval. 
A similar suit, filed after the 2002 session ended in deadlock, was rendered moot when the legislature passed a budget in a special session prior to the conclusion of the lawsuit. A judicial review by a Franklin County circuit court judge approved Fletcher's spending plan but forbade spending on new capital projects and programs. In late December 2004, a judge ruled that Fletcher's plan could continue to govern spending until the end of the fiscal year on June 30, 2005, but "thereafter" executive spending was to be limited to "funds demonstrated to be for limited and specific essential services." On May 19, 2005, the Kentucky Supreme Court issued a 4–3 decision stating that the General Assembly had acted unconstitutionally by not passing a budget and that Fletcher had acted outside his constitutional authority by spending money not specifically appropriated by the legislature. The majority opinion rejected the lower court's exception for "specific essential services", saying "If the legislative department fails to appropriate funds deemed sufficient to operate the executive department at a desired level of services, the executive department must serve the citizenry as best it can with what it is given. If the citizenry deems those services insufficient, it will exercise its own constitutional power – the ballot." Chief Justice Joseph Lambert dissented, claiming the executive spending plan was necessary. Two other justices, in a separate opinion, disagreed with the majority that federal and state constitutional mandates should still be funded in the absence of a budget. In their dissent, they argued that the threat of a government shutdown would act as an impetus for the General Assembly to engage in timely budget-making. The decision took no retroactive steps to change the actions it ruled unconstitutional, but it served as a precedent for any future cases of budgetary gridlock. ### Legislative interim and 2005 legislative session In June 2004, Fletcher's aircraft caused a security scare that triggered a brief evacuation of the U.S. Capitol and Supreme Court building. Shortly after takeoff en route to memorial services for former president Ronald Reagan, the transponder on Fletcher's plane malfunctioned, leading officials at Reagan National Airport to report an unauthorized aircraft entering restricted airspace. Two F-15 fighters were dispatched to investigate, and Fletcher's plane was escorted to its destination by two Blackhawk helicopters. The plane, a 33-year-old Beechcraft King Air, was the oldest of its model still in operation. An investigation by the Federal Aviation Administration (FAA) found that the crew of Fletcher's plane maintained radio contact with air traffic officials and received clearance to enter the restricted air space. The investigation determined that miscommunication by air traffic controllers sparked the panic, and in the aftermath of the incident, the FAA adopted policies to prevent future errors of a similar nature. In July 2004, Fletcher announced a plan to unify the state's branding to improve its public perception. Shortly after the announcement, late-night comedians Craig Kilborn and Jay Leno made some tongue-in-cheek suggestions for the new slogan on The Late Late Show and The Tonight Show, respectively. In response, Fletcher wrote a letter to both comedians taking exception to the jokes and was invited to appear on both programs. 
Citing Leno's larger audience and earlier time slot, Fletcher agreed to appear on The Tonight Show, where he presented Leno with a Louisville Slugger baseball bat and traded jocular barbs about the relative advantages of Kentucky and Los Angeles, where The Tonight Show is taped. Eventually, four slogans were chosen to be voted on online as well as at interstate travel centers. In December 2004, "Kentucky: Unbridled Spirit" was chosen as the winning slogan and was printed on road signs, state documents, and souvenirs. A 2007 study determined that 88.9% of Kentuckians could correctly identify the slogan and its logo. Further, 64% of those surveyed across a ten-state region recognized the slogan and logo, higher than any other brand tested in the study. In the second half of 2004, Fletcher proposed changes to the health benefits of state workers and retirees. Fletcher's plan provided discounts for members who engaged in healthier behavior, which he called a transition from a sickness initiative to a wellness initiative. Acknowledging that out-of-pocket expenses would rise, Fletcher proposed a 1% salary increase to offset the additional costs. State employees, particularly public school teachers, broadly opposed Fletcher's plan, and the Kentucky Education Association called for an indefinite strike, to begin October 27, 2004. To address the opposition, Fletcher called a special session of the legislature to begin October 5, 2004. Although the state was still operating under an executive spending plan, Fletcher did not include the budget or his tax reform proposal in the session's agenda, a move praised by both parties, allowing them to focus only on concerns over the health plan. In a fifteen-day session, the General Assembly passed a plan that allocated \$190 million more to health insurance for state workers and restored many of the most popular benefits in the previous insurance plan. Immediately after the session adjourned, the Kentucky Education Association voted to cancel their proposed strike. On November 8, 2004, Fletcher signed a death warrant for Thomas Clyde Bowling, who was convicted of a double murder in 1990 and sentenced to death by lethal injection. A group of doctors requested an investigation by the Kentucky Board of Medical Licensure to determine whether Fletcher's medical license should be revoked for that action. Kentucky requires doctors to follow the guidelines of the American Medical Association, which forbid doctors from participating in an execution. On January 13, 2005, the Board of Medical Licensure found that Fletcher was acting in his capacity as governor, not as a doctor, when he signed the warrant and ruled that his license was not subject to forfeiture by that action. During the General Assembly's 2005 session, Fletcher again proposed his tax reform plan, and late in the session, both houses passed it. The plan raised sin taxes on cigarettes and alcohol and increased taxes on satellite television service and motel rooms. Businesses were also subjected to a gross receipts tax. In exchange, corporate taxes were lowered, as were income taxes for individuals who earned less than \$75,000 annually; 300,000 low-wage earners were dropped from the income tax rolls altogether. The Assembly also passed a budget for the remainder of the biennium, abolished the state's public campaign finance laws, and passed new school nutrition guidelines.
### Merit system investigation In May 2005, Attorney General Stumbo began an investigation of allegations that the Fletcher administration circumvented the state merit system for hiring, promoting, demoting and firing state employees by basing decisions on employees' political loyalties. The investigation was prompted by a 276-page complaint filed by Douglas W. Doerting, the assistant personnel director for the Kentucky Transportation Cabinet. Fletcher, who was on a trade mission in Japan when news of the investigation broke, conceded via telephone news conference that his office may have made "mistakes" with regard to hiring that stemmed from not having a formal process for handling employment recommendations. Upon his return from Japan, Fletcher denied that the "mistakes" by his administration were illegal and called the investigation by Stumbo "the beginning of the 2007 governor's race", an allusion to Stumbo's potential candidacy in 2007. Stumbo denied any plans to run for governor in 2007, although he eventually became gubernatorial candidate Bruce Lunsford's running mate in the election, losing in the Democratic primary. A grand jury was empaneled in June 2005 to investigate the charges against Fletcher's administration. By August, the jury had returned indictments against nine administration officials, including state Republican Party chairman Darrell Brock Jr. and acting Transportation Secretary Bill Nighbert. All of the indictments were for misdemeanors such as conspiracy except those against Administrative Services Commissioner Dan Druen, who was charged with 22 felonies (20 counts of physical evidence tampering and 2 counts of witness tampering) in addition to 13 misdemeanors. On August 29, Fletcher granted pardons to the nine indicted administration officials and issued a blanket pardon for "any and all persons who have committed, or may be accused of committing, any offense" with regard to the investigation. Fletcher exempted himself from the blanket pardon. The next day, Fletcher was called to testify before the grand jury, but refused to answer any questions, invoking his Fifth Amendment right against self-incrimination. In mid-September, after Fletcher issued the pardons, a Courier-Journal poll found Fletcher's approval rating at 38 percent, tying the lowest rating reached by his predecessor, Paul E. Patton, during the sex scandal that tarnished his administration. On September 14, 2005, Fletcher fired nine employees, including four of the nine he pardoned two weeks earlier. The firings were praised by Fletcher critic Charles Wells of the Kentucky Association of State Employees, who said: "When all else fails, the governor did the right thing." However, Democratic state senator and former governor Julian Carroll criticized Fletcher for not firing the indicted officials when he issued the pardons. Fletcher also called for the firing of state Republican Party chair Darrell Brock, Jr. due to Brock's role in the merit scandal. The state Republican executive committee met on September 17, but did not act on Fletcher's call to fire Brock. The grand jury continued its investigation, issuing five more indictments after Fletcher issued his blanket pardon. Two were returned against members of Fletcher's staff, and two were against unpaid advisors to Fletcher. The fifth was issued against Acting Secretary Nighbert for retaliation against a whistleblower. Only the additional charge against Nighbert was alleged to have occurred after Fletcher issued the pardon. 
On October 24, 2005, Fletcher filed a motion asking Franklin Circuit Court Judge William Graham to order the grand jury to stop issuing indictments for offenses that occurred prior to the blanket pardon; only the names of indicted officials could be included in the jury's final report. On November 16, Graham ruled that the grand jury could continue issuing indictments, but in a separate ruling, dismissed the indictments against Fletcher's staff and volunteer advisors on grounds that they were covered by the pardon. Graham did not rule on the latest indictment against Nighbert. The Kentucky Court of Appeals affirmed Graham's ruling on December 16. Immediately after the Court of Appeals' ruling, Fletcher announced his intent to appeal the ruling to the Kentucky Supreme Court. ### 2006 legislative session On February 12, 2006, shortly after the beginning of the General Assembly's legislative session, Fletcher was hospitalized with abdominal pain. Doctors at St. Joseph East hospital in Lexington found a gallstone in his common bile duct and also diagnosed him with an inflamed pancreas and gallbladder disease. After surgery to remove the gallbladder, Fletcher developed a blood infection that slowed his recovery, but was discharged from the hospital on March 1. Days later, he returned to St. Joseph's with a blood clot which had to be dissolved, resulting in another five-day stay in the hospital. Fletcher staffers insisted that his absence did not have a negative impact on his ability to get legislation passed during the session. A right-to-work law and a repeal of the state's prevailing wage law – both advocated by Fletcher – failed early in the session, but both had been considered unlikely to pass before the session started. Among the bills that did pass the session were a mandatory seat belt law, a law requiring children under 16 years old to wear a helmet when operating an all-terrain vehicle, and legislation allowing the Ten Commandments to be posted on Capitol grounds in a historical context. The Assembly passed a biennial budget, but did not allow enough time in the session to reconvene and potentially override any of Fletcher's vetoes. In an attempt to avoid "excessive debt", Fletcher used his line-item veto to trim \$370 million in projects from the budget passed by the Assembly. Although falling far short of his initial prediction of vetoing \$938 million, Fletcher used the line-item veto more than any other governor in state history. One project not vetoed by Fletcher was \$11 million for the University of the Cumberlands to build a pharmacy school. LGBT rights groups had asked Fletcher to veto the funds because the university, a private Baptist school, had expelled a student for being openly gay. One of Fletcher's priorities that was not resolved during the session was the correction of unintended tax increases on businesses that resulted from the tax reform plan passed in 2005. Fletcher called a special legislative session for mid-June so that the legislature could amend the plan and also authorize tax breaks designed to lure a proposed FutureGen power plant to Henderson. Republican Senate President David L. Williams asked Fletcher to include tax breaks for other businesses as well, but Fletcher insisted on a sparse legislative agenda. The session convened for five days and passed the tax breaks and amended tax reform plan unanimously in both houses. Fletcher applauded the legislature's efficiency. 
### Investigation concludes As the Kentucky Supreme Court prepared to hear Fletcher's appeal on whether the grand jury could continue to indict people covered by his blanket pardon, two of the court's seven justices recused themselves from the case, citing conflicts of interest. Kentucky's constitution provides that, in the case of more than one recusal on the court, the governor is to appoint special justices to replace them. Accordingly, Fletcher named two replacements, but one of those – Circuit Judge Jeffrey Burdette – declined to serve on grounds that he had contributed to Fletcher's 2003 gubernatorial campaign. Fletcher then named another special justice to replace Burdette, consistent with a precedent set by former Democratic Governor Brereton Jones. Stumbo challenged this third appointment, claiming that Burdette's refusal to serve created only one vacancy on the court, and that the case could be tried with six justices. The Kentucky Supreme Court sustained Stumbo's complaint. In a 4–2 ruling issued May 18, 2006, the Kentucky Supreme Court barred the grand jury from issuing further indictments against individuals covered by Fletcher's blanket pardon, reversing the Court of Appeals. The ruling did not affect indictments for crimes allegedly committed after the pardon was issued. The Supreme Court also held that the grand jury could issue a general report of its findings at the conclusion of its investigation, but left open the question of whether the names of unindicted individuals could appear in the report. A later decision by the Court of Appeals found that unindicted individuals could not be named in the report. Just prior to the Supreme Court's ruling, the grand jury handed down indictments against Fletcher for three misdemeanors – conspiracy, official misconduct, and political discrimination. Fletcher did not appear at his arraignment on June 9 because he was on vacation in Florida; his attorney entered "not guilty" pleas to all three charges on his behalf. On August 11, 2006, Special Judge David E. Melcher ruled that because the personnel violations were allegedly committed while Fletcher was acting in his official capacity as governor, he was protected by executive immunity and could not be prosecuted until he left office. Melcher asked that the two sides work together to reach a settlement in the case. On August 24, Fletcher and Stumbo announced such an agreement. Under the settlement, Fletcher acknowledged that evidence "strongly indicate[d] wrongdoing by his administration" but did not admit any wrongdoing personally. Fletcher also acknowledged that Stumbo's prosecution of the case "[was a] necessary and proper [exercise] of his constitutional duty" and ensured that abuses of the merit system would be ended. In addition to dropping the charges against Fletcher, Stumbo conceded that any violations by Fletcher's administration were "without malice". Four members of the state Personnel Board who were appointed by Fletcher were required to step down. Their replacements would be chosen by Fletcher from a list provided by Stumbo. The grand jury issued its report on the investigation in October 2006, and a judge ordered it released to the public on November 16. The report categorized the Fletcher administration's actions as "a widespread and coordinated plan to violate merit hiring laws." 
It charged that "This investigation was not about a few people here and there who made some mistakes as Governor Ernie Fletcher had claimed," and lamented that the blanket pardon issued by Fletcher, coupled with Fletcher taking the Fifth, made it "difficult to get to the bottom of the facts of this case....As a result, [the grand jury was] in part forced to rely on documentary evidence to piece together the facts of the case." Fletcher opined that the allegations in the report were inconsistent with his settlement with Stumbo, which acknowledged that Fletcher's administration acted "without malice." ## 2007 gubernatorial election In early 2005, Fletcher announced his intent to run for re-election. Shortly after Fletcher was indicted by the grand jury in 2006, Lieutenant Governor Pence announced that he would not be Fletcher's running mate during his re-election bid. Fletcher asked for Pence's immediate resignation as lieutenant governor. Pence declined, but did tender his resignation as head of the Justice Cabinet. Fletcher named his executive secretary, Robbie Rudolph, as his new running mate. Although Fletcher's agreement with Stumbo to end the investigation was announced in late 2006, the scandal continued to plague his re-election bid, and he drew two challengers in the Republican primary – former Third District Congresswoman Anne Northup and multi-millionaire Paducah businessman Billy Harper. Senator Mitch McConnell, the consensus leader of the Kentucky Republican Party, declined to make an endorsement in the primary, but conceded that Northup was "a formidable opponent". Northup campaigned on the idea that Fletcher's involvement in the hiring scandal had made him "unelectable". Northup secured the endorsements of Jim Bunning, Kentucky's other Republican senator, and Lieutenant Governor Pence. In the primary, Fletcher garnered over 50% of the vote and secured the party's nomination. His rival Northup struggled with name recognition and found few areas of support outside the Louisville district she represented in Congress. She garnered 36.5% of the vote, with the remaining 13.4% going to Billy Harper. Democrats nominated former Lieutenant Governor Steve Beshear to challenge Fletcher. In the midst of the primary campaign, the 2007 General Assembly convened. Among the accomplishments of the session were raising the state's minimum wage to \$7.25 per hour, increasing the speed limit on major state highways to 70 mph (110 km/h), and implementing new safety requirements for social workers and coal miners. Additional legislation stalled after negotiations over how to make the state's retirement system solvent reached an impasse. Fletcher indicated that he would consider calling the Assembly into special session later in the year. In July, Fletcher called the session and included 67 items on its agenda. Democrats in the state House of Representatives maintained that none of the items were urgent enough to warrant a special session. They claimed the call was an attempt by Fletcher to boost his sagging poll numbers against Beshear, and the House adjourned after only 90 minutes without acting on any of Fletcher's agenda. Fletcher denied the claims and insisted that a tax incentive program was needed immediately to keep the state in the running for a proposed coal gasification plant to be built by Peabody Energy. After negotiating with legislators, Fletcher called another session for August; the session included only the tax incentive program, which the Assembly passed. 
In the general election campaign, Fletcher attempted to make the expansion of casino gambling, rather than the merit system investigation, the central issue. Beshear favored holding a referendum on a constitutional amendment to allow expanded casino gambling in the state, while Fletcher maintained that expanded gambling would bring an increase in crime and societal ills. The gambling issue failed to gain as much traction as the hiring scandal, however, and Beshear defeated Fletcher by a vote of 619,686 to 435,895. After the election, Fletcher founded Alton Healthcare, a consulting firm that helps healthcare providers make efficient use of technology in their practice. He has served as CEO of the company, which is based in Cincinnati, Ohio, since 2008. ## See also - List of Delta Tau Delta members - List of University of Kentucky alumni - List of new members of the 106th United States Congress - List of members of the United States House of Representatives in the 106th Congress by seniority - List of members of the United States House of Representatives in the 107th Congress by seniority - List of members of the United States House of Representatives in the 108th Congress by seniority - List of United States representatives from Kentucky - List of former members of the United States House of Representatives (F) - List of Republican nominees for Governor of Kentucky - List of governors of Kentucky - List of Christian preachers § Preachers with secular professions - List of Christian clergy in politics § Baptist
6,885,958
Three-dollar piece
1,132,603,793
US three-dollar coin (1854–1889)
[ "1854 establishments in the United States", "1854 introductions", "Goddess of Liberty on coins", "Native Americans on coins", "United States gold coins" ]
The three-dollar piece was a gold coin produced by the United States Bureau of the Mint from 1854 to 1889. Authorized by the Act of February 21, 1853, the coin was designed by Mint Chief Engraver James B. Longacre. The obverse bears a representation of Lady Liberty wearing a headdress of a Native American princess and the reverse a wreath of corn, wheat, cotton, and tobacco. In 1851, Congress had authorized a silver three-cent piece so that postage stamps of that value could be purchased without using the widely disliked copper cents. Two years later, a bill was passed which authorized a three-dollar coin. By some accounts, the coin was created so larger quantities of stamps could be purchased. Longacre, in designing the piece, sought to make it as different as possible from the quarter eagle or \$2.50 piece, striking it on a thinner planchet and using a distinctive design. Although over 100,000 were struck in the first year, the coin saw little use. It circulated somewhat on the West Coast, where gold and silver were used to the exclusion of paper money, but what little place it had in commerce in the East was lost in the economic disruption of the Civil War, and was never regained. The piece was last struck in 1889, and Congress ended the series the following year. Although many dates were struck in small numbers, the rarest was produced at the San Francisco Mint in 1870 (1870-S); only one is known with certainty to exist. ## Inception In 1832, New York Congressman Campbell P. White sought a means of returning American gold coins to circulation—as gold was undervalued with respect to silver by the government, gold coins had been routinely exported since the start of the 19th century. White's solution was to have the silver dollar and gold eagle struck at full value, but to have smaller gold and silver coins, including a \$3 piece, which contained less than their face value in metal. Although Congress, in passing the Coinage Act of 1834, made adjustments to the ratio between gold and silver, it did not authorize a \$3 coin at that time. The Act of March 3, 1845 authorized the first United States postage stamps and set the rate for local prepaid letters at five cents. In the years following, this rate was seen as too high and an impediment to commerce. Accordingly, Congress on March 3, 1851 authorized both a three-cent stamp and a three-cent silver coin. Kentucky Representative Richard Henry Stanton believed that the need to make change from a silver half dime with large copper cents might defeat the new scheme, writing to Mint Director Robert M. Patterson that "reduced postage [rates] depended on a three-cent coin for use in those states where copper does not circulate." According to numismatic historian Walter Breen, "the main purpose of the new 3¢ piece would be to buy postage stamps without using the unpopular, heavy, and often filthy copper cents." By 1853, the price of silver had risen with respect to gold. This was due to large discoveries of gold, especially in California, and silver was heavily exported. To correct this situation, Secretary of the Treasury Thomas Corwin advocated reducing the precious-metal content of most silver coins to prevent their export. The opposition to the bill was led by Tennessee Representative Andrew Johnson, who believed that Congress had no authority to alter the gold/silver price ratio and, if it did, it should not exercise it. Nevertheless, Congress passed the bill, which became law on February 21, 1853.
That bill also authorized a three-dollar gold coin; according to numismatic writer Don Taxay, provision for it had been inserted at the behest of gold interests. According to Breen, Congress believed the new coin "would be convenient for exchange for rolls or small bags of silver 3¢ pieces, and for buying sheets of 3¢ stamps—always bypassing use of copper cents". In 1889, then-Mint Director James P. Kimball wrote that "it is supposed that the three-dollar piece was designed to be a multiple of the three-cent piece, for the convenience of postal transactions". Numismatist Walter Hagans in his 2003 article on the three-dollar coin notes and dismisses the postal explanation, writing "the actual reason for the gold \$3 coin was the abundant supply of gold discovered in California." Coin dealer and author Q. David Bowers notes that "whether or not the \$3 denomination was actually necessary or worthwhile has been a matter of debate among numismatists for well over a century." ## Preparation and design Much of what is known of the design process for the three-dollar piece is from an August 21, 1858, letter from the Mint's chief engraver, James B. Longacre, the coin's designer, to the then-Mint director, James Ross Snowden. This letter is apparently in response to some criticism, and in it, Longacre discussed his views on coin design, especially regarding the three-dollar piece. He noted that he was initially perplexed as to what to put on the coin; the three-dollar piece was the first time he had been allowed to choose a design. Although he had designed the three-cent piece and other issues before Snowden's directorship, he had been told what to put on those pieces. The coin weighed 5.015 grams (77.4 grains) and had a fineness of .900. Longacre noted that although those in charge of coinage design had usually dictated adaptations of Roman or Greek art, for the three-dollar coin, he was minded to create something truly American: > Why should we in seeking a type for the illustration or symbol of a nation that need not hold itself lower than the Roman virtue or the Science of Greece prefer the barbaric period of a remote and distant people, from which to draw an emblem of nationality: to the aboriginal period of our own land: especially when the latter presents us with a characteristic distinction not less interesting, and more peculiar than that which still casts its chain over the civilized portion of the older continent? Why not be American from the spring-head within our own domain? ... From the copper shores of Lake Superior to the silver mountains of Potosi, from the Ojibwa to the Araucanian, the feathered tiara is a characteristic of the primitiveness of our hemisphere as the turban is of the Asiatic. Representations of America as a female Native American, or Indian princess, dated back to the 16th century; cartographers would place a native woman, often wearing a feathered headdress, upon their version of the North American continent. This evolved into an image of an Indian queen, then an Indian princess, and although Columbia eventually came to be the favored female embodiment of the United States, the image of the Indian princess survives in the popular view of such figures as Pocahontas and Sacagawea. Some sources suggest that Longacre may have based the features of Liberty on those of his daughter, Sarah.
This story would be more often associated with Longacre's Indian Head cent, but the features of Liberty on both coins (and also the Type I gold dollar, the double eagle, and the three-cent nickel piece) are nearly identical. To help distinguish the new coin from the quarter eagle or \$2.50 piece, Longacre used a thinner planchet, or blank, to make the piece greater in diameter. He also flattened the planchet of the gold dollar to enlarge it, and gave it the same Indian princess design. The reverse originates Longacre's "agricultural wreath" of corn, tobacco, cotton, and wheat which would also appear on the gold dollar, Flying Eagle cent, and his revised reverse for the Seated Liberty dime and half dime. This blended the produce of the South and North at a time of intersectional tension. Numismatist Walter Hagans deems the wreathed reverse "as uniquely American as is the Indian maiden on the obverse." Art historian Cornelius Vermeule stated that "the one area in which Longacre gave free rein to his imagination was in the matter of fancy headdress for his renderings of Liberty. His caps of feathers, his bonnets of freedom, and his starry diadems are a joy to behold." Nevertheless, Vermeule disliked the figure on the obverse, "the princess of the gold coins is a banknote engraver's elegant version of folk art of the 1850s. The plumes or feathers are more like the crest of the Prince of Wales than anything that saw the Western frontier, save perhaps on a music hall beauty." At the time of the authorization of the three-dollar piece, the Whig administration of Millard Fillmore was still in office, but two weeks later, Fillmore was succeeded by Democrat Franklin Pierce, and Mint Director George N. Eckert yielded his place to Thomas M. Pettit. Longacre submitted two designs to Pettit, and before the latter died on May 31, 1853, he selected one of the two; relief models quickly followed the approval. Longacre's models, both for the obverse and reverse, did not have lettering on them, as the legends and numbers were to be punched once a reduction was made. This allowed them to be used multiple times for different denominations. As Longacre was busy with the reduced-weight silver coins ordered by Congress in the same act that had authorized the three-dollar piece, work on dies did not begin until 1854. ## Production The first three-dollar pieces were 15 proof coins, delivered to the Secretary of the Treasury, James Guthrie, by Mint Director Snowden on April 28, 1854, most likely for distribution to legislators. The Philadelphia Mint then began the largest production of three-dollar pieces the denomination would ever see. Chief Coiner Franklin Peale delivered 23,140 pieces on May 8 and 29,181 pieces four days later. However, after June 8, there was only one further delivery by Peale, on November 10, when the last 22,740 of the mintage of 138,618 were delivered. In addition to the strikings at Philadelphia, there was branch mint production, with 24,000 pieces struck at the New Orleans Mint (1854-O) and 1,120 at Dahlonega (1854-D). A pair of dies was sent from Philadelphia to the Charlotte Mint on June 1, but they were not used. A pair was sent to Dahlonega the same day, arriving on June 10, with gauges and other necessary equipment following on July 15. The coinage of the 1854-D took place in August; the piece is today a rarity as few were put aside and it was not until decades later that mintmarked coins were saved as distinct varieties. 
Dies were sent to New Orleans in 1855, 1856, 1859, and 1861, only to remain unused; no further strikings took place at any of the three southern branch mints. Beginning in 1855, the letters of the word "Dollars" were enlarged, following complaints from the public. The same year, coinage began at the San Francisco Mint, where 6,600 were struck as opposed to 50,555 at Philadelphia. Mintages at Philadelphia declined for the remainder of the decade, to 7,036 by 1860; pieces were also struck at San Francisco in 1856, 1857, and 1860. In 1859, early numismatic writer Montroville W. Dickeson wrote of the three-dollar piece, "it is very unpopular, being frequently mistaken for a quarter eagle, and often counted as a five-dollar piece. It is exceedingly annoying to that portion of the human family whose vision is dependent on artificial aid, and we think its retirement would meet with public approbation." Perhaps a dozen contemporary numismatists collected three-dollar pieces; those who were serious ordered proof coins from the Mint. Coins in this condition became easier to obtain from Philadelphia as officials responded to the rise in interest in coin collecting which followed the introduction of the Flying Eagle cent in 1857. The coins saw some circulation in the East and Midwest, at least until 1861, when the economic turmoil caused by the American Civil War caused gold and silver to vanish from commerce there. With gold being hoarded, in December 1861, banks, and subsequently the Treasury, ceased to pay out gold at face value. The three-dollar piece would never return to circulation in the eastern part of the country. On the West Coast, where gold and silver remained in use, the coin continued in commerce, and might be occasionally encountered. The San Francisco Mint issues were most commonly seen there. Despite the failure to circulate, three-dollar pieces continued to be struck at Philadelphia as it was the policy of Mint Director James Pollock that each denomination should be struck every year, whether it circulated or not. Some Philadelphia Mint pieces migrated west in payment for transactions, as only gold and silver was acceptable money on the West Coast. Until the resumption of specie payments at the end of 1878, gold pieces were only available from the Philadelphia Mint by paying a premium in banknotes. Pieces not sold were stored there. In 1870, a set of dies for the three-dollar piece was sent from the Bureau of the Mint's Engraving Department at the Philadelphia Mint to San Francisco. On May 14, 1870, Oscar Hugh La Grange, superintendent of the San Francisco Mint, sent a telegram to Mint Director Pollock, informing him that dies for the one- and three-dollar pieces had been received, but lacked the customary "S" mint mark, and asking for guidance. The dies were, per Pollock's instructions, returned to Philadelphia, but LaGrange informed Pollock that to secure a three-dollar piece to place in the cornerstone of the new San Francisco Mint building, Coiner J.B. Harmistead had engraved an "S" on the reverse die. It is not certain what became of the piece to be placed in the cornerstone, but Harmistead also struck a piece for himself, which was mounted as jewelry at one time, and the existence of which was not known until 1907. The only unique regular-issue U.S. gold piece by date and mint mark, it last came on the market in 1982, when it sold for \$687,500. Today it forms part of the Harry W. Bass, Jr. 
Collection in the Money Museum of the American Numismatic Association in Colorado Springs. No other three-dollar pieces were struck at San Francisco in 1870; dies were sent there most years between 1861 and 1873, but, with the exception of 1870, were not used. On January 18, 1873, Philadelphia Mint Chief Coiner Archibald Loudon Snowden complained that the "3" in the date, as struck by the Mint, too closely resembled an "8", especially on the smaller-sized denominations. In response, Pollock ordered Chief Engraver William Barber to re-engrave the date, opening the arms of the "3" wider on most denominations, including the three-dollar piece. Both the Closed 3 and Open 3 varieties are extremely rare, though the official mintage of 25 pieces for 1873 is understated, since more specimens than that are known to exist. In 1875 and 1876, no pieces were struck for circulation, with only pieces in proof condition being made available to collectors. The official mintage is 20 for 1875 and 45 for 1876, though an unknown number of pieces may have been later illicitly restruck for each date. Numismatic writer R.W. Julian believes that there were no later restrikes, but as proof pieces were not counted until sold, employees substituted common-date pieces when unsold coins were to be melted. These pieces had been made available to the public only as part of a proof set of all gold denominations, at a price of \$43 (a premium of \$1.50 over face value). Julian suggests that the relatively large mintages of almost 42,000 in 1874 and some 82,000 in 1878 were struck in anticipation of the resumption of specie payments, but when this finally occurred at the end of 1878, "there was a loud yawn from the public and the Mint kept most of the pieces on hand, paying them out slowly as stocking stuffers. ## Final years and termination In the 1880s, despite the return of gold to commerce nationwide with the resumption of specie payments at the end of 1878, few three-dollar pieces were coined. There was a small speculative boom by the public in putting aside three-dollar pieces; nevertheless, thousands remained at the Philadelphia Mint. Few were sent to banks; the coins sold for a small premium when banks had some or when they were purchased from exchange brokers. The coins' main use was as gifts, or in jewelry. The pieces were struck only at Philadelphia after the 1870-S rarity, and early numismatist S.H. Chapman noted of the 1879 through 1889 issues, "of the later years of the \$3, large numbers were remelted at the Philadelphia Mint." The Mint apparently favored certain Philadelphia dealers in the distribution of the gold dollar, but the three-dollar piece could be obtained without a premium at the cashier's window of the Philadelphia Mint. Large numbers of the 1879 three-dollar piece (mintage 3,000 for circulation), 1880 (1,000), and 1881 (500) were hoarded by early coin collector and dealer Thomas L. Elder, who asked bank tellers to look out for them. Elder could not have obtained them directly from the Mint at the time of issue as he was still a child in 1880, and did not begin collecting coins until 1887. With the rise of collecting interest in the three-dollar piece in the 1880s, unscrupulous employees at the Philadelphia Mint enriched themselves by illicit striking of earlier-date pieces, including the 1873, 1875, and 1876. Bowers, in his sylloge of the Bass Collection, particularly blames these irregularities on Oliver Bosbyshell, chief coiner at Philadelphia from 1876 to 1885. 
During that period, quantities of pattern coins, restrikes, and pieces struck in different metals flowed to well-connected collectors and dealers, and Bosbyshell sold a large personal collection of such pieces shortly after leaving office as chief coiner. Although Bosbyshell returned as Philadelphia Mint superintendent from 1889 to 1894, he does not appear to have resumed his illicit activities. The relatively large mintage of about 6,000 in 1887 was due to a fad sweeping the country whereby men would present their lady friends with a coin with one side ground off and replaced by the woman's initials. Many wealthy suitors preferred to use a gold coin for this presentation. A larger-than-usual number of proof pieces were struck in 1888 and held by the Mint in anticipation of future trades with collectors for items which the Mint desired for its coin collection. The 1888 piece is the most common proof coin in the series, with an official mintage of 200 pieces. In 1889, Mint Director James P. Kimball sent a letter to the House of Representatives Committee on Coinage, Weights, and Measures urging the abolition of the three-dollar piece. Kimball wrote, "this is a denomination which serves no useful purpose, its present coinage being in fact limited to its production for cabinet [coin collecting] purposes. The value of over \$153,000 in three-dollar pieces still on hand at the Mint at Philadelphia can not be disposed of, owing to the unpopularity of this coin as a circulating medium." The gold dollar and three-dollar piece were not coined after 1889, and were abolished by Congress on September 26, 1890. In the 1890s, 49,087 three-dollar pieces were melted as obsolete at the Philadelphia Mint. Although no list was kept by years, Bowers suggests that many of the pieces were dated 1874 or 1878 (both years with relatively high mintages), or were from the final years of the series. In the 1890s, they typically commanded a premium of 25 or 50 cents at exchange brokers. In the 1920s, three-dollar pieces sold at a premium when other denominations of gold coinage remained at face value. The 2014 edition of R.S. Yeoman's A Guide Book of United States Coins lists the 1854 as the cheapest three-dollar piece in the lowest listed condition (Very Fine, or VF-20) at \$825. An 1855-S in proof is the record holder in sales price for the denomination, selling at auction in 2011 for \$1,322,500. In 1934, Mint Director Nellie Tayloe Ross wrote in her annual report that a total of 539,792 three-dollar pieces had been coined, of which 452,572 were struck at Philadelphia, 62,350 at San Francisco (not including the 1870-S), 24,000 at New Orleans, and 1,120 at Dahlonega. According to Breen, three-dollar pieces "represent relics of an interesting but abortive experiment; today they are among the most highly coveted of American gold coins". New York coin dealer Norman Stack stated in the 1950s, "All are rare. There is no such thing as a common three-dollar gold piece."
1,070,016
Unification of Germany
1,172,547,810
1866–1871 unification of most German states into the German Reich
[ "1860s in Germany", "1870s in Germany", "19th century in Germany", "19th century in politics", "Conflicts in 1866", "Conflicts in 1871", "Modern history of Germany", "National unifications", "Pan-Germanism" ]
The unification of Germany (German: Deutsche Einigung) was a process of building the first nation-state for Germans with federal features, based on the concept of Lesser Germany (one without the Habsburgs' multi-ethnic Austria or its German-speaking part). It commenced on 18 August 1866 with the adoption of the North German Confederation Treaty establishing the North German Confederation, initially a military alliance de facto dominated by Prussia, which was subsequently deepened through the adoption of the North German Constitution. The process symbolically concluded when most of the south German states joined the North German Confederation with the ceremonial proclamation of the German Empire, i.e. the German Reich, having 25 member states and led by the Hohenzollern Kingdom of Prussia, on 18 January 1871; the event was later celebrated as the customary date of the German Empire's foundation, although the legally meaningful events relevant to the accomplishment of unification occurred on 1 January 1871 (accession of the South German states and constitutional adoption of the name German Empire) and 4 May 1871 (entry into force of the permanent Constitution of the German Empire). Despite the legal, administrative, and political disruption caused by the dissolution of the Holy Roman Empire in 1806, the German-speaking people of the old Empire had a common linguistic, cultural, and legal tradition. European liberalism offered an intellectual basis for unification by challenging dynastic and absolutist models of social and political organization; its German manifestation emphasized the importance of tradition, education, and linguistic unity. Economically, the creation of the Prussian Zollverein (customs union) in 1818, and its subsequent expansion to include other states of the Austrian-led German Confederation, reduced competition between and within states. Emerging modes of transportation facilitated business and recreational travel, leading to contact and sometimes conflict between and among German-speakers from throughout Central Europe. The model of diplomatic spheres of influence resulting from the Congress of Vienna in 1814–1815 after the Napoleonic Wars endorsed Austrian dominance in Central Europe through Habsburg leadership of the German Confederation, designed to replace the Holy Roman Empire. The negotiators at Vienna took no account of Prussia's growing strength within the German lands and declined to create a second coalition of the German states under Prussia's influence, and so failed to foresee that Prussia would rise to challenge Austria for leadership of the German peoples. This German dualism presented two solutions to the problem of unification: the Kleindeutsche Lösung, the small Germany solution (Germany without Austria), or the Großdeutsche Lösung, the greater Germany solution (Germany with Austria or its German-speaking part); the question was ultimately settled in favor of the former in the Peace of Prague. Historians debate whether Otto von Bismarck—Minister President of Prussia—had a master plan to expand the North German Confederation of 1866 to include the remaining independent German states in a single entity, or simply sought to expand the power of the Kingdom of Prussia. They conclude that factors in addition to the strength of Bismarck's Realpolitik led a collection of early modern polities to reorganize political, economic, military, and diplomatic relationships in the 19th century. Reaction to Danish and French nationalism provided focal points for expressions of German unity. 
Military successes—especially those of Prussia—in three regional wars generated enthusiasm and pride that politicians could harness to promote unification. This experience echoed the memory of mutual accomplishment in the Napoleonic Wars, particularly in the War of Liberation of 1813–1814. By establishing a Germany without multi-ethnic Austria (by then part of Austria-Hungary) or its German-speaking part, the political and administrative unification of 1871 at least temporarily solved the problem of dualism. Despite undergoing, in later years, several further changes of name and borders, overhauls of its constitutional system, periods of limited sovereignty and interrupted unity of its territory or government, and despite the dissolution of its dominant founding federated state, the polity resulting from the unification process survives today as the Federal Republic of Germany. ## Early history Germans emerged in medieval times among the descendants of the Romanized Germanic peoples in the area of modern western Germany, between the Rhine and Elbe rivers, particularly the Franks, Frisians, Saxons, Thuringii, Alemanni, and Baiuvarii. The region was divided into long-lasting divisions, or "stem duchies", based upon these ethnic designations, under the dominance of the western Franks starting with Clovis I, who established control of the Romanized and Frankish population of Gaul in the 5th century and began a new process of conquering the peoples east of the Rhine. In subsequent centuries the power of the Franks grew considerably. By the early 9th century AD, large parts of Europe had been united under the rule of the Frankish leader Charlemagne, who expanded the Frankish Empire (Francia) in several directions, including east of the Rhine, where he conquered the Saxons and Frisians. A confederated realm of German princedoms, along with some adjacent lands, had been in existence for over a thousand years, dating to the Treaty of Verdun of 843, which created East Francia out of the eastern part of the Frankish Empire east of the Rhine, and especially to 919, when the Ottonian dynasty took power in East Francia. From 962 this realm formed the core of the Holy Roman Empire, which at times included more than 1,000 entities and which, from the Diet of Cologne in 1512, was called the "Holy Roman Empire of the German Nation" (the new title was adopted partly because the Empire had lost most of its territories in Italy and Burgundy to the south and west by the late 15th century, but also to emphasize the new importance of the German Imperial Estates in ruling the Empire as a result of the Imperial Reform). The states of the Holy Roman Empire ranged in size from the small and complex territories of the princely Hohenlohe family branches to sizable, well-defined territories such as the Electorate of Bavaria, the Margraviate of Brandenburg or the Kingdom of Bohemia. Their governance varied: they included free imperial cities, also of different sizes, such as the powerful Augsburg and the minuscule Weil der Stadt; ecclesiastical territories, also of varying sizes and influence, such as the wealthy Abbey of Reichenau and the powerful Archbishopric of Cologne; and dynastic states such as Württemberg. Among the German-speaking states, the Holy Roman Empire's administrative and legal mechanisms provided a venue to resolve disputes between peasants and landlords, between jurisdictions, and within jurisdictions. 
Through the organization of imperial circles (Reichskreise), groups of states consolidated resources and promoted regional and organizational interests, including economic cooperation and military protection. ## Early modern era and Eighteenth century Since the 15th century, with few exceptions, the Empire's Prince-electors had chosen successive heads of the House of Habsburg from the Duchy of Austria to hold the title of Holy Roman Emperor. Although the Habsburgs initially sought to restore central Imperial power, a weak and fragmented Empire suited France and Sweden, whose intervention in the Thirty Years' War led to the Peace of Westphalia. That settlement effectively thwarted for centuries any serious attempt to reinforce central imperial authority and entrenched fragmentation: on the eve of the Napoleonic Wars the German-speaking territories still comprised more than 300 political entities, most of them parts of the Holy Roman Empire. Portions of the extensive Habsburg Monarchy (exclusively its large non-German-speaking territories: the Lands of the Crown of Saint Stephen and the Austrian partition of the Polish-Lithuanian Commonwealth) and of the Hohenzollern Kingdom of Prussia (both the German-speaking former Duchy of Prussia and the entire non-German-speaking territory of the Prussian partition of the Polish-Lithuanian Commonwealth), as well as the German-speaking Swiss cantons, lay outside the Imperial borders. This fragmentation became known as the practice of Kleinstaaterei ("small-statery"). As a further consequence, there was no German national identity in development as late as 1800, mainly due to the highly autonomous or semi-independent nature of the princely states; most inhabitants of the Holy Roman Empire, outside of those ruled by the emperor directly, identified themselves mainly with their prince rather than with the Empire or the nation as a whole. However, by the 19th century, transportation and communications improvements started to bring these regions closer together. ## Dissolution of the Old Empire under Napoleon The War of the Second Coalition (1798–1802) ended with Napoleon Bonaparte's France defeating the forces of the (by then largely ceremonial) Holy Roman Empire and its allies. The Treaty of Lunéville (1801) and the Mediatization of 1803 secularized the ecclesiastical principalities and abolished most free imperial cities, and these territories, along with their inhabitants, were absorbed by dynastic states. This transfer particularly enlarged the territories of Württemberg and Baden. After defeating Austria and Russia in the War of the Third Coalition, Napoleon dictated the Treaty of Pressburg (1805), which fatally weakened the old imperial order. In July 1806 he established a German client state of France known as the Confederation of the Rhine, which, inter alia, provided for the mediatization of over a hundred petty princes and counts and the absorption of their territories, as well as those of hundreds of imperial knights, by the Confederation's member states; several states were promoted to kingdoms in this period, such as Bavaria, Württemberg, and Saxony. Prussia, which remained outside the Confederation of the Rhine, was invaded and decisively defeated at the joint battles of Jena-Auerstedt in October 1806, during the War of the Fourth Coalition. Already in August 1806, following the formal secession from the Empire of the majority of its constituent states, Emperor Francis II had dissolved the Holy Roman Empire. 
In his abdication, Francis released all former estates from their duties and obligations to him, and took upon himself solely the title of Emperor of Austria, which he had held since 1804. ## Rise of German nationalism under Napoleon Under the hegemony of the French Empire (1804–1814), popular German nationalism thrived in the reorganized German states. Due in part to the shared experience, albeit under French dominance, various justifications emerged to identify "Germany" as a potential future single state. For the German philosopher Johann Gottlieb Fichte, > The first, original, and truly natural boundaries of states are beyond doubt their internal boundaries. Those who speak the same language are joined to each other by a multitude of invisible bonds by nature herself, long before any human art begins; they understand each other and have the power of continuing to make themselves understood more and more clearly; they belong together and are by nature one and an inseparable whole. A common language may have been seen to serve as the basis of a nation, but as contemporary historians of 19th-century Germany noted, it took more than linguistic similarity to unify these several hundred polities. The experience of German-speaking Central Europe during the years of French hegemony contributed to a sense of common cause to remove the French invaders and reassert control over their own lands. Napoleon's campaigns in Poland (1806–07), which resulted in his decision to re-establish a form of Polish statehood (the Duchy of Warsaw) in part at the cost of Polish territories conquered by Prussia, as well as his campaigns on the Iberian Peninsula and in western Germany, and his disastrous invasion of Russia in 1812, disillusioned many Germans, princes and peasants alike. Napoleon's Continental System nearly ruined the Central European economy. The invasion of Russia included nearly 125,000 troops from German lands, and the loss of that army encouraged many Germans, both high- and low-born, to envision a Central Europe free of Napoleon's influence. The creation of student militias such as the Lützow Free Corps exemplified this tendency. The debacle in Russia loosened the French grip on the German princes. In 1813, Napoleon mounted a campaign in the German states to bring them back into the French orbit; the subsequent War of Liberation culminated in the great Battle of Leipzig, also known as the Battle of Nations. In October 1813, more than 500,000 combatants engaged in ferocious fighting over three days, making it the largest European land battle of the 19th century. The engagement resulted in a decisive victory for the Coalition of Austria, Prussia, Russia, Saxony, and Sweden. As a result, the Confederation of the Rhine collapsed and the French period came to an end. Success encouraged the Coalition forces to pursue Napoleon across the Rhine; his army and his government collapsed, and the victorious Coalition incarcerated Napoleon on Elba. During the brief Napoleonic restoration known as the 100 Days of 1815, forces of the Seventh Coalition, including an Anglo-Allied army under the command of the Duke of Wellington and a Prussian army under the command of Gebhard von Blücher, were victorious at Waterloo (18 June 1815). The critical role played by Blücher's troops, especially after having to retreat from the field at Ligny two days before, helped to turn the tide of combat against the French. The Prussian cavalry pursued the defeated French on the evening of 18 June, sealing the allied victory. 
From the German perspective, the actions of Blücher's troops at Waterloo, and the combined efforts at Leipzig, offered a rallying point of pride and enthusiasm. This interpretation became a key building block of the Borussian myth expounded by the pro-Prussian nationalist historians later in the 19th century. ## Congress of Vienna and the rise of German dualism After Napoleon's defeat, the Congress of Vienna established a new European political-diplomatic system based on the balance of power. This system reorganized Europe into spheres of influence, which, in some cases, suppressed the aspirations of the various nationalities, including the Germans and Italians. Generally, an enlarged Prussia and the 38 other states consolidated from the mediatized territories of 1803 were confederated within the Austrian Empire's sphere of influence. The Congress established a loose German Confederation (1815–1866), headed by Austria, with a "Federal Diet" (called the Bundestag or Bundesversammlung, an assembly of appointed leaders) that met in the city of Frankfurt am Main. Its borders resembled those of its predecessor, the Holy Roman Empire (though with some deviations: Prussian territory within the Confederation was extended to include the formerly Polish territories of the Lauenburg and Bütow Land and the former Starostwo of Draheim, while the Austrian part was extended in the years 1818–1850 to include the formerly Polish Duchy of Oświęcim and Duchy of Zator), meaning that large portions of both Prussia and Austria were left outside the Confederation. In recognition of the imperial position traditionally held by the Habsburgs, the emperors of Austria became the titular presidents of this parliament. Despite the nomenclature of Diet (Assembly or Parliament), this institution should in no way be construed as a broadly, or popularly, elected group of representatives. Many of the states did not have constitutions, and those that did, such as the Grand Duchy of Baden, based suffrage on strict property requirements which effectively limited it to a small portion of the male population. ### Problems of reorganization Problematically, the built-in Austrian dominance failed to take into account Prussia's 18th-century emergence in Imperial politics. This impractical solution did not reflect the new status of Prussia in the overall scheme. Although the Prussian army had been dramatically defeated in the 1806 Battle of Jena-Auerstedt, it had made a spectacular comeback at Waterloo. Consequently, Prussian leaders expected to play a pivotal role in German politics. Ever since the Prince-Elector of Brandenburg had made himself King in Prussia at the beginning of the 18th century, the Hohenzollern domains had steadily increased through inheritance and war. Prussia's consolidated strength had become especially apparent during the Partitions of Poland, the War of the Austrian Succession and the Seven Years' War under Frederick the Great. As Maria Theresa and Joseph II tried to restore Habsburg hegemony in the Holy Roman Empire, Frederick countered with the creation of the Fürstenbund (Union of Princes) in 1785. Austrian-Prussian dualism lay firmly rooted in old Imperial politics. Those balance-of-power maneuvers were epitomized by the War of the Bavarian Succession, known as the "Potato War" among common folk. Even after the end of the Holy Roman Empire, this competition influenced the growth and development of nationalist movements in the 19th century. 
## Prelude ### Vormärz The period of Austrian and Prussian police-states and vast censorship between the Congress of Vienna and the Revolutions of 1848 in Germany later became widely known as the Vormärz ("before March"), referring to March 1848. During this period, European liberalism gained momentum; its agenda included economic, social, and political issues. Most European liberals in the Vormärz sought unification under nationalist principles, promoted the transition to capitalism, and sought the expansion of male suffrage, among other goals. Their "radicalness" depended upon where they stood on the spectrum of male suffrage: the wider the definition of suffrage, the more radical. The surge of German nationalism, stimulated by the experience of Germans in the Napoleonic period and initially allied with liberalism, shifted political, social, and cultural relationships within the German states. Furthermore, implicit and sometimes explicit promises made during the German Campaign of 1813 engendered an expectation of popular sovereignty and widespread participation in the political process, promises that largely went unfulfilled once peace had been achieved. #### Emergence of liberal nationalism and conservative response Despite considerable conservative reaction, ideas of unity joined with notions of popular sovereignty in German-speaking lands. The Burschenschaft student organizations and popular demonstrations, such as those held at Wartburg Castle in October 1817, contributed to a growing sense of unity among German speakers of Central Europe. At the Wartburg Festival in 1817 the first real student movements took shape, as fraternities and student organizations emerged; the colors black, red and gold became their symbol. Agitation by student organizations led such conservative leaders as Klemens Wenzel, Prince von Metternich, to fear the rise of national sentiment. The assassination of German dramatist August von Kotzebue in March 1819 by a radical student seeking unification was followed on 20 September 1819 by the proclamation of the Carlsbad Decrees, which hampered intellectual leadership of the nationalist movement. Metternich was able to harness conservative outrage at the assassination to consolidate legislation that would further limit the press and constrain the rising liberal and nationalist movements. Consequently, these decrees drove the Burschenschaften underground, restricted the publication of nationalist materials, expanded censorship of the press and private correspondence, and limited academic speech by prohibiting university professors from encouraging nationalist discussion. The decrees were the subject of Johann Joseph von Görres's pamphlet Teutschland [archaic: Deutschland] und die Revolution (Germany and the Revolution) (1820), in which he concluded that it was both impossible and undesirable to repress the free utterance of public opinion by reactionary measures. The Hambach Festival (Hambacher Fest) in May 1832 was attended by a crowd of more than 30,000. Promoted as a county fair, the event saw its participants celebrate fraternity, liberty, and national unity. Celebrants gathered in the town below and marched to the ruins of Hambach Castle on the heights above the small town of Hambach, in the Palatinate province of Bavaria. 
Carrying flags, beating drums, and singing, the participants took the better part of the morning and midday to arrive at the castle grounds, where they listened to speeches by nationalist orators from across the conservative to radical political spectrum. The overall content of the speeches suggested a fundamental difference between the German nationalism of the 1830s and the French nationalism of the July Revolution: the focus of German nationalism lay in the education of the people; once the populace was educated as to what was needed, they would accomplish it. The Hambach rhetoric emphasized the overall peaceable nature of German nationalism: the point was not to build barricades, a very "French" form of nationalism, but to build emotional bridges between groups. As he had done in 1819, after the Kotzebue assassination, Metternich used the popular demonstration at Hambach to push conservative social policy. The "Six Articles" of 28 June 1832 primarily reaffirmed the principle of monarchical authority. On 5 July, the Frankfurt Diet voted for an additional ten articles, which reiterated existing rules on censorship, restricted political organizations, and limited other public activity. Furthermore, the member states agreed to send military assistance to any government threatened by unrest. Prince Wrede led half of the Bavarian army to the Palatinate to "subdue" the province. Several hapless Hambach speakers were arrested, tried and imprisoned; one, Karl Heinrich Brüggemann (1810–1887), a law student and representative of the secretive Burschenschaft, was sent to Prussia, where he was first condemned to death, but later pardoned. Crucially, both the Wartburg rally in 1817 and the Hambach Festival in 1832 had lacked any clear-cut program of unification. At Hambach, the positions of the many speakers illustrated their disparate agendas. Held together only by the idea of unification, their notions of how to achieve this did not include specific plans but instead rested on the nebulous idea that the Volk (the people), if properly educated, would bring about unification on their own. Grand speeches, flags, exuberant students, and picnic lunches did not translate into a new political, bureaucratic, or administrative apparatus. While many spoke about the need for a constitution, no such document appeared from the discussions. In 1848, nationalists sought to remedy that problem. #### Economy and the customs union Several other factors complicated the rise of nationalism in the German states. The man-made factors included political rivalries between members of the German Confederation, particularly between the Austrians and the Prussians, and socio-economic competition between the commercial and merchant interests and the old land-owning and aristocratic interests. Natural factors included widespread drought in the early 1830s, and again in the 1840s, and a food crisis in the 1840s. Further complications emerged as a result of a shift in industrialization and manufacturing; as people sought jobs, they left their villages and small towns to work during the week in the cities, returning for a day and a half on weekends. The economic, social and cultural dislocation of ordinary people, the economic hardship of an economy in transition, and the pressures of meteorological disasters all contributed to growing problems in Central Europe. 
The failure of most of the governments to deal with the food crisis of the mid-1840s, caused by the potato blight (related to the Great Irish Famine) and several seasons of bad weather, encouraged many to think that the rich and powerful had no interest in their problems. Those in authority were concerned about the growing unrest, political and social agitation among the working classes, and the disaffection of the intelligentsia. No amount of censorship, fines, imprisonment, or banishment, it seemed, could stem the criticism. Furthermore, it was becoming increasingly clear that both Austria and Prussia wanted to be the leaders in any resulting unification; each would inhibit the drive of the other to achieve unification. Formation of the Zollverein, an institution key to unifying the German states economically, helped to create a larger sense of economic unification. Initially conceived by the Prussian Finance Minister Hans, Count von Bülow, as a Prussian customs union in 1818, the Zollverein linked the many Prussian and Hohenzollern territories. Over the ensuing thirty years (and more) other German states joined. The Union helped to reduce protectionist barriers between the German states, especially improving the transport of raw materials and finished goods, making it both easier to move goods across territorial borders and less costly to buy, transport, and sell raw materials. This was particularly important for the emerging industrial centers, most of which were located in the Prussian regions of the Rhineland, the Saar, and the Ruhr valleys. States more distant from the coast joined the Customs Union earlier. Not being a member mattered more for the states of south Germany, since the external tariff of the Customs Union prevented customs-free access to the coast (which gave access to international markets). Thus, by 1836, all states to the south of Prussia had joined the Customs Union, except Austria. In contrast, the coastal states already had barrier-free access to international trade and did not want consumers and producers burdened with the import duties they would pay if they were within the Zollverein customs border. Hanover on the north coast formed its own customs union – the "Tax Union" or Steuerverein – in 1834 with Brunswick and with Oldenburg in 1836. The external tariffs on finished goods and overseas raw materials were below the rates of the Zollverein. Brunswick joined the Zollverein Customs Union in 1842, while Hanover and Oldenburg finally joined in 1854. After the Austro-Prussian War of 1866, Schleswig, Holstein and Lauenburg were annexed by Prussia and thereby brought into the Customs Union, while the two Mecklenburg states and the city-states of Hamburg and Bremen joined late because they were reliant on international trade. The Mecklenburgs joined in 1867, while Bremen and Hamburg joined in 1888. #### Roads and railways By the early 19th century, German roads had deteriorated to an appalling extent. Travelers, both foreign and local, complained bitterly about the state of the Heerstraßen, the military roads previously maintained for the ease of moving troops. As the German states ceased to be a military crossroads, however, the roads improved; the length of hard-surfaced roads in Prussia increased from 3,800 kilometers (2,400 mi) in 1816 to 16,600 kilometers (10,300 mi) in 1852, helped in part by the invention of macadam. By 1835, Heinrich von Gagern wrote that roads were the "veins and arteries of the body politic..." 
and predicted that they would promote freedom, independence and prosperity. As people moved around, they came into contact with others, on trains, at hotels, in restaurants, and, for some, at fashionable resorts such as the spa in Baden-Baden. Water transportation also improved. The blockades on the Rhine had been removed by Napoleon's orders, but by the 1820s, steam engines freed riverboats from the cumbersome system of men and animals that towed them upstream. By 1846, 180 steamers plied German rivers and Lake Constance, and a network of canals extended from the Danube, the Weser, and the Elbe rivers. As important as these improvements were, they could not compete with the impact of the railway. German economist Friedrich List called the railways and the Customs Union "Siamese Twins", emphasizing their important relationship to one another. He was not alone: the poet August Heinrich Hoffmann von Fallersleben wrote a poem in which he extolled the virtues of the Zollverein, which he began with a list of commodities that had contributed more to German unity than politics or diplomacy. Historians of the German Empire later regarded the railways as the first indicator of a unified state; the patriotic novelist Wilhelm Raabe wrote: "The German empire was founded with the construction of the first railway..." Not everyone greeted the iron monster with enthusiasm. The Prussian king Frederick William III saw no advantage in traveling from Berlin to Potsdam a few hours faster, and Metternich refused to ride in one at all. Others wondered if the railways were an "evil" that threatened the landscape: Nikolaus Lenau's 1838 poem An den Frühling (To Spring) bemoaned the way trains destroyed the pristine quietude of German forests. The Bavarian Ludwig Railway, which was the first passenger or freight rail line in the German lands, connected Nuremberg and Fürth in 1835. Although it was only 6 kilometers (3.7 mi) long and operated only in daylight, it proved both profitable and popular. Within three years, 141 kilometers (88 mi) of track had been laid; by 1840, 462 kilometers (287 mi); and by 1860, 11,157 kilometers (6,933 mi). Lacking a geographically central organizing feature (such as a national capital), the rails were laid in webs, linking towns and markets within regions, regions within larger regions, and so on. As the rail network expanded, it became cheaper to transport goods: in 1840, 18 Pfennigs per ton per kilometer, and in 1870, five Pfennigs. The effects of the railway were immediate. For example, raw materials could travel up and down the Ruhr Valley without having to unload and reload. Railway lines encouraged economic activity by creating demand for commodities and by facilitating commerce. In 1850, inland shipping carried three times more freight than railroads; by 1870, the situation was reversed, and railroads carried four times more. Rail travel changed how cities looked and how people traveled. Its impact reached throughout the social order, affecting everyone from the highest born to the lowest. Although some of the outlying German provinces were not serviced by rail until the 1890s, the majority of the population, manufacturing centers, and production centers were linked to the rail network by 1865. #### Geography, patriotism and language As travel became easier, faster, and less expensive, Germans started to see unity in factors other than their language. 
The Brothers Grimm, who compiled a massive dictionary known as The Grimm, also assembled a compendium of folk tales and fables, which highlighted the story-telling parallels between different regions. Karl Baedeker wrote guidebooks to different cities and regions of Central Europe, indicating places to stay, sites to visit, and giving a short history of castles, battlefields, famous buildings, and famous people. His guides also included distances, roads to avoid, and hiking paths to follow. The words of August Heinrich Hoffmann von Fallersleben expressed not only the linguistic unity of the German people but also their geographic unity. In Deutschland, Deutschland über Alles, officially called Das Lied der Deutschen ("The Song of the Germans"), Fallersleben called upon sovereigns throughout the German states to recognize the unifying characteristics of the German people. Such other patriotic songs as "Die Wacht am Rhein" ("The Watch on the Rhine") by Max Schneckenburger began to focus attention on geographic space, not limiting "Germanness" to a common language. Schneckenburger wrote "The Watch on the Rhine" in a specific patriotic response to French assertions that the Rhine was France's "natural" eastern boundary. In the refrain, "Dear fatherland, dear fatherland, put your mind to rest / The watch stands true on the Rhine", and in such other patriotic poetry as Nikolaus Becker's "Das Rheinlied" ("The Rhine"), Germans were called upon to defend their territorial homeland. In 1807, Alexander von Humboldt argued that national character reflected geographic influence, linking landscape to people. Concurrent with this idea, movements to preserve old fortresses and historic sites emerged, and these particularly focused on the Rhineland, the site of so many confrontations with France and Spain. ### German revolutions and Polish uprising of 1848–1849 The widespread—mainly German—revolutions of 1848–49 sought the unification of Germany under a single constitution. The revolutionaries pressured various state governments, particularly those in the Rhineland, for a parliamentary assembly that would have the responsibility to draft a constitution. Ultimately, many of the left-wing revolutionaries hoped this constitution would establish universal male suffrage, a permanent national parliament, and a unified Germany, possibly under the leadership of the Prussian king. This seemed to be the most logical course since Prussia was the strongest of the German states, as well as the largest in geographic size. Meanwhile, center-right revolutionaries sought some kind of expanded suffrage within their states and, potentially, a form of loose unification. Finally, the Polish majority in the Polish territories annexed by Prussia pursued its own liberation agenda. #### Frankfurt Parliament Their pressure resulted in a variety of elections, based on different voting qualifications, such as the Prussian three-class franchise, which granted to some electoral groups—chiefly the wealthier, landed ones—greater representative power. On 27 March 1849, the Frankfurt Parliament passed the Paulskirchenverfassung (Constitution of St. Paul's Church) and offered the title of Kaiser (Emperor) to the Prussian king Frederick William IV the next month. He refused for a variety of reasons. Publicly, he replied that he could not accept a crown without the consent of the actual states, by which he meant the princes. 
Privately, he feared opposition from the other German princes and military intervention from Austria or Russia. He also held a fundamental distaste for the idea of accepting a crown from a popularly elected parliament: he would not accept a crown of "clay". Despite franchise requirements that often perpetuated many of the problems of sovereignty and political participation liberals sought to overcome, the Frankfurt Parliament did manage to draft a constitution and reach an agreement on the kleindeutsch solution. While the liberals failed to achieve the unification they sought, they did manage to gain a partial victory by working with the German princes on many constitutional issues and collaborating with them on reforms. #### The aborted 1848–1849 German Empire in retrospective analysis Scholars of German history have engaged in decades of debate over how the successes and failures of the Frankfurt Parliament contribute to the historiographical explanations of German nation building. One school of thought, which emerged after World War I and gained momentum in the aftermath of World War II, maintains that the failure of German liberals in the Frankfurt Parliament led to a bourgeois compromise with conservatives (especially the conservative Junker landholders), which subsequently led to the so-called Sonderweg (distinctive path) of 20th-century German history. Failure to achieve unification in 1848, this argument holds, resulted in the late formation of the nation-state in 1871, which in turn delayed the development of positive national values. Hitler often called on the German public to sacrifice all for the cause of their great nation, but his regime did not create German nationalism: it merely capitalized on an intrinsic cultural value of German society that remains prevalent to this day. Furthermore, this argument maintains, the "failure" of 1848 reaffirmed latent aristocratic longings among the German middle class; consequently, this group never developed a self-conscious program of modernization. More recent scholarship has rejected this idea, claiming that Germany did not have an actual "distinctive path" any more than any other nation, a historiographic idea known as exceptionalism. Instead, modern historians claim 1848 saw specific achievements by the liberal politicians. Many of their ideas and programs were later incorporated into Bismarck's social programs (e.g., social insurance, education programs, and wider definitions of suffrage). In addition, the notion of a distinctive path relies upon the underlying assumption that some other nation's path (in this case, the United Kingdom's) is the accepted norm. This new argument further challenges the norms of the British-centric model of development: studies of national development in Britain and other "normal" states (e.g., France or the United States) have suggested that even in these cases, the modern nation-state did not develop evenly. Nor did it develop particularly early, being rather a largely mid-to-late-19th-century phenomenon. Since the end of the 1990s, this view has become widely accepted, although some historians still find the Sonderweg analysis helpful in understanding the period of National Socialism. 
### Problem of spheres of influence: The Erfurt Union and the Punctation of Olmütz After the Frankfurt Parliament disbanded, Frederick William IV, under the influence of General Joseph Maria von Radowitz, supported the establishment of the Erfurt Union—a federation of German states, excluding Austria—by the free agreement of the German princes. This limited union under Prussia would have almost eliminated Austrian influence on the other German states. Combined diplomatic pressure from Austria and Russia (a guarantor of the 1815 agreements that established European spheres of influence) forced Prussia to relinquish the idea of the Erfurt Union at a meeting in the small town of Olmütz in Moravia. In November 1850, the Prussians—specifically Radowitz and Frederick William—agreed to the restoration of the German Confederation under Austrian leadership. This became known as the Punctation of Olmütz, but among Prussians it was known as the "Humiliation of Olmütz." Although seemingly minor events, the Erfurt Union proposal and the Punctation of Olmütz brought the problems of influence in the German states into sharp focus. The question became not a matter of if but rather when unification would occur, and when was contingent upon strength. One of the former Frankfurt Parliament members, Johann Gustav Droysen, summed up the problem: > We cannot conceal the fact that the whole German question is a simple alternative between Prussia and Austria. In these states, German life has its positive and negative poles—in the former, all the interests [that] are national and reformative, in the latter, all that are dynastic and destructive. The German question is not a constitutional question but a question of power; and the Prussian monarchy is now wholly German, while that of Austria cannot be. Unification under these conditions raised a basic diplomatic problem. The possibility of German (or Italian) unification would overturn the overlapping spheres of influence system created in 1815 at the Congress of Vienna. The principal architects of this convention, Metternich, Castlereagh, and Tsar Alexander (with his foreign secretary Count Karl Nesselrode), had conceived of and organized a Europe balanced and guaranteed by four "great powers": Great Britain, France, Russia, and Austria, with each power having a geographic sphere of influence. France's sphere included the Iberian Peninsula and a share of influence in the Italian states. Russia's included the eastern regions of Central Europe and a balancing influence in the Balkans. Austria's sphere expanded throughout much of the Central European territories formerly held by the Holy Roman Empire. Britain's sphere was the rest of the world, especially the seas. This sphere of influence system depended upon the fragmentation of the German and Italian states, not their consolidation. Consequently, a German nation united under one banner presented significant questions. There was no readily applicable definition for who the German people would be or how far the borders of a German nation would stretch. There was also uncertainty as to who would best lead and defend "Germany", however it was defined. Different groups offered different solutions to this problem. In the Kleindeutschland ("Lesser Germany") solution, the German states would be united under the leadership of the Prussian Hohenzollerns; in the Grossdeutschland ("Greater Germany") solution, the German states would be united under the leadership of the Austrian Habsburgs. 
This controversy, the latest phase of the German dualism debate that had dominated the politics of the German states and Austro-Prussian diplomacy since the 1701 creation of the Kingdom of Prussia, would come to a head during the following twenty years. ### External expectations of a unified Germany Other nationalists had high hopes for the German unification movement, and the failure to achieve lasting German unification after 1850 seemed to set the national movement back. Revolutionaries associated national unification with progress. As Giuseppe Garibaldi wrote to German revolutionary Karl Blind on 10 April 1865, "The progress of humanity seems to have come to a halt, and you with your superior intelligence will know why. The reason is that the world lacks a nation [that] possesses true leadership. Such leadership, of course, is required not to dominate other peoples but to lead them along the path of duty, to lead them toward the brotherhood of nations where all the barriers erected by egoism will be destroyed." Garibaldi looked to Germany for the "kind of leadership [that], in the true tradition of medieval chivalry, would devote itself to redressing wrongs, supporting the weak, sacrificing momentary gains and material advantage for the much finer and more satisfying achievement of relieving the suffering of our fellow men. We need a nation courageous enough to give us a lead in this direction. It would rally to its cause all those who are suffering wrong or who aspire to a better life and all those who are now enduring foreign oppression." German unification had also been viewed as a prerequisite for the creation of a European federation, which Giuseppe Mazzini and other European patriots had been promoting for more than three decades: > In the spring of 1834, while at Berne, Mazzini and a dozen refugees from Italy, Poland and Germany founded a new association with the grandiose name of Young Europe. Its basic, and equally grandiose idea, was that, as the French Revolution of 1789 had enlarged the concept of individual liberty, another revolution would now be needed for national liberty; and his vision went further because he hoped that in the no doubt distant future free nations might combine to form a loosely federal Europe with some kind of federal assembly to regulate their common interests. [...] His intention was nothing less than to overturn the European settlement agreed [to] in 1815 by the Congress of Vienna, which had reestablished an oppressive hegemony of a few great powers and blocked the emergence of smaller nations. [...] Mazzini hoped, but without much confidence, that his vision of a league or society of independent nations would be realized in his own lifetime. In practice Young Europe lacked the money and popular support for more than a short-term existence. Nevertheless he always remained faithful to the ideal of a united continent for which the creation of individual nations would be an indispensable preliminary. ### Prussia's growing strength: Realpolitik King Frederick William IV suffered a stroke in 1857 and could no longer rule. This led to his brother William becoming prince regent of the Kingdom of Prussia in 1858. Meanwhile, Helmuth von Moltke had become chief of the Prussian General Staff in 1857, and Albrecht von Roon would become Prussian Minister of War in 1859. This shuffling of authority within the Prussian military establishment would have important consequences. 
Von Roon and William (who took an active interest in military structures) began reorganizing the Prussian army, while Moltke redesigned the strategic defense of Prussia by streamlining operational command. Prussian army reforms (especially how to pay for them) caused a constitutional crisis beginning in 1860, because both parliament and William—via his minister of war—wanted control over the military budget. William, crowned King Wilhelm I in 1861, appointed Otto von Bismarck to the position of Minister-President of Prussia in 1862. Bismarck resolved the crisis in favor of the war minister. The Crimean War of 1853–56 and the Italian War of 1859 disrupted relations among Great Britain, France, Austria, and Russia. In the aftermath of this disarray, the convergence of von Moltke's operational redesign, von Roon and Wilhelm's army restructuring, and Bismarck's diplomacy influenced the realignment of the European balance of power. Their combined agendas established Prussia as the leading German power through a combination of foreign diplomatic triumphs—backed up by the possible use of Prussian military might—and an internal conservatism tempered by pragmatism, which came to be known as Realpolitik. Bismarck expressed the essence of Realpolitik in his subsequently famous "Blood and Iron" speech to the Budget Committee of the Prussian Chamber of Deputies on 30 September 1862, shortly after he became Minister President: "The great questions of the time will not be resolved by speeches and majority decisions—that was the great mistake of 1848 and 1849—but by iron and blood." Bismarck's words, "iron and blood" (or "blood and iron", as often attributed), have often been misappropriated as evidence of a German lust for blood and power. First, the phrase from his speech "the great questions of time will not be resolved by speeches and majority decisions" is often interpreted as a repudiation of the political process—a repudiation Bismarck did not himself advocate. Second, his emphasis on blood and iron did not imply simply the unrivaled military might of the Prussian army but rather two important aspects: the ability of the assorted German states to produce iron and other related war materials and the willingness to use those war materials if necessary. By 1862, when Bismarck made his speech, the idea of a German nation-state in the peaceful spirit of Pan-Germanism had shifted from the liberal and democratic character of 1848 to accommodate Bismarck's more conservative Realpolitik. Bismarck sought to link a unified state to the Hohenzollern dynasty, which for some historians remains one of Bismarck's primary contributions to the creation of the German Empire in 1871. While the conditions of the treaties binding the various German states to one another prohibited Bismarck from taking unilateral action, the politician and diplomat in him realized that unilateral action would in any case be impractical. To get the German states to unify, Bismarck needed a single, outside enemy that would declare war on one of the German states first, thus providing a casus belli to rally all Germans behind. This opportunity arose with the outbreak of the Franco-Prussian War in 1870. Historians have long debated Bismarck's role in the events leading up to the war. The traditional view, promulgated in large part by late 19th- and early 20th-century pro-Prussian historians, maintains that Bismarck's intent was always German unification. 
Post-1945 historians, however, see more short-term opportunism and cynicism in Bismarck's manipulation of the circumstances to create a war, rather than a grand scheme to unify a nation-state. Regardless of motivation, by manipulating events of 1866 and 1870, Bismarck demonstrated the political and diplomatic skill that had caused Wilhelm to turn to him in 1862. Three episodes proved fundamental to the unification of Germany. First, the death without male heirs of Frederick VII of Denmark led to the Second War of Schleswig in 1864. Second, the unification of Italy provided Prussia an ally against Austria in the Austro-Prussian War of 1866. Finally, France—fearing Hohenzollern encirclement—declared war on Prussia in 1870, resulting in the Franco-Prussian War. Through a combination of Bismarck's diplomacy and political leadership, von Roon's military reorganization, and von Moltke's military strategy, Prussia demonstrated that none of the European signatories of the 1815 peace treaty could guarantee Austria's sphere of influence in Central Europe, thus achieving Prussian hegemony in Germany and ending the dualism debate. ### The Schleswig-Holstein Question The first episode in the saga of German unification under Bismarck came with the Schleswig-Holstein Question. On 15 November 1863, Christian IX became king of Denmark and duke of Schleswig, Holstein, and Lauenburg, which the Danish king held in personal union. On 18 November 1863, he signed the Danish November Constitution which replaced The Law of Sjælland and The Law of Jutland, which meant the new constitution applied to the Duchy of Schleswig. The German Confederation saw this act as a violation of the London Protocol of 1852, which emphasized the status of the Kingdom of Denmark as distinct from the three independent duchies. The German Confederation could use the ethnicities of the area as a rallying cry: Holstein and Lauenburg were largely of German origin and spoke German in everyday life, while Schleswig had a significant Danish population and history. Diplomatic attempts to have the November Constitution repealed collapsed, and fighting began when Prussian and Austrian troops crossed the Eider river on 1 February 1864. Initially, the Danes attempted to defend their country using an ancient earthen wall known as the Danevirke, but this proved futile. The Danes were no match for the combined Prussian and Austrian forces and their modern armaments. The needle gun, one of the first bolt action rifles to be used in conflict, aided the Prussians in both this war and the Austro-Prussian War two years later. The rifle enabled a Prussian soldier to fire five shots while lying prone, while its muzzle-loading counterpart could only fire one shot and had to be reloaded while standing. The Second Schleswig War resulted in victory for the combined armies of Prussia and Austria, and the two countries won control of Schleswig and Holstein in the concluding peace of Vienna, signed on 30 October 1864. ### War between Austria and Prussia, 1866 The second episode in Bismarck's unification efforts occurred in 1866. In concert with the newly formed Italy, Bismarck created a diplomatic environment in which Austria declared war on Prussia. The dramatic prelude to the war occurred largely in Frankfurt, where the two powers claimed to speak for all the German states in the parliament. 
In April 1866, the Prussian representative in Florence signed a secret agreement with the Italian government, committing each state to assist the other in a war against Austria. The next day, the Prussian delegate to the Frankfurt assembly presented a plan calling for a national constitution, a directly elected national Diet, and universal suffrage. German liberals were justifiably skeptical of this plan, having witnessed Bismarck's difficult and ambiguous relationship with the Prussian Landtag (State Parliament), a relationship characterized by Bismarck's cajoling and riding roughshod over the representatives. These skeptics saw the proposal as a ploy to enhance Prussian power rather than a progressive agenda of reform. #### Choosing sides The debate over the proposed national constitution became moot when news of Italian troop movements in Tyrol and near the Venetian border reached Vienna in April 1866. The Austrian government ordered partial mobilization in the southern regions; the Italians responded by ordering full mobilization. Despite calls for rational thought and action, Italy, Prussia, and Austria continued to rush toward armed conflict. On 1 May, Wilhelm gave von Moltke command over the Prussian armed forces, and the next day he began full-scale mobilization. In the Diet, the group of middle-sized states, known as Mittelstaaten (Bavaria, Württemberg, the grand duchies of Baden and Hesse, and the duchies of Saxony–Weimar, Saxony–Meiningen, Saxony–Coburg, and Nassau), supported complete demobilization within the Confederation. These individual governments rejected the potent combination of enticing promises and subtle (or outright) threats Bismarck used to try to gain their support against the Habsburgs. The Prussian war cabinet understood that its only supporters among the German states against the Habsburgs were two small principalities bordering on Brandenburg that had little military strength or political clout: the Grand Duchies of Mecklenburg-Schwerin and Mecklenburg-Strelitz. They also understood that Prussia's only ally abroad was Italy. Opposition to Prussia's strong-armed tactics surfaced in other social and political groups. Throughout the German states, city councils, liberal parliamentary members who favored a unified state, and chambers of commerce—which would see great benefits from unification—opposed any war between Prussia and Austria. They believed any such conflict would only serve the interests of royal dynasties. Their own interests, which they understood as "civil" or "bourgeois", seemed irrelevant. Public opinion also opposed Prussian domination. Catholic populations along the Rhine—especially in such cosmopolitan regions as Cologne and in the heavily populated Ruhr Valley—continued to support Austria. By late spring, most important states opposed Berlin's effort to reorganize the German states by force. The Prussian cabinet saw German unity as an issue of power and a question of who had the strength and will to wield that power. Meanwhile, the liberals in the Frankfurt assembly saw German unity as a process of negotiation that would lead to the distribution of power among the many parties. #### Austria isolated Although several German states initially sided with Austria, they stayed on the defensive and failed to take effective initiatives against Prussian troops. The Austrian army therefore faced the technologically superior Prussian army with support only from Saxony. France promised aid, but it came late and was insufficient. 
Complicating the situation for Austria, the Italian mobilization on Austria's southern border required a diversion of forces away from battle with Prussia to fight the Third Italian War of Independence on a second front in Venetia and on the Adriatic Sea. A quick peace was essential to keep Russia from entering the conflict on Austria's side. In the day-long Battle of Königgrätz, near the village of Sadová, Friedrich Carl and his troops arrived late, and in the wrong place. Once he arrived, however, he ordered his troops immediately into the fray. The battle was a decisive victory for Prussia and forced the Habsburgs to end the war with the unfavorable Peace of Prague, laying the groundwork for the Kleindeutschland (little Germany) solution, or "Germany without Austria." ## Founding a unified state > There is, in political geography, no Germany proper to speak of. There are Kingdoms and Grand Duchies, and Duchies and Principalities, inhabited by Germans, and each [is] separately ruled by an independent sovereign with all the machinery of State. Yet there is a natural undercurrent tending to a national feeling and toward a union of the Germans into one great nation, ruled by one common head as a national unit. ### Peace of Prague and the North German Confederation The Peace of Prague sealed the dissolution of the German Confederation. Its former leading state, the Austrian Empire, was, along with the majority of its allies, excluded from the ensuing North German Confederation Treaty sponsored by Prussia, which directly annexed Hanover, Hesse-Kassel, Nassau, and the city of Frankfurt, while Hesse-Darmstadt lost some territory but kept its statehood. At the same time, the original East Prussian cradle of Prussian statehood, as well as the Prussian-held Polish- and Kashubian-speaking territories of the Province of Posen and West Prussia, were formally incorporated into the North German Confederation, and thus into Germany. Following the adoption of the North German Constitution, the new state obtained its own constitution, flag, and governmental and administrative structures. Through military victory, Prussia under Bismarck's influence had overcome Austria's active resistance to the idea of a unified Germany. The states south of the Main River (Baden, Württemberg, and Bavaria) signed separate treaties requiring them to pay indemnities and to form alliances bringing them into Prussia's sphere of influence. Austria's influence over the German states may have been broken, but the war also splintered the spirit of pan-German unity, as many German states resented Prussian power politics. ### Unified Italy and Austro-Hungarian Compromise The Peace of Prague offered lenient terms to Austria, but Austria's relationship with the new nation-state of Italy underwent major restructuring. Although the Austrians were far more successful in the military field against Italian troops, the monarchy lost the important province of Venetia. The Habsburgs ceded Venetia to France, which then formally transferred control to Italy. The end of Austrian dominance of the German states shifted Austria's attention to the Balkans. The reality of defeat for Austria also caused a reevaluation of internal divisions, local autonomy, and liberalism. In 1867, the Austrian emperor Franz Joseph accepted a settlement (the Austro-Hungarian Compromise of 1867) in which he gave his Hungarian holdings equal status with his Austrian domains, creating the Dual Monarchy of Austria-Hungary. 
### War with France The French public resented the Prussian victory and demanded Revanche pour Sadová ("Revenge for Sadova"), illustrating anti-Prussian sentiment in France—a problem that would accelerate in the months leading up to the Franco-Prussian War. The Austro-Prussian War also damaged relations with the French government. At a meeting in Biarritz in September 1865 with Napoleon III, Bismarck had let it be understood (or Napoleon had thought he understood) that France might annex parts of Belgium and Luxembourg in exchange for its neutrality in the war. These annexations did not happen, resulting in animosity from Napoleon towards Bismarck. #### Background By 1870 three of the important lessons of the Austro-Prussian war had become apparent. The first lesson was that, through force of arms, a powerful state could challenge the old alliances and spheres of influence established in 1815. Second, through diplomatic maneuvering, a skilful leader could create an environment in which a rival state would declare war first, thus forcing states allied with the "victim" of external aggression to come to the leader's aid. Finally, as Prussian military capacity far exceeded that of Austria, Prussia was clearly the only state within the Confederation (or among the German states generally) capable of protecting all of them from potential interference or aggression. In 1866, most mid-sized German states had opposed Prussia, but by 1870 these states had been coerced and coaxed into mutually protective alliances with Prussia. If a European state declared war on one of their members, then they all would come to the defense of the attacked state. With skilful manipulation of European politics, Bismarck created a situation in which France would play the role of aggressor in German affairs, while Prussia would play that of the protector of German rights and liberties. At the Congress of Vienna in 1815, Metternich and his conservative allies had reestablished the Spanish monarchy under King Ferdinand VII. Over the following forty years, the great powers supported the Spanish monarchy, but events in 1868 would further test the old system, finally providing the external trigger needed by Bismarck. #### Spanish prelude A revolution in Spain overthrew Queen Isabella II, and the throne remained empty while Isabella lived in sumptuous exile in Paris. The Spanish, looking for a suitable Catholic successor, had offered the post to three European princes, each of whom was rejected by Napoleon III, who served as regional power-broker. Finally, in 1870 the Regency offered the crown to Leopold of Hohenzollern-Sigmaringen, a prince of the Catholic cadet Hohenzollern line. The ensuing furor has been dubbed by historians the Hohenzollern candidature. Over the next few weeks, the Spanish offer turned into the talk of Europe. Bismarck encouraged Leopold to accept the offer. A successful installation of a Hohenzollern-Sigmaringen king in Spain would mean that two countries on either side of France would both have German kings of Hohenzollern descent. This may have been a pleasing prospect for Bismarck, but it was unacceptable to either Napoleon III or to Agenor, duc de Gramont, his minister of foreign affairs. Gramont wrote a sharply formulated ultimatum to Wilhelm, as head of the Hohenzollern family, stating that if any Hohenzollern prince should accept the crown of Spain, the French government would respond—although he left ambiguous the nature of such a response. 
The prince withdrew as a candidate, thus defusing the crisis, but the French ambassador to Berlin would not let the issue lie. He approached the Prussian king directly while Wilhelm was vacationing in Ems Spa, demanding that the King release a statement saying he would never support the installation of a Hohenzollern on the throne of Spain. Wilhelm refused to give such an encompassing statement, and he sent Bismarck a dispatch by telegram describing the French demands. Bismarck used the king's telegram, called the Ems Dispatch, as a template for a short statement to the press. With its wording shortened and sharpened by Bismarck—and further alterations made in the course of its translation by the French agency Havas—the Ems Dispatch raised an angry furor in France. The French public, still aggravated over the defeat at Sadová, demanded war. #### Open hostilities and the disastrous end of the Second French Empire Napoleon III had tried to secure territorial concessions from both sides before and after the Austro-Prussian War, but despite his role as mediator during the peace negotiations, he ended up with nothing. He then hoped that Austria would join in a war of revenge and that its former allies—particularly the southern German states of Baden, Württemberg, and Bavaria—would join in the cause. This hope would prove futile since the 1866 treaty came into effect and united all German states militarily—if not happily—to fight against France. Instead of a war of revenge against Prussia, supported by various German allies, France engaged in a war against all of the German states without any allies of its own. The reorganization of the military by von Roon and the operational strategy of Moltke combined against France to great effect. The speed of Prussian mobilization astonished the French, and the Prussian ability to concentrate power at specific points—reminiscent of Napoleon I's strategies seventy years earlier—overwhelmed French mobilization. Utilizing their efficiently laid rail grid, Prussian troops were delivered to battle areas rested and prepared to fight, whereas French troops had to march for considerable distances to reach combat zones. After a number of battles, notably Spicheren, Wörth, Mars la Tour, and Gravelotte, the Prussians defeated the main French armies and advanced on the primary city of Metz and the French capital of Paris. They captured Napoleon III and took an entire army as prisoners at Sedan on 1 September 1870. #### Proclamation of the German Empire The humiliating capture of the French emperor and the loss of the French army itself, which marched into captivity at a makeshift camp in the Saarland ("Camp Misery"), threw the French government into turmoil; Napoleon's energetic opponents overthrew his government and proclaimed the Third Republic. In the days after Sedan, Prussian envoys met with the French and demanded a large cash indemnity as well as the cession of Alsace and Lorraine. All parties in France rejected the terms, insisting that any armistice be forged "on the basis of territorial integrity." France, in other words, would pay reparations for starting the war, but would, in Jules Favre's famous phrase, "cede neither a clod of our earth nor a stone of our fortresses". The German High Command expected an overture of peace from the French, but the new republic refused to surrender. The Prussian army invested Paris and held it under siege until mid-January, with the city being "ineffectually bombarded". 
Nevertheless, in January, the Germans fired some 12,000 shells, 300–400 daily, into the city. On January 18, 1871, the German princes and senior military commanders proclaimed Wilhelm "German Emperor" in the Hall of Mirrors at the Palace of Versailles. Under the subsequent Treaty of Frankfurt, France relinquished most of its traditionally German regions (Alsace and the German-speaking part of Lorraine); paid an indemnity, calculated (on the basis of population) as the precise equivalent of the indemnity that Napoleon Bonaparte imposed on Prussia in 1807; and accepted German administration of Paris and most of northern France, with "German troops to be withdrawn stage by stage with each installment of the indemnity payment". #### War as "the capstone of the unification process" Victory in the Franco-Prussian War proved the capstone of the unification process. In the first half of the 1860s, Austria and Prussia both contended to speak for the German states; both maintained they could support German interests abroad and protect German interests at home. In responding to the Schleswig-Holstein Question, they both proved equally diligent in doing so. After the victory over Austria in 1866, Prussia began internally asserting its authority to speak for the German states and defend German interests, while Austria began directing more and more of its attention to possessions in the Balkans. The victory over France in 1871 expanded Prussian hegemony in the German states (aside from Austria) to the international level. With the proclamation of Wilhelm as Kaiser, Prussia assumed the leadership of the new empire. The southern states became officially incorporated into a unified Germany at the Treaty of Versailles of 1871 (signed 26 February 1871; later ratified in the Treaty of Frankfurt of 10 May 1871), which formally ended the war. Although Bismarck had led the transformation of Germany from a loose confederation into a federal nation state, he had not done it alone. Unification was achieved by building on a tradition of legal collaboration under the Holy Roman Empire and economic collaboration through the Zollverein. The difficulties of the Vormärz, the impact of the 1848 liberals, the importance of von Roon's military reorganization, and von Moltke's strategic brilliance all played a part in political unification. "Einheit – unity – was achieved at the expense of Freiheit – freedom. The German Empire became," in Karl Marx's words, "a military despotism cloaked in parliamentary forms with a feudal ingredient, influenced by the bourgeoisie, festooned with bureaucrats and guarded by police." Indeed, many historians would see Germany's "escape into war" in 1914 as a flight from all of the internal-political contradictions forged by Bismarck at Versailles in the fall of 1870. ### Internal political and administrative unification The new German Empire included 26 political entities: twenty-five constituent states (or Bundesstaaten) and one Imperial Territory (or Reichsland). It realized the Kleindeutsche Lösung ("Lesser German Solution", with the exclusion of Austria) as opposed to a Großdeutsche Lösung or "Greater German Solution", which would have included Austria. Unifying various states into one nation required more than some military victories, however much these might have boosted morale. It also required a rethinking of political, social, and cultural behaviors and the construction of new metaphors about "us" and "them". Who were the new members of this new nation? 
What did they stand for? How were they to be organized? #### Constituent states of the Empire Though often characterized as a federation of monarchs, the German Empire, strictly speaking, federated a group of 26 constituent entities with different forms of government, ranging from the main four constitutional monarchies to the three republican Hanseatic cities. #### Political structure of the Empire The 1866 North German Constitution became (with some semantic adjustments) the 1871 Constitution of the German Empire. With this constitution, the new Germany acquired some democratic features: notably the Imperial Diet, which—in contrast to the parliament of Prussia—gave citizens representation on the basis of elections by direct and equal suffrage of all males who had reached the age of 25. Furthermore, elections were generally free of chicanery, engendering pride in the national parliament. However, legislation required the consent of the Bundesrat, the federal council of deputies from the states, in and over which Prussia had a powerful influence; Prussia could appoint 17 of 58 delegates with only 14 votes needed for a veto. Prussia thus exercised influence in both bodies, with executive power vested in the Prussian King as Kaiser, who appointed the federal chancellor. The chancellor was accountable solely to, and served entirely at the discretion of, the Emperor. Officially, the chancellor functioned as a one-man cabinet and was responsible for the conduct of all state affairs; in practice, the State Secretaries (bureaucratic top officials in charge of such fields as finance, war, foreign affairs, etc.) acted as unofficial portfolio ministers. With the exception of the years 1872–1873 and 1892–1894, the imperial chancellor was always simultaneously the prime minister of the imperial dynasty's hegemonic home-kingdom, Prussia. The Imperial Diet had the power to pass, amend, or reject bills, but it could not initiate legislation. (The power of initiating legislation rested with the chancellor.) The other states retained their own governments, but the military forces of the smaller states came under Prussian control. The militaries of the larger states (such as the Kingdoms of Bavaria and Saxony) retained some autonomy, but they underwent major reforms to coordinate with Prussian military principles and came under federal government control in wartime. #### Historical arguments and the Empire's social anatomy The Sonderweg hypothesis attributed Germany's difficult 20th century to the weak political, legal, and economic basis of the new empire. The Prussian landed elites, the Junkers, retained a substantial share of political power in the unified state. The Sonderweg hypothesis attributed their power to the absence of a revolutionary breakthrough by the middle classes, or by peasants in combination with the urban workers, in 1848 and again in 1871. Recent research into the role of the Grand Bourgeoisie—which included bankers, merchants, industrialists, and entrepreneurs—in the construction of the new state has largely refuted the claim of political and economic dominance of the Junkers as a social group. This newer scholarship has demonstrated the importance of the merchant classes of the Hanseatic cities and the industrial leadership (the latter particularly important in the Rhineland) in the ongoing development of the Second Empire. Additional studies of different groups in Wilhelmine Germany have all contributed to a new view of the period. 
Although the Junkers did, indeed, continue to control the officer corps, they did not dominate social, political, and economic matters as much as the Sonderweg theorists had hypothesized. Eastern Junker power had a counterweight in the western provinces in the form of the Grand Bourgeoisie and in the growing professional class of bureaucrats, teachers, professors, doctors, lawyers, scientists, etc. ## Beyond the political mechanism: forming a nation If the Wartburg and Hambach rallies had lacked a constitution and administrative apparatus, that problem was addressed between 1867 and 1871. Yet, as Germans discovered, grand speeches, flags, and enthusiastic crowds; a constitution, a political reorganization, and an imperial superstructure; and the revised Customs Union of 1867–68 still did not make a nation. A key element of the nation-state is the creation of a national culture, frequently—although not necessarily—through deliberate national policy. In the new German nation, a Kulturkampf (1872–78) that followed political, economic, and administrative unification attempted to address, with a remarkable lack of success, some of the contradictions in German society. In particular, it involved a struggle over language, education, and religion. A policy of Germanization of the empire's non-German populations, including the Polish and Danish minorities, started with language: compulsory schooling in German and the attempted creation of standardized curricula for those schools to promote and celebrate the idea of a shared past. Finally, it extended to the religion of the new Empire's population. ### Kulturkampf For some Germans, the definition of nation did not include pluralism, and Catholics in particular came under scrutiny; some Germans, and especially Bismarck, feared that the Catholics' connection to the papacy might make them less loyal to the nation. As chancellor, Bismarck tried without much success to limit the influence of the Roman Catholic Church and of its party-political arm, the Catholic Center Party, in schools and education- and language-related policies. The Catholic Center Party remained particularly well entrenched in the Catholic strongholds of Bavaria and southern Baden, and in urban areas that held high populations of displaced rural workers seeking jobs in heavy industry. It sought to protect the rights not only of Catholics but also of other minorities, including the Poles and the French minority in Alsace. The May Laws of 1873 brought the appointment of priests, and their education, under the control of the state, resulting in the closure of many seminaries and a shortage of priests. The Congregations Law of 1875 abolished religious orders, ended state subsidies to the Catholic Church, and removed religious protections from the Prussian constitution. ### Integrating the Jewish community The Germanized Jews remained another vulnerable population in the new German nation-state. Since the 1780s, after emancipation by the Holy Roman Emperor Joseph II, Jews in the former Habsburg territories had enjoyed considerable economic and legal privileges that their counterparts in other German-speaking territories did not: they could own land, for example, and they did not have to live in a Jewish quarter (also called the Judengasse, or "Jews' alley"). They could also attend universities and enter the professions. 
During the Revolutionary and Napoleonic eras, many of the previously strong barriers between Jews and Christians broke down. Napoleon had ordered the emancipation of Jews throughout territories under French hegemony. Like their French counterparts, wealthy German Jews sponsored salons; in particular, several Jewish salonnières held important gatherings in Frankfurt and Berlin during which German intellectuals developed their own form of republican intellectualism. Throughout the subsequent decades, beginning almost immediately after the defeat of the French, reaction against the mixing of Jews and Christians limited the intellectual impact of these salons. Beyond the salons, Jews continued a process of Germanization in which they intentionally adopted German modes of dress and speech, working to insert themselves into the emerging 19th-century German public sphere. The religious reform movement among German Jews reflected this effort. By the years of unification, German Jews played an important role in the intellectual underpinnings of the German professional, intellectual, and social life. The expulsion of Jews from Russia in the 1880s and 1890s complicated integration into the German public sphere. Russian Jews arrived in north German cities in the thousands; considerably less educated and less affluent, their often dismal poverty dismayed many of the Germanized Jews. Many of the problems related to poverty (such as illness, overcrowded housing, unemployment, school absenteeism, refusal to learn German, etc.) emphasized their distinctiveness for not only the Christian Germans, but for the local Jewish populations as well. ### Writing the story of the nation Another important element in nation-building, the story of the heroic past, fell to such nationalist German historians as the liberal constitutionalist Friedrich Dahlmann (1785–1860), his conservative student Heinrich von Treitschke (1834–1896), and others less conservative, such as Theodor Mommsen (1817–1903) and Heinrich von Sybel (1817–1895), to name two. Dahlmann himself died before unification, but he laid the groundwork for the nationalist histories to come through his histories of the English and French revolutions, by casting these revolutions as fundamental to the construction of a nation, and Dahlmann himself viewed Prussia as the logical agent of unification. Heinrich von Treitschke's History of Germany in the Nineteenth Century, published in 1879, has perhaps a misleading title: it privileges the history of Prussia over the history of other German states, and it tells the story of the German-speaking peoples through the guise of Prussia's destiny to unite all German states under its leadership. The creation of this Borussian myth (Borussia is the Latin name for Prussia) established Prussia as Germany's savior; it was the destiny of all Germans to be united, this myth maintains, and it was Prussia's destiny to accomplish this. According to this story, Prussia played the dominant role in bringing the German states together as a nation-state; only Prussia could protect German liberties from being crushed by French or Russian influence. The story continues by drawing on Prussia's role in saving Germans from the resurgence of Napoleon's power in 1815, at Waterloo, creating some semblance of economic unity, and uniting Germans under one proud flag after 1871. 
Mommsen's contributions to the Monumenta Germaniae Historica laid the groundwork for additional scholarship on the study of the German nation, expanding the notion of "Germany" to mean other areas beyond Prussia. A liberal professor, historian, and theologian, and generally a titan among late 19th-century scholars, Mommsen served as a delegate to the Prussian House of Representatives from 1863 to 1866 and 1873 to 1879; he also served as a delegate to the Reichstag from 1881 to 1884, for the liberal German Progress Party (Deutsche Fortschrittspartei) and later for the National Liberal Party. He opposed the antisemitic programs of Bismarck's Kulturkampf and the vitriolic text that Treitschke often employed in the publication of his Studien über die Judenfrage (Studies of the Jewish Question), which encouraged assimilation and Germanization of Jews. ## See also - Italian unification - Formation of Romania - Reichsbürgerbewegung - Pan-Germanism - Qin's wars of Chinese unification
142,426
Sense and Sensibility (film)
1,172,226,013
1995 film by Ang Lee
[ "1990s American films", "1990s British films", "1990s English-language films", "1995 films", "1995 romantic drama films", "American historical romance films", "American romantic drama films", "BAFTA winners (films)", "Best Drama Picture Golden Globe winners", "Best Film BAFTA Award winners", "British historical romance films", "Columbia Pictures films", "Films about sisters", "Films based on Sense and Sensibility", "Films directed by Ang Lee", "Films scored by Patrick Doyle", "Films set in England", "Films set in country houses", "Films whose writer won the Best Adapted Screenplay Academy Award", "Films with screenplays by Emma Thompson", "Golden Bear winners", "Romantic period films" ]
Sense and Sensibility is a 1995 period drama film directed by Ang Lee and based on Jane Austen's 1811 novel of the same name. Emma Thompson wrote the screenplay and stars as Elinor Dashwood, while Kate Winslet plays Elinor's younger sister Marianne. The story follows the Dashwood sisters, members of a wealthy English family of landed gentry, as they must deal with circumstances of sudden destitution. They are forced to seek financial security through marriage. Hugh Grant and Alan Rickman play their respective suitors. Producer Lindsay Doran, a longtime admirer of Austen's novel, hired Thompson to write the screenplay. She spent five years drafting numerous revisions, continually working on the script between other films as well as into production of the film itself. Studios were nervous that Thompson—a first-time screenwriter—was the credited writer, but Columbia Pictures agreed to distribute the film. Though initially intending to have another actress portray Elinor, Thompson was persuaded to take the role. Thompson's screenplay exaggerated the Dashwood family's wealth to make their later scenes of poverty more apparent to modern audiences. It also altered the traits of the male leads to make them more appealing to contemporary viewers. Elinor and Marianne's different characteristics were emphasised through imagery and invented scenes. Lee was selected as director, both for his work in the 1993 film The Wedding Banquet and because Doran believed he would help the film appeal to a wider audience. Lee was given a budget of \$16 million. Sense and Sensibility was released on 13 December 1995, in the United States. A commercial success, earning \$135 million worldwide, the film garnered overwhelmingly positive reviews upon release and received many accolades, including three awards and eleven nominations at the 1995 British Academy Film Awards. It earned seven Academy Awards nominations, including for Best Picture and Best Actress. Thompson received the award for Best Adapted Screenplay, becoming the only person to have won Academy Awards for both acting and screenwriting. Sense and Sensibility contributed to a resurgence in popularity for Austen's works, and has led to many more productions in similar genres. It continues to be recognised as one of the best Austen adaptations of all time. ## Plot When Mr. Dashwood dies, his wife and three daughters — Elinor, Marianne and Margaret — are left with an inheritance of only £500 a year; the bulk of his estate, Norland Park, is left to his son John from a previous marriage. John and his greedy, snobbish wife Fanny immediately install themselves in the large house; Fanny invites her brother Edward Ferrars to stay with them. She frets about the budding friendship between Edward and Elinor, believing he can do better, and does everything she can to prevent it from developing into a romantic attachment. Sir John Middleton, a cousin of the widowed Mrs. Dashwood, offers her a small cottage house on his estate, Barton Park in Devonshire. She and her daughters move in, and are frequent guests at Barton Park. Marianne meets the older Colonel Brandon, who falls in love with her at first sight. Competing for her affections is the dashing John Willoughby, with whom Marianne falls in love. On the morning she expects him to propose marriage to her, he instead leaves hurriedly for London. 
Unbeknownst to the Dashwood family, Brandon's ward Beth, illegitimate daughter of his former love Eliza, is pregnant with Willoughby's child; Willoughby's aunt, Lady Allen, has disinherited him upon discovering this. Sir John's mother-in-law, Mrs. Jennings, invites her daughter and son-in-law, Mr and Mrs Palmer, to visit. They bring with them the impoverished Lucy Steele. Lucy confides in Elinor that she and Edward have been engaged secretly for five years, thus dashing Elinor's hopes of a future with him. Mrs. Jennings takes Lucy, Elinor, and Marianne to London, where they meet Willoughby at a ball. He barely acknowledges their acquaintance, and they learn he is engaged to the extremely wealthy Miss Grey. Marianne is inconsolable. The engagement of Edward and Lucy also comes to light. Edward's mother demands that he break off the engagement. When he honourably refuses, his fortune is taken from him and given to his younger brother Robert. On their way home to Devonshire, Elinor and Marianne stop for the night at the country estate of the Palmers, who live near Willoughby. Marianne cannot resist going to see Willoughby's estate and walks a long way in a torrential rain to do so. As a result, she becomes seriously ill and is nursed back to health by Elinor after being rescued by Colonel Brandon. Marianne recovers, and the sisters return home. They learn that Miss Steele has become Mrs. Ferrars and assume that she married Edward. However, Edward arrives to explain that Miss Steele has unexpectedly wed Robert Ferrars and Edward is thus released from his engagement. Edward proposes to Elinor and becomes a vicar, whilst Marianne marries Colonel Brandon. ## Cast - Emma Thompson as Elinor Dashwood - Kate Winslet as Marianne Dashwood - Alan Rickman as Colonel Brandon - Imogen Stubbs as Lucy Steele - Hugh Grant as Edward Ferrars - Greg Wise as John Willoughby - Gemma Jones as Mrs. Dashwood - Harriet Walter as Fanny Dashwood - James Fleet as John Dashwood - Hugh Laurie as Mr. Palmer - Imelda Staunton as Charlotte Palmer - Robert Hardy as Sir John Middleton - Elizabeth Spriggs as Mrs. Jennings - Tom Wilkinson as Mr. Dashwood - Emilie François as Margaret Dashwood - Richard Lumsden as Robert Ferrars ## Production ### Conception and adaptation In 1989, Lindsay Doran, the new president of production company Mirage Enterprises, was on a company retreat brainstorming potential film ideas when she suggested the Jane Austen novel Sense and Sensibility to her colleagues. It had been adapted twice, most recently in a 1981 television serial. Doran was a longtime fan of the novel, and had vowed in her youth to adapt it if she ever entered the film industry. She chose to adapt this particular Austen work because there were two female leads. Doran stated that "all of [Austen's] books are funny and emotional, but Sense and Sensibility is the best movie story because it's full of twists and turns. Just when you think you know what's going on, everything is different. It's got real suspense, but it's not a thriller. Irresistible." She also praised the novel for possessing "wonderful characters ... three strong love stories, surprising plot twists, good jokes, relevant themes, and a heart-stopping ending." Prior to being hired at Mirage, the producer had spent years looking for a suitable screenwriter – someone who was "equally strong in the areas of satire and romance" and could think in Austen's language "almost as naturally as he or she could think in the language of the twentieth century". 
Doran read screenplays by English and American writers until she came across a series of comedic skits, often in period settings, that actress Emma Thompson had written. Doran believed the humour and style of writing were "exactly what [she'd] been searching for". Thompson and Doran were already working together on Mirage's 1991 film Dead Again. A week after its completion, the producer selected Thompson to adapt Sense and Sensibility, although she knew that Thompson had never written a screenplay. Also a fan of Austen, Thompson first suggested they adapt Persuasion or Emma before agreeing to Doran's proposal. The actress found that Sense and Sensibility contained more action than she had remembered and decided it would translate well to drama. Thompson spent five years writing and revising the screenplay, both during and between shooting other films. Believing the novel's language to be "far more arcane than in [Austen's] later books," Thompson sought to simplify the dialogue while retaining the "elegance and wit of the original." She observed that in a screenwriting process, a first draft often had "a lot of good stuff in it" but needed to be edited, and second drafts would "almost certainly be rubbish ... because you get into a panic". Thompson credited Doran with being able to "help me, nourish me and mentor me through that process ... I learned about screenwriting at her feet". Thompson's first draft was more than three hundred handwritten pages, which required her to reduce it to a more manageable length. She found the romances to be the most difficult to "juggle", and her draft received some criticism for the way it presented Willoughby and Edward. Doran later recalled the work was criticized for not getting underway until Willoughby's arrival, with Edward sidelined as backstory. Thompson and Doran quickly realised that "if we didn't meet Edward and do the work and take that twenty minutes to set up those people ... then it wasn't going to work". At the same time, Thompson wished to avoid depicting "a couple of women waiting around for men"; gradually her screenplay focused as much on the Dashwood sisters' relationship with each other as on their romantic interests. With the draft screenplay, Doran pitched the idea to various studios in order to finance the film, but found that many were wary of the beginner Thompson as the screenwriter. She was considered a risk, as she was an actress who had never written a film script. Columbia Pictures executive Amy Pascal supported Thompson's work and agreed to sign on as producer and distributor. As Thompson mentioned on the BBC program QI in 2009, at one point in the writing process a computer failure almost lost the entire work. In a panic, Thompson called fellow actor and close friend Stephen Fry, the host of QI and a self-professed "geek". After seven hours, Fry was able to recover the documents from the device while Thompson had tea with Hugh Laurie, who was at Fry's house at the time. #### Lee's hire Taiwanese director Ang Lee was hired as a result of his work in the 1993 family comedy film The Wedding Banquet, which he co-wrote, produced, and directed. He was not familiar with Jane Austen. Doran felt that Lee's films, which depicted complex family relationships amidst a social comedy context, were a good fit with Austen's storylines. She recalled, "The idea of a foreign director was intellectually appealing even though it was very scary to have someone who didn't have English as his first language." 
The producer sent Lee a copy of Thompson's script, to which he replied that he was "cautiously interested". Fifteen directors were interviewed, but according to Doran, Lee was one of the few who recognised Austen's humour; he told them he wanted the film to "break people's hearts so badly that they'll still be recovering from it two months later." From the beginning, Doran wanted Sense and Sensibility to appeal to both a core audience of Austen aficionados as well as younger viewers attracted to romantic comedy films. She felt that Lee's involvement prevented the film from becoming "just some little English movie" that appealed only to local audiences instead of to the wider world. Lee said, > I thought they were crazy: I was brought up in Taiwan, what do I know about 19th-century England? About halfway through the script it started to make sense why they chose me. In my films I've been trying to mix social satire and family drama. I realised that all along I had been trying to do Jane Austen without knowing it. Jane Austen was my destiny. I just had to overcome the cultural barrier. Because Thompson and Doran had worked on the screenplay for so long, Lee described himself at the time as a "director for hire", as he was unsure of his role and position. He spent six months in England "learn[ing] how to make this movie, how to do a period film, culturally ... and how to adapt to the major league film industry". In January 1995, Thompson presented a draft to Lee, Doran, co-producer Laurie Borg, and others working on the production, and spent the next two months editing the screenplay based upon their feedback. Thompson continued making revisions throughout production of the film, including altering scenes to meet budgetary concerns, adding dialogue changes, and changing certain aspects to better fit the actors. Brandon's confession scene, for instance, initially included flashbacks and stylised imagery before Thompson decided it was "emotionally more interesting to let Brandon tell the story himself and find it difficult". ### Casting Thompson initially hoped that Doran would cast sisters Natasha and Joely Richardson as Elinor and Marianne Dashwood. Lee and Columbia wanted Thompson herself, now a "big-deal movie star" after her critically successful role in the 1992 film Howards End, to play Elinor. The actress replied that at the age of thirty-five, she was too old for the nineteen-year-old character. Lee suggested Elinor's age be changed to twenty-seven, which would also have made the difficult reality of spinsterhood easier for modern audiences to understand. Thompson agreed, later stating that she was "desperate to get into a corset and act it and stop thinking about it as a script." The formal casting process began in February 1995, though some of the actors met with Thompson the previous year to help her conceptualise the script. Lee eventually cast all but one of them: Hugh Grant (as Edward Ferrars), Robert Hardy (as Sir John Middleton), Harriet Walter (as Fanny Ferrars Dashwood), Imelda Staunton (as Charlotte Jennings Palmer), and Hugh Laurie (as Mr. Palmer). Amanda Root had also worked with Thompson on the screenplay, but had already committed to star in the 1995 film Persuasion. Commenting on the casting of Laurie, whom she had known for years, Thompson has said, "There is no one [else] on the planet who could capture Mr. Palmer's disenchantment and redemption so perfectly, and make it funny." 
Thompson wrote the part of Edward Ferrars with Grant in mind, and he agreed to receive a lower salary in line with the film's budget. Grant called her screenplay "genius", explaining, "I've always been a philistine about Jane Austen herself, and I think Emma's script is miles better than the book and much more amusing." Grant's casting was criticised by the Jane Austen Society of North America (JASNA), whose representatives said that he was too handsome for the part. Actress Kate Winslet initially intended to audition for the role of Marianne, but Lee disliked her work in the 1994 drama film Heavenly Creatures; she instead auditioned for the lesser part of Lucy Steele. Winslet pretended she had heard that the audition was still for Marianne, and won the part based on a single reading. Thompson later said that Winslet, only nineteen years old, approached the part "energised and open, realistic, intelligent, and tremendous fun." The role helped Winslet become recognised as a significant actress. Also appearing in the film was Alan Rickman, who portrayed Colonel Brandon. Thompson was pleased that Rickman could express the "extraordinary sweetness [of] his nature," as he had played "Machiavellian types so effectively" in other films. Greg Wise was cast as Marianne's other romantic interest, John Willoughby, his most noted role thus far. Twelve-year-old Emilie François, appearing as Margaret Dashwood, was one of the last people cast in the production; she had no professional acting experience. Thompson praised the young actress in her production diaries: "Emilie has a natural quick intelligence that informs every movement – she creates spontaneity in all of us just by being there." Other cast members included Gemma Jones as Mrs. Dashwood, James Fleet as John Dashwood, Elizabeth Spriggs as Mrs. Jennings, Imogen Stubbs as Lucy Steele, Richard Lumsden as Robert Ferrars, Tom Wilkinson as Mr. Dashwood, and Lone Vidahl as Miss Grey. ### Costume design According to Austen scholar Linda Troost, the costumes used in Sense and Sensibility helped emphasise the class and status of the various characters, particularly among the Dashwoods. They were created by Jenny Beavan and John Bright, a team of designers best known for Merchant Ivory films, who began working together in 1984. The two attempted to create accurate period dress, and featured the "fuller, classical look and colours of the late 18th century." They found inspiration in the works of the English artists Thomas Rowlandson, John Hoppner, and George Romney, and also reviewed fashion plates stored in the Victoria and Albert Museum. The main costumes and hats were manufactured at Cosprop, a London-based costume company. To achieve the fashionable tightly wound curls inspired by Greek art, some of the actresses wore wigs while others employed heated hair twists and slept in pin curls. Fanny, the snobbiest of the characters, possesses the tightest of curls but has less of a Greek silhouette, a reflection of her wealth and silliness. Beavan stated that Fanny and Mrs. Jennings "couldn't quite give up the frills," and instead draped themselves in lace, fur, feathers, jewellery, and rich fabrics. Conversely, sensible Elinor opts for simpler accessories, such as a long gold chain and a straw hat. Fanny's shallow personality is also reflected in "flashy, colourful" dresses, while Edward's buttoned-up appearance represents his "repressed" personality, with little visible skin. 
Each of the 100 extras used in the London ballroom scene, depicting everyone from "soldiers and lawyers to fops and dowagers," wears a visually distinct costume. For Brandon's costumes, Beavan and Bright consulted with Thompson and Lee and decided to have him project an image of "experienced and dependable masculinity." Brandon is first seen in black, but later he wears sporting gear in the form of corduroy jackets and shirtsleeves. His rescue of Marianne has him transforming into the "romantic Byronic hero", sporting an unbuttoned shirt and loose cravat. In conjunction with his tragic backstory, Brandon's "flattering" costumes enhance his appeal to the audience. Beavan and Bright's work on the film earned them a nomination for Best Costume Design at the 68th Academy Awards. ### Filming The film was budgeted at \$16 million, the largest Ang Lee had yet received as well as the largest awarded to an Austen film that decade. In the wake of the success of Columbia's 1994 film Little Women, the American studio authorised Lee's "relatively high budget" out of an expectation that it would be another cross-over hit and appeal to multiple audiences, thus yielding high box office returns. Nevertheless, Doran considered it a "low budget film", and many of the ideas Thompson and Lee came up with – such as an early dramatic scene depicting Mr. Dashwood's bloody fall from a horse – were deemed unfilmable from a cost perspective. According to Thompson, Lee "arrived on set with the whole movie in his head". Rather than focus on period details, he wanted the film to concentrate on telling a good story. He showed the cast a selection of films adapted from classic novels, including Barry Lyndon and The Age of Innocence, which he believed to be "great movies; everybody worships the art work, [but] it's not what we want to do". Lee criticised the latter film for lacking energy, in contrast to the "passionate tale" of Sense and Sensibility. The cast and crew experienced "slight culture shock" with Lee on a number of occasions. He expected the assistant directors to be the "tough ones" and keep production on schedule, while they expected the same of him; this led to a slower schedule in the early stages of production. Additionally, according to Thompson, the director became "deeply hurt and confused" when she and Grant made suggestions for certain scenes, something that was not done in his native country. Lee thought his authority was being undermined and lost sleep, though this was gradually resolved as he became used to their methods. The cast "grew to trust his instincts so completely", making fewer and fewer suggestions. Co-producer James Schamus stated that Lee also adapted by becoming more verbal and willing to express his opinion. Lee became known for his "frightening" tendency not to "mince words". He often had the cast do numerous takes for a scene to get the perfect shot, and was not afraid to call something "boring" if he disliked it. Thompson later recalled that Lee would "always come up to you and say something unexpectedly crushing", such as asking her not to "look so old". She also commented, however, that "he doesn't indulge us but is always kind when we fail". Due to Thompson's extensive acting experience, Lee encouraged her to practice tai chi to "help her relax [and] make her do things simpler". Other actors soon joined them in meditating – according to Doran, it "was pretty interesting. 
There were all these pillows on the floor and these pale-looking actors were saying, 'What have we got ourselves into?' [Lee] was more focused on body language than any director I've ever seen or heard of." He suggested Winslet read books of poetry and report back to him to best understand her character. He also had Thompson and Winslet live together to develop their characters' sisterly bond. Many of the cast took lessons in etiquette and riding side-saddle. Lee found that in contrast to Chinese cinema, he had to dissuade many of the actors from using a "very stagy, very English tradition. Instead of just being observed like a human being and getting sympathy, they feel they have to do things, they have to carry the movie." Grant in particular often had to be restrained from giving an "over-the-top" performance; Lee later recalled that the actor is "a show stealer. You can't stop that. I let him do, I have to say, less 'star' stuff, the Hugh Grant thing ... and not [let] the movie serve him, which is probably what he's used to now." For the scene in which Elinor learns Edward is unmarried, Thompson found inspiration from her reaction to her father's death. Grant was unaware that Thompson would cry through most of his speech, and the actress attempted to reassure him, "'There's no other way, and I promise you it'll work, and it will be funny as well as being touching.' And he said, 'Oh, all right,' and he was very good about it." Lee had one demand for the scene, that Thompson avoid the temptation to turn her head towards the camera. #### Locations Production of Sense and Sensibility was scheduled for fifty-eight days, though this was eventually extended to sixty-five. Filming commenced in mid-April 1995 at a number of locations in Devon, beginning with Saltram House (standing in for Norland Park), where Winslet and Jones shot the first scene of the production: when their characters read about Barton Cottage. As Saltram was a National Trust property, Schamus had to sign a contract before production began, and staff with the organisation remained on set to carefully monitor the filming. Production later returned to shoot several more scenes, finishing there on 29 April. The second location of filming, Flete House, stood in for part of Mrs. Jennings' London estate, where Edward first sees Elinor with Lucy. Representing Barton Cottage was a Flete Estate stone cottage called Efford House in Holbeton, which Thompson called "one of the most beautiful spots we've ever seen." Early May saw production at the "exquisite" St Mary's Church in Berry Pomeroy for the final wedding scene. From the tenth to the twelfth of May, Marianne's first rescue sequence, depicting her encounter with Willoughby, was shot. Logistics were difficult, as the scene was set upon a hill during a rainy day. Lee shot around fifty takes, with the actors becoming soaked under rain machines; this led to Winslet eventually collapsing from hypothermia. Further problems occurred midway through filming, when Winslet contracted phlebitis in her leg, developed a limp, and sprained her wrist after falling down a staircase. From May to July, production took place at a number of other National Trust estates and stately homes across England. Trafalgar House and Wilton House in Wiltshire stood in for the grounds of Barton Park and the London Ballroom respectively. Mompesson House, an eighteenth-century townhouse located in Salisbury, represented Mrs. Jennings' sumptuous townhouse. 
Sixteenth-century Montacute House in Somerset was the setting for the Palmer estate of Cleveland House. Further scenes were shot at Compton Castle in Devon (Mr Willoughby's estate) and at the National Maritime Museum in Greenwich. ### Music Composer Patrick Doyle, who had previously worked with his friend Emma Thompson in the films Henry V, Much Ado About Nothing, and Dead Again, was hired to produce the music for Sense and Sensibility. Asked by the director to select existing music or compose new "gentle" melodies, Doyle wrote a score that reflected the film's events. He explained, "You had this middle-class English motif, and with the music you would have occasional outbursts of emotion." Doyle explains that the score "becomes a little more grown-up" as the story progresses to one of "maturity and an emotional catharsis." The score contains romantic elements and has been described by National Public Radio as a "restricted compass ... of emotion" with "instruments [that] blend together in a gentle sort of way". They also noted that as a reflection of the story, the score is a "little wistful ... and sentimental." Two songs are sung by Marianne in the film, with lyrics adapted from seventeenth-century poems. Lee believed that the two songs conveyed the "vision of duality" visible both in the novel and script. In his opinion, the second song expressed Marianne's "mature acceptance," intertwined with a "sense of melancholy". The melody of "Weep You No More Sad Fountains", Marianne's first song, appears in the opening credits, while her second song's melody features again during the ending credits, this time sung by dramatic soprano Jane Eaglen. The songs were written by Doyle before filming began. The composer received his first Academy Award nomination for his score. ### Editing Thompson and Doran discussed how much of the love stories to depict, as the male characters spend much of the novel away from the Dashwood sisters. The screenwriter had to carefully balance the amount of screentime she gave to the male leads, noting in her film production diary that such a decision would "very much lie in the editing." Thompson wrote "hundreds of different versions" of romantic storylines. She considered having Edward re-appear midway through the film before deciding that it would not work as "there was nothing for him to do." Thompson also opted to exclude the duel scene between Brandon and Willoughby, which is described in the novel, because it "only seemed to subtract from the mystery." She and Doran agonised about when and how to reveal Brandon's backstory, as they wanted to prevent viewers from becoming bored. Thompson described the process of reminding audiences of Edward and Brandon as "keeping plates spinning". A scene was shot of Brandon finding his ward in a poverty-stricken area in London, but this was excluded from the film. Thompson's script included a scene of Elinor and Edward kissing, as the studio "couldn't stand the idea of these two people who we've been watching all the way through not kissing." It was one of the first scenes cut during editing: the original version was over three hours, Lee was less interested in the story's romance, and Thompson found a kissing scene to be inappropriate. The scene was included in marketing materials and the film trailer. Thompson and Doran also cut out a scene depicting Willoughby as remorseful when Marianne is sick. Doran said that despite it "being one of the great scenes in book history," they could not get it to fit into the film. 
Tim Squyres edited the film, his fourth collaboration with Ang Lee. He reflected in 2013 about the editing process: > It was the first film that I had done with Ang that was all in English, and it's Emma Thompson, Kate Winslet, Alan Rickman, and Hugh Grant — these great, great actors. When you get footage like that, you realise that your job is really not technical. It was my job to look at something that Emma Thompson had done and say, "Eh, that's not good, I'll use this other one instead." And not only was I allowed to pass judgment on these tremendous actors, I was required to. ## Themes and analysis ### Changes from source material Scholar Louise Flavin has noted that Thompson's screenplay contains significant alterations to the characters of Elinor and Marianne Dashwood: in the novel, the former embodies "sense", i.e. "sensible" in our terms, and the latter, "sensibility", i.e. "sensitivity" in our terms. Audience members are meant to view self-restrained Elinor as the person in need of reform, rather than her impassioned sister. To heighten the contrast between them, Marianne and Willoughby's relationship includes an "erotic" invented scene in which the latter requests a lock of her hair – a direct contrast to Elinor's "reserved relationship" with Edward. Lee also distinguishes them through imagery – Marianne is often seen with musical instruments, near open windows, and outside, while Elinor is pictured in door frames. Another character altered for modern viewers is Margaret Dashwood, who conveys "the frustrations that a girl of our times might feel at the limitations facing her as a woman in the early nineteenth century." Thompson uses Margaret for exposition in order to detail contemporary attitudes and customs. For instance, Elinor explains to a curious Margaret – and by extension, the audience – why their half-brother inherits the Dashwood estate. Margaret's altered storyline, giving her an interest in fencing and geography, also allows audience members to see the "feminine" side of Edward and Brandon, as they become father or brother figures to her. The film omits the characters of Lady Middleton and her children, as well as that of Ann Steele, Lucy's sister. When adapting the characters for film, Thompson found that in the novel, "Edward and Brandon are quite shadowy and absent for long periods," and that "making the male characters effective was one of the biggest problems. Willoughby is really the only male who springs out in three dimensions." Several major male characters in Sense and Sensibility were consequently altered significantly from the novel in an effort to appeal to contemporary audiences. Grant's Edward and Rickman's Brandon are "ideal" modern males who display an obvious love of children as well as "pleasing manners", especially when contrasted with Palmer. Thompson's script both expanded and omitted scenes from Edward's storyline, including the deletion of an early scene in which Elinor assumes that a lock of hair found in Edward's possession is hers, when it belongs to Lucy. He was made more fully realised and honourable than in the novel to increase his appeal to viewers. To gradually show viewers why Brandon is worthy of Marianne's love, Thompson's screenplay has his storyline mirroring Willoughby's; they are similar in appearance, share a love of music and poetry, and rescue Marianne in the rain while on horseback. ### Class Thompson viewed the novel as a story of "love and money," noting that some people needed one more than the other. 
During the writing process, executive producer Sydney Pollack stressed that the film be understandable to modern audiences, and that it be made clear why the Dashwood sisters could not just obtain a job. "I'm from Indiana; if I get it, everyone gets it," he said. Thompson believed that Austen was just as comprehensible in a different century, "You don't think people are still concerned with marriage, money, romance, finding a partner?" She was keen to emphasise the realism of the Dashwoods' predicament in her screenplay, and inserted scenes to make the differences in wealth more apparent to modern audiences. Thompson made the Dashwood family richer than in the book and added elements to help contrast their early wealth with their later financial predicament; for instance, because it might have been confusing to viewers that one could be poor and still have servants, Elinor is made to address a large group of servants at Norland Park early in the film for viewers to remember when they see their few staff at Barton Cottage. Lee also sought to emphasise social class and the limitations it placed on the protagonists. Lee conveys this in part when Willoughby publicly rejects Marianne; he returns to a more lavishly furnished room, a symbol of the wealth she has lost. "Family dramas," he stated, "are all about conflict, about family obligations versus free will." The film's theme of class has attracted much scholarly attention. Carole Dole noted that class constitutes an important element in Austen's stories and is "impossible" to avoid when adapting her novels. According to Dole, Lee's film contains an "ambiguous treatment of class values" that stresses social differences but "underplays the consequences of the class distinctions so important in the novel"; for instance, Edward's story ends upon his proposal to Elinor, with no attention paid to how they will live on his small annual income from the vicarage. Louise Flavin believed that Lee used the houses to represent their occupants' class and character: the Dashwood sisters' decline in eligibility is represented through the contrast between the spacious rooms of Norland Park and those of the isolated, cramped Barton Cottage. James Thompson criticised what he described as the anaesthetised "mélange of disconnected picture postcard-gift-calendar-perfect scenes," in which little connection is made between "individual subjects and the land that supports them." Andrew Higson argued that while Sense and Sensibility includes commentary on sex and gender, it fails to pursue issues of class. Thompson's script, he wrote, displays a "sense of impoverishment [but is] confined to the still privileged lifestyle of the disinherited Dashwoods. The broader class system is pretty much taken for granted." The ending visual image of flying gold coins, depicted during Marianne's wedding, has also drawn attention; Marsha McCreadie noted that it serves as a "visual wrap-up and emblem of the merger between money and marriage." ### Gender Gender has been seen as another major theme of the film, often intersecting with class. Penny Gay observed that Elinor's early dialogue with Edward about "feel[ing] idle and useless ... [with] no hope whatsoever of any occupation" reflected Thompson's background as a "middle class, Cambridge-educated feminist." 
Conversely, Dole wrote that Thompson's version of Elinor "has a surprising anti-feminist element to it," as she appears more dependent on men than the original character; the film presents her as repressed, resulting in her emotional breakdown with Edward. Linda Troost opined that Lee's production prominently features "radical feminist and economic issues" while "paradoxically endorsing the conservative concept of marriage as a woman's goal in life." Despite this "mixed political agenda," Troost believed that the film's faithfulness to the traditional heritage film genre is evident through its use of locations, costumes, and attention to details, all of which also emphasise class and status. Gay and Julianne Pidduck stated that gender differences are expressed by showing the female characters indoors, while their male counterparts are depicted outside confidently moving throughout the countryside. Nora Stovel observed that Thompson "emphasises Austen's feminist satire on Regency gender economics," drawing attention not only to the financial plight of the Dashwoods but also to eighteenth-century women in general. ## Marketing and release In the United States, Sony and Columbia Pictures released Sense and Sensibility on a slow schedule compared to mainstream films, first premiering it on 13 December 1995. Believing that a limited release would both position the film as an "exclusive quality picture" and increase its chances of winning Academy Awards, Columbia dictated that its first weekend involve only seventy cinemas in the US; it opened in eleventh place in terms of box office takings and earned \$721,341. To benefit from the publicity surrounding potential Academy Award candidates and increase its chance of earning nominations, the film was released within "Oscar season". The number of theatres showing Sense and Sensibility was slowly expanded, with particular surges when its seven Oscar nominations were announced and at the time of the ceremony in late March, until it was present in over one thousand cinemas across the US. By the end of its American release, Sense and Sensibility had been watched by more than eight million people, garnering an "impressive" total domestic gross of \$43,182,776. On the basis of Austen's reputation as a serious author, the producers were able to rely on high-brow publications to help market their film. Near the time of its US release, large spreads in The New York Review of Books, Vanity Fair, Film Comment, and other media outlets featured columns on Lee's production. In late December, Time magazine declared it and Persuasion to be the best films of 1995. Andrew Higson referred to all this media exposure as a "marketing coup" because it meant the film "was reaching one of its target audiences." Meanwhile, most promotional images featured the film as a "sort of chick flick in period garb." Newmarket Press published Thompson's screenplay and film diary; in its first printing, the hardcover edition sold 28,500 copies in the US. British publisher Bloomsbury released a paperback edition of the novel containing film pictures, the same title design, and the cast's names on the cover, whilst Signet Publishing in the US printed 250,000 copies instead of the typical 10,000 a year; actress Julie Christie read the novel in an audiobook released by Penguin Audiobooks. Sales of the novel increased dramatically, and it ultimately hit tenth place on The New York Times Best Seller list for paperbacks in February 1996. 
In the United Kingdom, Sense and Sensibility was released on 23 February 1996 in order to "take advantage of the hype from Pride and Prejudice", another popular Austen adaptation recently broadcast. Columbia TriStar's head of UK marketing noted that "if there was any territory this film was going to work, it was in the UK." After receiving positive responses at previews, marketing strategies focused on selling it as both a costume drama and as a film attractive to mainstream audiences. Attention was also paid to marketing Sense and Sensibility internationally. Because the entire production cycle had consistently emphasised it as being "bigger" than a normal British period drama literary film, distributors avoided labelling it as "just another English period film." Instead, marketing materials featured quotations from populist newspapers such as the Daily Mail, which compared the film to Four Weddings and a Funeral (1994). It opened in the UK on 102 screens and grossed £629,152 in its opening weekend, placing fourth at the box office. It went on to gross £13,605,627 in the UK, the seventh highest-grossing film for the year. It was watched by more than ten million viewers in Europe. Worldwide, the film ultimately grossed \$134,582,776, the largest box office gross of the 1990s Austen adaptations. ## Reception ### Critical response Sense and Sensibility received overwhelmingly positive reviews from film critics, and was included on more than a hundred year-end top-ten lists. On Rotten Tomatoes, the film has a 97% approval rating based on 64 reviews, with an average rating of 8.00/10. The website's consensus reads, "Sense and Sensibility is an uncommonly deft, very funny Jane Austen adaptation, marked by Emma Thompson's finely tuned performance." On Metacritic, the film has an average score of 84 out of 100 based on 21 reviews, indicating "universal acclaim." Audiences polled by CinemaScore gave the film a grade of "A" on a scale of A to F. Writing for Variety magazine, Todd McCarthy observed that the film's success was assisted by its "highly skilled cast of actors," as well as its choice of Lee as director. McCarthy elaborated, "Although [Lee's] previously revealed talents for dramatizing conflicting social and generational traditions will no doubt be noted, Lee's achievement here with such foreign material is simply well beyond what anyone could have expected and may well be posited as the cinematic equivalent of Kazuo Ishiguro writing The Remains of the Day." Mick LaSalle of the San Francisco Chronicle lauded the film for containing a sense of urgency "that keeps the pedestrian problems of an unremarkable 18th century family immediate and personal." LaSalle concluded that the adaptation has a "right balance of irony and warmth. The result is a film of great understanding and emotional clarity, filmed with an elegance that never calls attention to itself." Film critic John Simon praised most of the film, particularly focusing on Thompson's performance, though he criticised Grant for being "much too adorably bumbling ... he urgently needs to chasten his onscreen persona, and stop hunching his shoulders like a dromedary." Other major critics such as LaSalle, Roger Ebert, James Berardinelli and Janet Maslin praised Grant's performance. Maslin wrote that Grant "rises touchingly to the film's most straightforward and meaningful encounters." 
Jay Carr of The Boston Globe thought that Lee "nail[ed] Austen's acute social observation and tangy satire," and viewed Thompson and Winslet's age discrepancy as a positive element that helped feed the dichotomy of sense and sensibility. The Radio Times' David Parkinson was equally appreciative of Lee's direction, writing that he "avoid[s] the chocolate-box visuals that cheapen so many British costume dramas" and "brings a refreshing period realism to the tale of two sisters that allows Emma Thompson's respectful Oscar-winning script to flourish." Others, however, have pointed out that the adaptation is not faithful to Austen's novel: "Thompson plays fast and loose with Austen, cutting huge chunks out of the novel, adding whole scenes; a mere six or seven lines from the book actually make it into the film". ### Accolades Out of the 1990s Austen adaptations, Sense and Sensibility received the most recognition from Hollywood. It garnered seven nominations at the 68th Academy Awards ceremony, where Thompson received the Award for Best Screenplay Based on Material Previously Produced or Published, making her the only person to have won Oscars for both writing and acting (she had won the Best Actress award for Howards End in 1993). The film also received twelve nominations at the 49th British Academy Film Awards, including Best Film, Best Actress in a Leading Role (for Thompson), and Best Actress in a Supporting Role (for Winslet). In addition, the film won the Golden Bear at the 46th Berlin International Film Festival, making Lee the first director to win the award twice. Despite the recognition given to the film, Lee was not nominated for the Academy Award for Best Director (though he was nominated for the Golden Globe). The scholar Shu-mei Shih and the journalist Clarence Page have attributed this snub to Hollywood's racism against Lee and Chinese cinema in general. Lee sought to avoid turning his omission into a scandal and specifically asked the Taiwan state media not to make it a "national issue," explaining that he endured more pressure when forced to act as his country's representative. ## Legacy and influence Released a few months after the theatrical release of Persuasion, Sense and Sensibility was one of the first English-language period adaptations of an Austen novel to appear in cinemas in over fifty years, the previous being the 1940 film Pride and Prejudice. The year 1995 saw a resurgence of popularity for Austen's works, as Sense and Sensibility and the serial Pride and Prejudice both rocketed to critical and financial success. The two adaptations helped draw more attention to the previously little-known 1995 television film Persuasion, and led to additional Austen adaptations in the following years. In 1995 and 1996, six Austen adaptations were released on film or television. The filming of these productions led to a surge in popularity of many of the landmarks and locations depicted; according to the scholar Sue Parrill, they became "instant meccas for viewers." When Sense and Sensibility was released in cinemas in the US, Town & Country published a six-page article entitled "Jane Austen's England", which focused on the landscape and sites shown in the film. A press book released by the studio, as well as Thompson's published screenplay and diaries, listed all the filming locations and helped to boost tourism. Saltram House, for instance, was carefully promoted during the film's release, and saw a 57 percent increase in attendance. 
In 1996, JASNA's membership increased by fifty percent. The popularity of both Sense and Sensibility and Pride and Prejudice led to the BBC and ITV releasing their Austen adaptations from the 1970s and 1980s on DVD. As the mid-1990s included adaptations of four Austen novels, there were few of her works left to adapt. Andrew Higson argues that this resulted in a "variety of successors" in the genres of romantic comedy and costume drama, as well as films featuring strong female characters. Cited examples include Mrs Dalloway (1997), Mrs. Brown (1997), Shakespeare in Love (1998), and Bridget Jones's Diary (2001). In 2008, Andrew Davies, the screenwriter of Pride and Prejudice, adapted Sense and Sensibility for television. As a reaction to what he said was Lee's overly "sentimental" film, this production features events found in the novel but excluded from Thompson's screenplay, such as Willoughby's seduction of Eliza and his duel with Brandon. It also features actors closer in age to the characters in the source material. Sense and Sensibility has maintained its popularity into the twenty-first century. In 2004, Louise Flavin referred to the 1995 film as "the most popular of the Austen film adaptations," and in 2008, The Independent ranked it as the third-best Austen adaptation of all time, opining that Lee "offered an acute outsider's insight into Austen in this compelling 1995 interpretation of the book [and] Emma Thompson delivered a charming turn as the older, wiser, Dashwood sister, Elinor." Journalist Zoe Williams credits Thompson as the person most responsible for Austen's popularity, explaining in 2007 that Sense and Sensibility "is the definitive Austen film and that's largely down to her." In 2011, The Guardian film critic Paul Laity named it his favourite film of all time, partly because of its "exceptional screenplay, crisply and skilfully done." Devoney Looser reflected on the film in The Atlantic on the 20th anniversary of its release, arguing that the film served as "a turning point" for "pro-feminist masculinity" in Austen adaptations. ## See also - Jane Austen in popular culture - Styles and themes of Jane Austen
5,669,034
Red rail
1,169,978,404
Extinct species of flightless rail which was endemic to Mauritius
[ "Bird extinctions since 1500", "Birds described in 1848", "Extinct animals of Mauritius", "Extinct birds of Indian Ocean islands", "Extinct flightless birds", "Rallidae" ]
The red rail (Aphanapteryx bonasia) is an extinct species of rail that was endemic to the Mascarene island of Mauritius, east of Madagascar in the Indian Ocean. It had a close relative on Rodrigues island, the likewise extinct Rodrigues rail (Erythromachus leguati), with which it is sometimes considered congeneric, but their relationship with other rails is unclear. Rails often evolve flightlessness when adapting to isolated islands, free of mammalian predators, and that was also the case for this species. The red rail was a little larger than a chicken and had reddish, hairlike plumage, with dark legs and a long, curved beak. The wings were small, and its legs were slender for a bird of its size. It was similar to the Rodrigues rail, but was larger, and had proportionally shorter wings. It has been compared to a kiwi or a limpkin in appearance and behaviour. This bird is believed to have fed on invertebrates, and snail shells have been found with damage matching an attack by its beak. Human hunters took advantage of an attraction red rails had to red objects by using coloured cloth to lure the birds so that they could be beaten with sticks. Until subfossil remains were discovered in the 1860s, scientists only knew the red rail from 17th-century descriptions and illustrations. These were thought to represent several different species, which resulted in a large number of invalid junior synonyms. It has been suggested that all late 17th-century accounts of the dodo actually referred to the red rail, after the former had become extinct. The last mention of a red rail sighting is from 1693, and it is thought to have gone extinct around 1700, due to predation by humans and introduced species. ## Taxonomy The red rail was first mentioned as "Indian river woodcocks" by the Dutch ships' pilot Heyndrick Dircksz Jolinck in 1598. By the 19th century, the bird was known only from a few contemporary descriptions referring to red "hens" and names otherwise used for grouse or partridges in Europe, as well as the sketches of the Dutch merchant Pieter van den Broecke and the English traveller Sir Thomas Herbert from 1646 and 1634. While they differed in some details, they were thought to depict a single species by the English naturalist Hugh Edwin Strickland in 1848. The Belgian scientist Edmond de Sélys Longchamps coined the scientific name Apterornis bonasia based on the old accounts mentioned by Strickland. He also included two other Mascarene birds, at the time only known from contemporary accounts, in the genus Apterornis: the Réunion ibis (now Threskiornis solitarius) and the Réunion swamphen (now Porphyrio caerulescens). He thought them related to the dodo and Rodrigues solitaire, due to their shared rudimentary wings, tail, and the disposition of their digits. The name Apterornis had already been used for a different extinct bird genus from New Zealand (originally spelled Aptornis, the adzebills) by the British biologist Richard Owen earlier in 1848. The meaning of bonasia is unclear. Some early accounts refer to red rails by the vernacular names for the hazel grouse, Tetrastes bonasia, so the name evidently originates there. The name itself perhaps refers to bonasus, meaning "bull" in Latin, or bonum and assum, meaning "good roast". It has also been suggested to be a Latin form of the French word bonasse, meaning simple-minded or good-natured. It is also possible that the name alludes to bulls, due to the bird being said to have had a similar attraction to the waving of red cloth. 
The German ornithologist Hermann Schlegel thought van den Broecke's sketch depicted a smaller dodo species from Mauritius, and that the Herbert sketch showed a dodo from Rodrigues, and named them Didus broecki and Didus herberti in 1854. In 1868, the Austrian naturalist Georg Ritter von Frauenfeld brought attention to paintings by the Flemish artist Jacob Hoefnagel depicting animals in the royal menagerie of Emperor Rudolph II in Prague, including a dodo and a bird he named Aphanapteryx imperialis. Aphanapteryx means "invisible-wing", from Greek aphanēs, unseen, and pteryx, wing. He compared it with the birds earlier named from old accounts, and found its beak similar to that of a kiwi or ibis. In 1865, subfossil foot bones and a lower jaw were found along with remains of other Mauritian animals in the Mare aux Songes swamp, and were sent by the British ornithologist Edward Newton to the French zoologist Alphonse Milne-Edwards, who identified them as belonging to a rail in 1868. Milne-Edwards correlated the bones with the bird in the Hoefnagel painting and the old accounts, and combined the genus name Aphanapteryx with the older specific name broecki. Due to nomenclatural priority, the genus name was later combined with the oldest species name bonasia. In the 1860s, the travel journal of the Dutch East India Company ship Gelderland (1601–1603) was rediscovered, which contains good sketches of several now-extinct Mauritian birds attributed to the Dutch artist Joris Laerle, including an unlabelled red rail. More fossils were later found by the French naturalist Theodore Sauzier, who had been commissioned to explore the "historical souvenirs" of Mauritius in 1889, and these were described by Newton and the German ornithologist Hans F. Gadow in 1893. In 1899, an almost complete specimen was found by the French barber Louis Etienne Thirioux, who also found important dodo remains, in a cave in the Vallée des Prêtres; this is the most completely known red rail specimen, and is catalogued as MI 923 in the Mauritius Institute. The second most complete individual (specimen CMNZ AV6284) also mainly consists of bones from the Thirioux collection. More material has since been found in various settings. The yellowish colouration mentioned by English traveller Peter Mundy in 1638, instead of the red of other accounts, was used by the Japanese ornithologist Masauji Hachisuka in 1937 as an argument that it represented a distinct genus and species, Kuina mundyi (the generic name means "water-rail" in Japanese), but the American ornithologist Storrs L. Olson suggested in 1977 that it was possibly due to the observed bird being a juvenile red rail. ### Evolution Apart from being a close relative of the Rodrigues rail, the relationships of the red rail are uncertain. The two are commonly kept as separate genera, Aphanapteryx and Erythromachus, but have also been united as species of Aphanapteryx at times. They were first generically synonymised by Newton and Albert Günther in 1879, due to skeletal similarities. In 1892, the Scottish naturalist Henry Ogg Forbes described Hawkins's rail, an extinct species of rail from the Chatham Islands located east of New Zealand, as a new species of Aphanapteryx, A. hawkinsi. He found the Chatham Islands species more similar to the red rail than the latter was to the Rodrigues rail, and proposed that the Mascarene Islands had once been connected with the Chatham Islands, as part of a lost continent he called "Antipodea". 
Forbes moved the Chatham Islands bird to its own genus, Diaphorapteryx, in 1893, on the recommendation of Newton, but later reverted to his older name. The idea that the Chatham Islands bird was closely related to the red rail and the idea of a connection between the Mascarenes and the Chatham Islands were later criticised by the British palaeontologist Charles William Andrews due to no other species being shared between the islands, and Gadow explained the similarity between the two rails as parallel evolution. In 1945, the French palaeontologist Jean Piveteau found skull features of the red and Rodrigues rail different enough for generic separation, and in 1977, Olson stated that though the two species were similar and derived from the same stock, they had also diverged considerably, and should possibly be kept separate. Based on geographic location and the morphology of the nasal bones, Olson suggested that they were related to the genera Gallirallus, Dryolimnas, Atlantisia, and Rallus. The American ornithologist Bradley C. Livezey was unable to determine the affinities of the red and Rodrigues rail in 1998, stating that some of the features uniting them and some other rails were associated with the loss of flight rather than common descent. He also suggested that the grouping of the red and Rodrigues rail into the same genus may have been influenced by their geographical distribution. The French palaeontologist Cécile Mourer-Chauviré and colleagues also considered the two as belonging to separate genera in 1999. Rails have reached many oceanic archipelagos, which has frequently led to speciation and the evolution of flightlessness. According to the British researchers Anthony S. Cheke and Julian P. Hume in 2008, the fact that the red rail lost much of its feather structure indicates it was isolated for a long time. These rails may be of Asian origin, like many other Mascarene birds. In 2019, Hume supported the distinction of the two genera, and cited the relation between the extinct Mauritius scops owl and the Rodrigues scops owl as another example of the diverging evolutionary paths on these islands. He stated that the relationships of the red and Rodrigues rails were more unclear than those of other extinct Mascarene rails, with many of their distinct features being related to flightlessness and modifications to their jaws due to their diet, suggesting long-term isolation. He suggested their ancestors could have arrived on the Mascarenes during the middle Miocene at the earliest, but it may have happened more recently. The speed at which these features evolved may also have been affected by gene flow, resource availability, and climate events, and flightlessness can evolve rapidly in rails, as well as repeatedly within the same groups, as seen, for example, in Dryolimnas, so the distinctness of the red and Rodrigues rails may not have taken long to evolve (some other specialised rails evolved in less than 1–3 million years). Hume suggested that the two rails were probably related to Dryolimnas, but their considerably different morphology made it difficult to establish how. In general, rails are adept at colonising islands, and can become flightless within a few generations in suitable environments, for example those without predators, yet this also makes them vulnerable to human activities. ## Description From the subfossil bones, illustrations and descriptions, it is known that the red rail was a flightless bird, somewhat larger than a chicken. 
Subfossil specimens range in size, which may indicate sexual dimorphism, as is common among rails. It was about 35–40 cm (14–16 in) long, and the male may have weighed 1.3 kg (2.9 lb) and the female 1 kg (2.2 lb). Its plumage was reddish brown all over, and the feathers were fluffy and hairlike; the tail was not visible in the living bird and the short wings likewise nearly disappeared in the plumage. It had a long, slightly curved, brown bill, and some illustrations suggest it had a nape crest. The bird perhaps resembled a lightly built kiwi, and it has also been likened to a limpkin, both in appearance and behaviour. The cranium of the red rail was the largest among Mascarene rails, and was compressed from top to bottom in side view. The premaxilla that comprised most of the upper bill was long (nearly 47% longer than the cranium) and narrow, and ended in a sharp point. The narial (nostril) openings were 50% of the rostrum's length, and prominent, elongate foramina (openings) ran almost to the front edge of the narial opening. The mandibular rostrum of the lower jaw was long, with the length of the mandibular symphysis (where the halves of the mandible connect) being about 79% of the cranium's length. The mandible had large, deep-set foramina, which ran almost up to a deep sulcus (furrow). Hume examined all available upper beaks in 2019, and while he found no differences in curvature, he thought the differences in length were most likely due to sexual dimorphism. The scapula (shoulder blade) was wide in side view, and the coracoid was comparatively short, with a wide shaft. The sternum (breast bone) and humerus (upper arm bone) were small, indicating that it had lost the power of flight. The humerus was 60–66 mm (2.4–2.6 in) long, and its shaft was strongly curved from top to bottom. The ulna (lower arm bone) was short and strongly arched from top to bottom. Its legs were long and slender for such a large bird, but the pelvis was very wide, robust, and compact, and was 60 mm (2.4 in) in length. The femur (thigh-bone) was very robust, 69–71 mm (2.7–2.8 in) long, and the upper part of the shaft was strongly arched. The tibiotarsus (lower leg bone) was large and robust, especially the upper and lower ends, and was 98–115 mm (3.9–4.5 in) long. The fibula was short and robust. The tarsometatarsus (ankle bone) was large and robust, and 79 mm (3.1 in) long. The red rail differed from the Rodrigues rail in having a proportionately shorter humerus, a narrower and longer skull, and shorter and higher nostrils. They differed considerably in plumage, based on early descriptions. The red rail was also larger, with somewhat smaller wings, but their leg proportions were similar. The pelvis and sacrum were also similar. The Dutch ornithologist Marc Herremans suggested in 1989 that the red and Rodrigues rails were neotenic, with juvenile features such as weak pectoral apparatuses and downy plumage. ### Contemporary descriptions Mundy visited Mauritius in 1638 and described the red rail as follows: > A Mauritius henne, a Fowle as bigge as our English hennes, of a yellowish Wheaten Colour, of which we only got one. It hath a long, Crooked sharpe pointed bill. Feathered all over, butte on their wings they are soe Few and smalle that they cannot with them raise themselves From the ground. There is a pretty way of taking them with a red cap, but this of ours was taken with a stick. They bee very good Meat, and are also Cloven footed, soe that they can Neyther Fly nor Swymme. 
Another English traveller, John Marshall, described the bird as follows in 1668: > Here are also great plenty of Dodos or red hens which are larger a little than our English henns, have long beakes and no, or very little Tayles. Their fethers are like down, and their wings so little that it is not able to support their bodies; but they have long leggs and will runn very fast, and that a man shall not catch them, they will turn so about in the trees. They are good meate when roasted, tasting something like a pig, and their skin like pig skin when roosted, being hard. ### Contemporary depictions The two most realistic contemporary depictions of red rails, the Hoefnagel painting from ca. 1610 and the sketches from the Gelderland ship's journal from 1601 attributed to Laerle, were brought to attention in the 19th century. Much information about the bird's appearance comes from Hoefnagel's painting, based on a bird in the menagerie of Emperor Rudolph II around 1610. It is the only unequivocal coloured depiction of the species, showing the plumage as reddish brown, but it is unknown whether it was based on a stuffed or living specimen. The bird had most likely been brought alive to Europe, as it is unlikely that taxidermists were on board the visiting ships, and spirits were not yet used to preserve biological specimens. Most tropical specimens were preserved as dried heads and feet. It had probably lived in the emperor's zoo for a while together with the other animals painted for the same series. The painting was discovered in the emperor's collection and published in 1868 by Georg von Frauenfeld, along with a painting of a dodo from the same collection and artist. This specimen is thought to have been the only red rail that ever reached Europe. The red rail depicted in the Gelderland journal appears to have been stunned or killed, and the sketch is the earliest record of the species. It is the only illustration of the species drawn on Mauritius, and according to Hume, the most accurate depiction. The image was sketched with pencil and finished in ink, but details such as a deeper beak and the shoulder of the wing are only seen in the underlying sketch. In addition, there are three rather crude black-and-white sketches, but differences in them were enough for some authors to suggest that each image depicted a distinct species, leading to the creation of several scientific names which are now synonyms. An illustration in van den Broecke's 1646 account (based on his stay on Mauritius in 1617) shows a red rail next to a dodo and a one-horned goat, but is not referenced in the text. An illustration in Herbert's 1634 account (based on his stay in 1629) shows a red rail between a broad-billed parrot and a dodo, and has been referred to as "extremely crude" by Hume. Mundy's 1638 illustration was published in 1919. As suggested by Greenway, there are also depictions of what appears to be a red rail in three of the Dutch artist Roelant Savery's paintings. In his famous Edwards' Dodo painting from 1626, a rail-like bird is seen swallowing a frog behind the dodo, but Hume has doubted this identification and that of red rails in other Savery paintings, suggesting they may instead show Eurasian bitterns. In 1977, the American ornithologist Sidney Dillon Ripley noted a bird resembling a red rail figured in the Italian artist Jacopo Bassano's painting Arca di Noè ("Noah's Ark") from ca. 1570. 
Cheke pointed out that it is doubtful that a Mauritian bird could have reached Italy this early, but the attribution may be inaccurate, as Bassano had four artist sons who used the same name. A similar bird is also seen in the Flemish artist Jan Brueghel the Elder's Noah's Ark painting. Hume concluded that these paintings also show Eurasian bitterns rather than red rails. ## Behaviour and ecology Contemporary accounts are repetitive and do not shed much light on the life history of the red rail. Based on fossil localities, the bird occurred widely on Mauritius, in montane, lowland, and coastal habitats. The shape of the beak indicates it could have captured reptiles and invertebrates, and the differences in bill length suggest the sexes foraged on items of different sizes. It may also have scavenged breeding colonies of birds and nesting-sites of tortoises, as the Rodrigues rail did. No contemporary accounts were known to mention the red rail's diet until the 1660s report of Johannes Pretorius about his stay on Mauritius was published in 2015, in which he mentioned that the bird "scratches in the earth with its sharp claws like a fowl to find food such as worms under the fallen leaves." Milne-Edwards suggested that since the tip of the red rail's bill was sharp and strong, it fed by crushing molluscs and other shells, like oystercatchers do. There were many endemic land-snails on Mauritius, including the large, extinct Tropidophora carinata, and subfossil shells have been found with puncture holes on their lower surfaces, which suggest predation by birds, probably matching attacks from the beak of the red rail. The similarly sized weka of New Zealand punctures shells of land-snails to extract meat, but can also swallow Powelliphanta snails; Hume suggested the red rail was also able to swallow snails whole. Since Pretorius mentioned the red rail searched for worms in leaf-litter, Hume suggested this could refer to nemertean and planarian worms; Mauritius has endemic species of these groups which live in leaf-litter and rotten wood. He could also have referred to the now extinct worm-snake Madatyphlops cariei, which was up to 200 mm (7.9 in) long, and probably lived in leaf-litter like its relatives do. Hume noted that the front of the red rail's jaws was pitted with numerous foramina, running from the nasal aperture to almost the tip of the premaxilla. These were mostly oval, varying in depth and inclination, and became shallower hindward from the tip. Similar foramina can be seen in probing birds, such as kiwis, ibises, and sandpipers. While unrelated, these three bird groups share a foraging strategy; they probe for live food beneath the substrate, and have elongated bills with clusters of mechanoreceptors concentrated at the tip. Their bill-tips allow them to detect buried prey by sensing cues from the substrate. The foramina on the bill of the red rail were comparable to those in other probing rails with long bills (such as the extinct snipe-rail), though not as concentrated on the tip, and the front end of the bill's curvature also began at the front of the nasal opening (as well as the same point in the mandible). 
A 1631 letter probably by the Dutch lawyer Leonardus Wallesius (long thought lost, but rediscovered in 2017) uses word-play to refer to the animals described, with red rails supposedly being an allegory for soldiers: > The soldiers were very small and moved slowly, so that we could catch them easily with our hands. Their armor was their mouth, which was very sharp and pointed; they used it instead of a dagger, were very cowardly and nervous; they did not behave as soldiers at all, and walked in a disorderly manner, one here, the other there, and did not show any faithfulness towards one another. While the red rail was swift and could escape when chased, it was easily lured by waving a red cloth, which the birds approached to attack; a similar behaviour was noted in its relative, the Rodrigues rail. The birds could then be picked up, and their cries when held would draw more individuals to the scene, as the birds, which had evolved in the absence of mammalian predators, were curious and not afraid of humans. Herbert described its behaviour towards red cloth in 1634: > The hens in eating taste like parched pigs, if you see a flocke of twelve or twenties, shew them a red cloth, and with their utmost silly fury they will altogether flie upon it, and if you strike downe one, the rest are as good as caught, not budging an iot till they be all destroyed. Many other species endemic to Mauritius became extinct after the arrival of humans heavily damaged the island's ecosystem, which is now hard to reconstruct. The surviving endemic fauna is still seriously threatened. The red rail lived alongside other recently extinct Mauritian birds such as the dodo, the broad-billed parrot, the Mascarene grey parakeet, the Mauritius blue pigeon, the Mauritius scops owl, the Mascarene coot, the Mauritian shelduck, the Mauritian duck, and the Mauritius night heron. Extinct Mauritian reptiles include the saddle-backed Mauritius giant tortoise, the domed Mauritius giant tortoise, the Mauritian giant skink, and the Round Island burrowing boa. The small Mauritian flying fox and the snail Tropidophora carinata lived on Mauritius and Réunion, but became extinct on both islands. Some plants, such as Casearia tinifolia and the palm orchid, have also become extinct. ## Relationship with humans To the sailors who visited Mauritius from 1598 onwards, the fauna was mainly interesting from a culinary standpoint. The reports dwell upon the varying ease with which the bird could be caught according to the hunting method and the fact that when roasted it was considered similar to pork. The last detailed account of the red rail was by the German pastor Johann Christian Hoffmann, who was on Mauritius in the early 1670s and described a hunt as follows: > ... [there is also] a particular sort of bird known as toddaerschen which is the size of an ordinary hen. [To catch them] you take a small stick in the right hand and wrap the left hand in a red rag, showing this to the birds, which are generally in big flocks; these stupid animals precipitate themselves almost without hesitation on the rag. I cannot truly say whether it is through hate or love of this colour. Once they are close enough, you can hit them with the stick, and then have only to pick them up. 
Once you have taken one and are holding it in your hand, all the others come running up as it [sic] to its aid and can be offered the same fate. Hoffmann's account refers to the red rail by the German version of the Dutch name originally applied to the dodo, "dod-aers", and John Marshall used "red hen" interchangeably with "dodo" in 1668. Milne-Edwards suggested that early travellers may have confused young dodos with red rails. The British ornithologist Alfred Newton (brother of Edward) suggested in 1868 that the name of the dodo was transferred to the red rail after the former had gone extinct. Cheke suggested in 2008 that all post-1662 references to "dodos" therefore refer to the rail instead. A 1681 account of a "dodo", previously thought to have been the last, mentioned that the meat was "hard", similar to the description of red hen meat. The British writer Errol Fuller has also cast the 1662 "dodo" sighting into doubt, as the reaction to distress cries of the birds mentioned matches what was described for the red rail. In 2020, Cheke and the British researcher Jolyon C. Parish suggested that all mentions of dodos after the mid-17th century instead referred to red rails, and that the dodo had disappeared due to predation by feral pigs during a hiatus in settlement of Mauritius (1658–1664). The dodo's extinction therefore was not realised at the time, since new settlers had not seen real dodos, but as they expected to see flightless birds, they referred to the red rail by that name instead. Since red rails probably had larger clutches than dodos (as in other rails) and their eggs could be incubated faster, and their nests were perhaps concealed like those of the Rodrigues rail, they probably bred more efficiently, and were less vulnerable to pigs. They may also have foraged from the digging, scraping and rooting of the pigs, as does the weka. Some 230 years before Charles Darwin's theory of evolution, the appearance of the red rail and the dodo led Mundy to speculate: > Of these 2 sorts off fowl afforementionede, For oughtt wee yett know, Not any to bee Found out of this Iland, which lyeth aboutt 100 leagues From St. Lawrence. A question may bee demaunded how they should bee here and Not elcewhere, beeing soe Farer From other land and can Neither fly or swymme; whither by Mixture off kindes producing straunge and Monstrous formes, or the Nature of the Climate, ayer and earth in alltring the First shapes in long tyme, or how. ### Extinction Many terrestrial rails are flightless, and island populations are particularly vulnerable to anthropogenic (human-caused) changes; as a result, rails have suffered more extinctions than any other family of birds. All six endemic species of Mascarene rails are extinct, all as a result of human activities. In addition to hunting pressure by humans, the fact that the red rail nested on the ground made it vulnerable to pigs and other introduced animals, which ate its eggs and young, probably contributing to its extinction, according to Cheke. Hume pointed out that the red rail had coexisted with introduced rats since at least the 14th century, which did not appear to have affected them (as they seem to have been relatively common in the 1680s), and they were probably able to defend their nests (Dryolimnas rails have been observed killing rats, for example). They also seemed to have managed to survive alongside humans as well as introduced pigs and crab-eating macaques. 
Since the red rail was referred to by names used for the dodo in the late 1600s, it is uncertain which is the latest account of the dodo itself. When the French traveller François Leguat, who had become familiar with the Rodrigues rail in the preceding years, arrived on Mauritius in 1693, he remarked that the red rail had become rare. He was the last writer to mention the bird, so it is assumed that it became extinct around 1700. Feral cats, which are effective predators of ground-inhabiting birds, were established on Mauritius around the late 1680s (to control rats), and this has caused the rapid disappearance of rails elsewhere, for example on Aldabra Atoll. Hume suggested that the red rail, being inquisitive and fearless, would have been easy prey for cats, and was thereby driven to extinction. ## See also - Holocene extinction - List of extinct birds
47,335,140
The Triumph of Cleopatra
1,063,434,910
1821 painting by William Etty
[ "1821 paintings", "Bathing in art", "Collections of the Lady Lever Art Gallery", "Maritime paintings", "Nude art", "Paintings based on works by Plutarch", "Paintings by William Etty", "Paintings depicting Cleopatra" ]
The Triumph of Cleopatra, also known as Cleopatra's Arrival in Cilicia and The Arrival of Cleopatra in Cilicia, is an oil painting by English artist William Etty. It was first exhibited in 1821, and is now in the Lady Lever Art Gallery in Port Sunlight, Merseyside. During the 1810s Etty had become widely respected among staff and students at the Royal Academy of Arts, in particular for his use of colour and ability to paint realistic flesh tones. Despite having exhibited at every Summer Exhibition since 1811, he attracted little commercial or critical interest. In 1820, he exhibited The Coral Finder, which showed nude figures on a gilded boat. This painting attracted the attention of Sir Francis Freeling, who commissioned a similar painting on a more ambitious scale. The Triumph of Cleopatra illustrates a scene from Plutarch's Life of Antony and Shakespeare's Antony and Cleopatra, in which Cleopatra, Queen of Egypt, travels to Tarsus in Cilicia aboard a magnificently decorated ship to cement an alliance with the Roman general Mark Antony. An intentionally cramped and crowded composition, it shows a huge group of people in various states of undress, gathering on the bank to watch the ship's arrival; another large number is on board. Although not universally admired in the press, the painting was an immediate success, making Etty famous almost overnight. Buoyed by its reception, Etty devoted much of the next decade to creating further history paintings containing nude figures, becoming renowned for his combination of nudity and moral messages. ## Background William Etty was born in York in 1787, the son of a miller and baker. He showed artistic promise from an early age, but his family were financially insecure, and at the age of 12 he left school to become an apprentice printer in Hull. On completing his seven-year indenture he moved to London "with a few pieces of chalk-crayons in colours", with the aim of emulating the Old Masters and becoming a history painter. Etty gained acceptance to the Royal Academy Schools in early 1807. After a year spent studying under the renowned portrait painter Thomas Lawrence, Etty returned to the Royal Academy, drawing in the life class and copying other paintings. He was unsuccessful in all the Academy's competitions, and every painting he submitted for the Summer Exhibition was rejected. In 1811, one of his paintings, Telemachus Rescues Antiope from the Fury of the Wild Boar, was finally accepted for the Summer Exhibition. Etty was becoming widely respected at the Royal Academy for his painting, particularly his use of colour and his ability to produce realistic flesh tones, and from 1811 onwards had at least one work accepted for the Summer Exhibition each year. However, he had little commercial success and generated little interest over the next few years. At the 1820 Summer Exhibition Etty exhibited The Coral Finder: Venus and her Youthful Satellites Arriving at the Isle of Paphos. Strongly inspired by Titian, The Coral Finder depicts Venus Victrix lying nude in a golden boat, surrounded by scantily-clad attendants. It was sold at exhibition to piano manufacturer Thomas Tomkinson for £30 (about £ in 2023 terms). Sir Francis Freeling admired The Coral Finder at its exhibition, and learning that it had been sold he commissioned Etty to paint a similar picture on a more ambitious scale, for a fee of 200 guineas (about £ in 2023 terms). 
Etty had for some time been musing on the possibility of a painting of Cleopatra and took the opportunity provided by Freeling to paint a picture of her based loosely on the composition of The Coral Finder. ## Composition The Triumph of Cleopatra is based loosely on Plutarch's Life of Antony as repeated in Shakespeare's Antony and Cleopatra, in which Cleopatra, Queen of Egypt, travels to Tarsus in Cilicia aboard a grand ship to cement an alliance with the Roman general Mark Antony. > Therefore when she was sent unto by divers letters, both from Antonius himself and also from his friends, she made so light of it and mocked Antonius so much that she disdained to set forward otherwise but to take her barge in the river of Cydnus, the poop whereof was of gold, the sails of purple, and the oars of silver, which kept stroke in rowing after the sound of the music of flutes, howboys, cithernes, viols, and such other instruments as they played upon in the barge. And now for the person of herself: she was laid under a pavilion of cloth of gold of tissue, apparelled and attired like the goddess Venus commonly drawn in picture; and hard by her, on either hand of her, pretty fair boys apparelled as painters do set forth god Cupid, with little fans in their hands, with the which they fanned wind upon her. Her ladies and gentlewomen also, the fairest of them were apparelled like the nymphs Nereides (which are the mermaids of the waters) and like the Graces, some steering the helm, others tending the tackle and ropes of the barge, out of which there came a wonderful passing sweet savour of perfumes, that perfumed the wharf's side, pestered with innumerable multitudes of people. While superficially similar to The Coral Finder, Cleopatra is more closely related to the style of Jean-Baptiste Regnault, with its deliberately cramped and crowded composition. The individual figures are intentionally out of proportion to each other and to the ship, while numerous figures are tightly positioned within a relatively small section of the painting. As well as from Regnault, the work borrows elements from Titian, Rubens and classical sculpture. The figures are painted as groups, and while each figure and group of figures is carefully arranged and painted, the combination of groups gives the appearance of a confused mass surrounding the ship when the painting is viewed as a whole. (Etty's 1958 biographer Dennis Farr comments that "[Cleopatra] contains elements enough for three or four paintings no less ambitious but more maturely planned.") The scene includes a number of images based on drawings Etty had sketched while out and about in London, such as the mother holding up her baby to see the view and the crowd on the roof of a temple in the background. It also includes elements of European painting that Etty had learned while copying Old Master artworks as a student, such as the putti in the sky. Etty greatly admired the Venetian school, and the painting includes obvious borrowings from Titian and other Venetian artists. It also contains a number of elements from the paintings of Rubens, such as the Nereids and Triton in the sea in front of the ship. Unusually for an English painting of the period in its representation of a queen of an African country the group of Cleopatra's attendants includes both dark- and light-skinned figures shown on equal terms and with equal prominence. 
From the earliest days of his career Etty had been interested in depicting variations in skin colour, and The Missionary Boy, believed to be his oldest significant surviving painting, shows a dark-skinned child. ## Reception Cleopatra caused an immediate sensation; Etty later claimed that the day after the Summer Exhibition opened he "awoke famous". The May 1821 issue of The Gentleman's Magazine hailed Cleopatra as "belonging to the highest class", and Charles Robert Leslie described it as "a splendid triumph of colour". The painting did not meet with universal approval. Blackwood's Edinburgh Magazine conceded that the painting had been "seen and admired at the Royal Academy" but condemned Etty's taking a mythological approach to a historical subject: > The effect of this picture would have been much more intense had the painter treated it as a mere fact, and had not brought upon the scene those flying Cupids who turn the thing into a mythological fable. Real boys dressed like Cupids would have been proper, but aerial beings are impertinences, and put one out when one is thinking of the sex. If this amorous pageant had been a mere fiction, instead of having actually taken place, still the power of its delineation would have consisted in its probability. Etty attempted to replicate the success of Cleopatra, and his next significant exhibited work was A Sketch from One of Gray's Odes (Youth on the Prow), exhibited at the British Institution in January 1822. As with The Coral Finder and Cleopatra, this painting showed a gilded boat filled with nude figures, and its exhibition provoked condemnation from The Times: > We take this opportunity of advising Mr. Etty, who got some reputation for painting "Cleopatra's Galley", not to be seduced into a style which can gratify only the most vicious taste. Naked figures, when painted with the purity of Raphael, may be endured: but nakedness without purity is offensive and indecent, and on Mr. Etty's canvass is mere dirty flesh. Mr. Howard, whose poetical subjects sometimes require naked figures, never disgusts the eye or mind. Let Mr. Etty strive to acquire a taste equally pure: he should know, that just delicate taste and pure moral sense are synonymous terms. Despite the tone, Etty was pleased to be noticed by a newspaper as influential as The Times, and much later confessed how delighted he was that the "Times noticed me. I felt my chariot wheels were on the right road to fame and honour, and I now drove on like another Jehu!" Possibly as a result of the criticism in The Times, Freeling asked Etty to overpaint the figures in the foreground of Cleopatra. In 1829, after Etty had become a respected artist, Freeling allowed the restoration of the figures to their original condition. ## Legacy The criticism did little to dissuade Etty from attempting to reproduce the success of Cleopatra, and he concentrated on painting further history paintings containing nude figures. He exhibited 15 paintings at the Summer Exhibition in the 1820s (including Cleopatra), and all but one contained at least one nude figure. In so doing Etty became the first English artist to treat nude studies as a serious art form in their own right, capable of being aesthetically attractive and of delivering moral messages. In 1823–24 Etty made an extended trip to study in France and Italy, and returned a highly accomplished artist. 
His monumental 304 by 399 cm (10 ft by 13 ft 1 in) 1825 painting The Combat: Woman Pleading for the Vanquished was extremely well-received, and Etty began to be spoken of as one of England's finest painters. In February 1828 Etty soundly defeated John Constable by eighteen votes to five to become a full Royal Academician, at the time the highest honour available to an artist. On occasion he would re-use elements from Cleopatra in his later paintings, such as the black soldier who squats on the side of the ship in Cleopatra and who also sits watching dancers in his 1828 The World Before the Flood. Etty continued to produce paintings ranging from still lifes to formal portraits, and to attract both admiration for his technique and criticism for supposed obscenity, until his death in 1849. In the years following his death Etty's work became highly collectable, and his works fetched huge sums on resale. Changing tastes from the 1870s onwards meant history paintings in Etty's style fell rapidly out of fashion, and by the end of the 19th century, the value of all of his works had fallen below their original prices. Despite its technical flaws, Cleopatra remained a favourite among many of Etty's admirers during his lifetime; in 1846 Elizabeth Rigby described it as a "glorious confusion of figures" and "that wonderful 'Cleopatra' of Etty's". Following Freeling's death in 1836, Cleopatra was sold for 210 guineas, around the same price Freeling had paid for it, and entered the collection of Lord Taunton. While in Taunton's ownership it was shown at a number of important exhibitions, including a major 1849 Etty retrospective, the Art Treasures Exhibition of 1857 and the 1862 International Exhibition. Following Taunton's death in 1869 it was sold to a succession of owners for a variety of prices, peaking at 500 guineas (about £ in 2023 terms) in 1880 and dropping in price on each subsequent resale. In 1911 it was bought for 240 guineas (about £ in 2023 terms) by William Lever, 1st Viscount Leverhulme, who was a great admirer of Etty and had a number of his paintings hanging in the entrance hall of his home. It has remained in the collection Leverhulme assembled, housed from 1922 in the Lady Lever Art Gallery, ever since.
556,344
Angus Lewis Macdonald
1,119,499,091
Canadian lawyer and politician (1890–1954)
[ "1890 births", "1954 deaths", "Canadian Expeditionary Force officers", "Canadian King's Counsel", "Canadian Roman Catholics", "Canadian legal scholars", "Canadian military personnel of World War I", "Canadian people of Scottish descent", "Harvard Law School alumni", "Lawyers in Nova Scotia", "Liberal Party of Canada MPs", "Members of the House of Commons of Canada from Ontario", "Members of the King's Privy Council for Canada", "Nova Scotia Liberal Party MLAs", "Nova Scotia political party leaders", "People from Inverness County, Nova Scotia", "Premiers of Nova Scotia", "Schulich School of Law alumni", "St. Francis Xavier University alumni" ]
Angus Lewis Macdonald PC QC (August 10, 1890 – April 13, 1954), popularly known as 'Angus L.', was a Canadian lawyer, law professor and politician from Nova Scotia. He served as the Liberal premier of Nova Scotia from 1933 to 1940, when he became the federal minister of defence for naval services. He oversaw the creation of an effective Canadian navy and Allied convoy service during World War II. After the war, he returned to Nova Scotia to become premier again. In the election of 1945, his Liberals returned to power while their main rivals, the Conservatives, failed to win a single seat. The Liberal rallying cry, "All's Well With Angus L.," was so effective that the Conservatives despaired of ever beating Macdonald. He died in office in 1954. Macdonald's more than 15 years as premier brought fundamental changes. Under his leadership, the Nova Scotia government spent more than \$100 million paving roads, building bridges, extending electrical transmission lines and improving public education. Macdonald dealt with the mass unemployment of the Great Depression by putting the jobless to work on highway projects. He felt direct government relief payments would weaken moral character, undermine self-respect and discourage personal initiative. However, he also faced the reality that the financially strapped Nova Scotia government could not afford to participate fully in federal relief programs that required matching contributions from the provinces. Macdonald was considered one of his province's most eloquent political orators. He articulated a philosophy of provincial autonomy, arguing that poorer provinces needed a greater share of national tax revenues to pay for health, education and welfare. He contended that Nova Scotians were victims of a national policy that protected the industries of Ontario and Quebec with steep tariffs forcing people to pay higher prices for manufactured goods. It was no accident, Macdonald said, that Nova Scotia had gone from the richest province per capita before Canadian Confederation in 1867 to poorest by the 1930s. Macdonald was a classical liberal in the 19th-century tradition of John Stuart Mill. He believed in individual freedom and responsibility and feared that the growth of government bureaucracy would threaten liberty. For him, the role of the state was to provide basic services. He supported public ownership of utilities like the Nova Scotia Power Commission, but rejected calls for more interventionist policies such as government ownership of key industries or big loans to private companies. ## Early life and education Angus Lewis Macdonald was born August 10, 1890, on a small family farm at Dunvegan, Inverness County, on Cape Breton Island. He was the son of Lewis Macdonald and Veronique "Veronica" Perry, and the ninth child in a family of 14. His mother was from a prominent Acadian family on Prince Edward Island and his maternal grandfather was politician Stanislaus Francis Perry. His father's family had emigrated to Cape Breton from the Scottish Highlands in 1810. The Macdonalds were devout Roman Catholics as well as ardent Liberal Party supporters. In 1905, when Macdonald was 15, the family moved to the town of Port Hood, Cape Breton. Macdonald attended the Port Hood Academy. He hoped to enroll next in the Bachelor of Arts program at St Francis Xavier University in Antigonish, but his family couldn't afford to pay for a university education so Macdonald obtained a teaching licence and taught for two years to finance his education. 
Midway through his university studies, he took another year off to earn money teaching. He completed his final term on credit and was required to teach in the university's high school during 1914–15 to pay off his debt. Macdonald did well at St. FX. He played rugby, joined the debating team, edited the student newspaper and, in his graduating year, won the gold medal in seven of his eight courses. He was also class valedictorian.

## War service

The First World War broke out while Macdonald was earning his university degree. In 1915, he underwent military training in the Canadian Officers Training Corps. In February 1916, he joined the 185th battalion, known as the Cape Breton Highlanders, leaving for Britain in October 1916, where he received further training. Macdonald was finally sent to the front lines in France in May 1918 as a lieutenant in Nova Scotia's 25th battalion. He participated in heavy fighting and on one occasion led his entire company because all of the other officers had been wounded or killed. Macdonald felt fortunate to have been spared, but his luck ran out in Belgium when he was hit in the neck by a German sniper's bullet on November 7, 1918, just four days before the Armistice.

Macdonald spent eight months in Britain recovering from his wound. He returned home to his family in Cape Breton in 1919. Biographer Stephen Henderson writes that the war had made him "more serious and less self-confident", but "struck by the willingness of so many to march to horrible deaths in the name of an abstract principle".

## Life before politics

In September 1919, the 29-year-old Macdonald began studying at Dalhousie Law School in Halifax. During his two years there, Macdonald formed lifelong friendships with students who were to become members of the political elite in the region. Once again, he excelled in athletics, was elected to the Dalhousie students' council, became the associate editor of the student newspaper and led the opposition in the law school's Mock Parliament. He scored firsts in nearly every course and graduated in 1921 with academic distinction.

Macdonald was hired by the Nova Scotia government as assistant deputy attorney-general immediately after graduating from law school. He worked mainly as an administrator, although he occasionally appeared in court to help the attorney general prosecute a case. In 1922, Macdonald became a part-time lecturer at the law school. When he left the attorney-general's office in 1924, he became a full-time professor. Macdonald was a popular and effective teacher. One former student described him sitting at his desk on the rostrum, speaking slowly and deliberately while gazing intently at the ceiling. "The more students disagreed (with one another in class) the more Angus encouraged it."

On June 17, 1924, when he was 33, Macdonald married Agnes Foley, a member of a prominent Irish Catholic family. They had worked together in the attorney general's office, where Foley served as secretary. Between 1925 and 1936, the Macdonalds had three daughters and a son. Agnes raised the children and ran the household after Macdonald entered politics. Biographer John Hawkins writes that she eventually helped her husband win election in a Halifax riding with a significant Irish Catholic population. She had a large circle of friends, including members of the powerful Liberal Women's societies of Halifax. Hawkins also notes that Agnes Macdonald was a gifted hostess who loved conversation. "Quick witted, her rapid and varied flow of language contrasted with Angus L.'s deliberate, thoughtful manner of speaking, which some have described as a 'drawl'."

In 1925–26, while teaching at the Dalhousie Law School, Macdonald took additional courses in law at Columbia University in New York, mainly by correspondence. He used these courses as the basis for full-time graduate work at the Harvard Law School in Boston, Massachusetts, in 1928. Harvard's faculty members saw the law as an instrument for social improvement. That view was reflected in Macdonald's 1929 doctoral thesis on the responsibility of property holders under civil law.

When the deanship of the law school came open in 1929, Macdonald agonized over whether he should seek the job. He apparently had strong support from several members of the university's board of governors. At the same time, however, he was increasingly drawn to politics, and accepting the deanship would mean postponing his political ambitions indefinitely. In the end, the job was offered to Sidney Smith, another prominent Canadian academic, who accepted on condition that Macdonald remain at the school. Macdonald did stay, but only for one more year. In 1930, he resigned so he would be free to enter politics.

## Early political career

### Federal campaign, 1930

The federal election in the summer of 1930 gave the 40-year-old Macdonald a chance to run for office. He decided to contest the riding of Inverness in his native Cape Breton. There he faced a Conservative opponent whose style contrasted sharply with his own cool and reserved manner. According to biographer John Hawkins, I. D. "Ike" MacDougall "was a gifted performer who before an audience could cut an opponent's well-marshalled arguments until they fell amid roars of laughter. He was the master of hyperbole, pun and high spirits. He could win a rural audience, not by his logic, but by his performance on the platform".

Macdonald campaigned hard, but the trend was against him. The Conservatives, led by R. B. Bennett, defeated Mackenzie King's unpopular Liberals. And in Inverness, Ike MacDougall was re-elected by the narrow margin of 165 votes. It was to be Macdonald's only election defeat. Afterwards, Macdonald retreated to Halifax, where he opened his own private law office in August 1930.

### Provincial convention, 1930

Macdonald was active in provincial Liberal Party organizational work during the latter part of the 1920s. In 1925, the party had suffered a crushing defeat after 43 years in power. On election day, the Liberals were reduced to three seats in the Nova Scotia legislature. Many believed that the time had come to return the party to its reformist roots. Macdonald worked with other reform-minded members to establish a network of younger Liberals intent on reviving their party. In the 1928 provincial election, the Liberals regained some of their lost popularity in one of the closest votes in Nova Scotia history. The Conservatives remained in power with 23 seats to the Liberals' 20.

Economic conditions worsened after the stock market crash of 1929, making it seem increasingly likely that the Liberals would return to power in the next election. Macdonald helped draft a 15-point party platform for approval at a Liberal convention in the fall of 1930. It promised an eight-hour working day and free elementary school textbooks. It also pledged to establish a formal inquiry into Nova Scotia's economic prospects and the province's place within Confederation.
The convention, held on October 1, 1930, proved to be a turning point both for the party and for Macdonald. In a departure from tradition, the party's new leader was chosen by convention delegates instead of by Liberal caucus members at the legislature. Two veterans of Liberal politics, both wealthy businessmen, were contesting the leadership. There was little enthusiasm, however, for either. Just as nominations were about to close, a delegate from Truro rose unexpectedly to nominate Macdonald. Surprised, Macdonald at first declined the nomination, then agreed to accept it when he sensed strong support on the convention floor. A few hours later, the 40-year-old Macdonald had won a resounding first-ballot victory to become the new Liberal leader.

### Liberal party leader

After winning the Liberal leadership, Macdonald travelled the province on speaking tours, helping to organize party support in every constituency. As Liberal leader, he proved to be an effective platform speaker. According to biographer John Hawkins, Macdonald's "plain talk and simplicity" persuaded audiences of his honesty. He developed the ability to explain political issues with a "clarity that every voter could understand". When the legislature was in session, he led the Liberals from the public galleries because he had no seat in the House. There were six vacancies, but the Conservatives refused to call by-elections, fearing they would lose their five-seat majority. Macdonald publicly criticized Premier Gordon Harrington for depriving so many Nova Scotians of representation. He deplored what he called "the loss of responsible government." It was a message that struck a chord in the province that had been the first in Canada to achieve responsible government, in 1848, thanks to the efforts of the great liberal Reformer Joseph Howe. Privately, however, Macdonald rejoiced that the government couldn't risk calling a by-election, telling one supporter years later, "If the truth must be told, I was sometimes afraid that they would open up a seat and deprive me of this sort of ammunition".

Macdonald was able to use the theme of responsible government even more effectively during the provincial election campaign of 1933. The governing Conservatives, desperate to avoid electoral defeat, had enacted changes requiring that new voters' lists be drawn up by government-appointed registrars immediately before each election. Predictably, thousands of Liberal voters were left off the lists, and the new law allowed only three days for corrections. The Liberals secured a court order requiring the appointment of additional registrars, and some of the disenfranchised voters were finally added to the lists. The so-called Franchise Scandal enabled the Liberal press to cast Macdonald as a latter-day Joe Howe, crusading for the rights of the people. "No newcomer to the political scene", writes historian Murray Beck, "has ever become so quickly, widely, and favourably known in such a dramatic fashion". The scandal, compounded by suffering in the province due to the Great Depression, resulted in Macdonald's Liberals winning 22 of the 30 seats on August 22, 1933. The Conservatives were now associated in the public mind with corruption and hard times. They did not regain power for 23 years.

## First term as premier, 1933–37

Macdonald was sworn in as Premier of Nova Scotia on September 5, 1933. He was 43 years old and had never held a seat in the legislature. Historian Murray Beck writes that Macdonald's cabinet was "probably Nova Scotia's strongest".
Biographer Stephen Henderson points out that the "ministers were fresh, motivated and knowledgeable about their portfolios", although Macdonald himself had no experience in finance. Biographer John Hawkins characterizes the Liberal party of 1933 as "a party of thinkers and reformers". During the 1930s, Macdonald's Liberals took credit for leading the province out of the depths of the Great Depression. As journalist Harry Flemming wrote many years later, Macdonald became "God himself", the premier who "paved the roads and put the power into every home from Cape North to Cape Sable".

### Pensions and relief programs

On his first day in office, Macdonald kept a key Liberal promise by bringing in old age pensions for elderly people in need. Cheques were mailed out to 6,000 pensioners by the end of March 1934. It was a popular move, even though monthly pension payments in Nova Scotia were substantially below the national average.

The economic conditions facing the new government were dismal. Tens of thousands of Nova Scotians were impoverished and unemployed. The government expected that 75,000 Nova Scotians would need assistance during the coming winter. Biographer Stephen Henderson writes that Macdonald sympathized with the poor, but he worried that direct government relief payments would undermine their pride and self-respect. Even though direct relief might be cheaper, the Macdonald government preferred to hire the unemployed for public works projects such as paving roads. Henderson reports that in 1933, there were only 45 kilometres (28 mi) of paved roads in the province. By 1937, that figure had risen to 605 kilometres. The government financed such public works by selling low-interest bonds and raising gasoline taxes from six to eight cents a gallon.

Macdonald also urged the federal Conservative government of R. B. Bennett to increase financial support to poorer provinces. At the time, there was no national system of unemployment insurance, and the Bennett Conservatives insisted that the unemployed were mainly the responsibility of the provinces and municipalities. Although the federal government did provide relief during the Depression, Nova Scotia and the two other Maritime provinces were hampered by the federal system of matching grants for relief programs. Under that system, provinces received federal money only if they were willing to contribute a percentage of their own revenues. Thus, the poorest provinces received less federal aid than the richer ones because they couldn't afford to match the federal grants. Historian E. R. Forbes points out, for example, that from January to May 1935, all three levels of government spent an average of \$2.84 for each relief recipient in the Maritimes, an amount less than half the \$6.18 spent in the other six provinces.

### Jones Commission

Macdonald tried to deal with the financial imbalances in Confederation by appointing a Royal Commission. He asked it to recommend economic policies the province should follow to lessen the effects of the Depression and to lay out a framework for negotiations with the federal government. The three-man Jones Commission included Harold Innis, a prominent economic historian who had studied disparities between highly developed manufacturing regions and marginal ones that depended primarily on exploiting natural resources. After touring the province and hearing from more than 200 witnesses, the Commission issued its report in December 1934.
Macdonald could take satisfaction in its finding that high tariffs had sheltered central Canadian manufacturing at Nova Scotia's expense and that federal subsidies to the province were "seriously inadequate". The Commission recommended that the federal government assume responsibility for financing social programs such as old-age pensions and unemployment insurance. It also argued that Ottawa should establish equity among provinces and that redistribution of federal tax revenues should be based on need, an idea that became central to Macdonald's thinking about federal-provincial relations. Among other things, the Commission called on the Macdonald government to continue paving roads; to undertake a program of rural electrification to keep young people on family farms; and to establish a professional civil service that would defend Nova Scotia's interests against federal bureaucrats in Ottawa.

### Tourism and Nova Scotian identity

The Macdonald government took practical steps to promote tourism as a way of bringing money into the province. It improved conditions for tourists by granting small loans to hotel, motel and cottage owners to upgrade their facilities. It also offered cooking classes to restaurant and hotel employees. The government's extensive road building program made it easier for tourists to travel. But biographer Stephen Henderson writes that Macdonald went well beyond these practical steps to promote Nova Scotia as a beautiful and rustic place peopled by colourful Scots, Acadians, Germans and Mi'kmaq. Government advertising portrayed the province "as a place where urban, middle-class families could go to 'step back in time'". Gradually, Henderson maintains, the tourism campaigns created a new identity for Nova Scotians. "They witnessed the provincial state constructing an elaborate network of modern roads; they read books and brochures extolling the beauty of the province, and they heard their premier waxing romantically about the pure, simple nobility of their ancestors."

Macdonald was especially enthusiastic about "the romanticized culture of the Highland Scots". Historian Ian McKay writes that under his leadership, the provincial government gave money to the Gaelic College, bestowed Scottish names on key tourism sites and stationed "a brawny Scots piper" at the border with New Brunswick. Macdonald also helped assemble more than a quarter of a million acres (roughly 1,000 km<sup>2</sup>) for the Cape Breton Highlands National Park, complete with a fancy resort hotel and world-class golf course. "Macdonald believed", Henderson writes, "he had created a piece of Scotland for tourists in the New World". And, as more tourists came, Macdonald's stature grew.

### Trade Union Act

The Nova Scotia legislature recognized the growing power of industrial unions in the 1930s by passing what historian Stephen Henderson calls "Canada's first piece of modern labour legislation". Although Macdonald's governing Liberals and the opposition Conservatives agreed on the need to protect union rights, the parties vied with each other to take credit for the Trade Union Act. In January 1937, Premier Macdonald carried a bottle of bootleg rum to a meeting with union officials in Sydney, Cape Breton, where they gave him a draft bill based on the American National Labor Relations Act. Before the Macdonald government could introduce the bill in the legislature, the Conservatives presented a similar one of their own.
The legislation faced opposition from the Canadian Manufacturers' Association during public hearings, but Liberals and Conservatives combined to pass it unanimously. The new Trade Union Act required employers to bargain with any union chosen by a majority of their employees. It also prohibited employers from firing workers for organizing a union.

### Government patronage

Nova Scotia's well-entrenched system of paying off government supporters with jobs and contracts continued to flourish under the Macdonald Liberals. In his comprehensive history of Canadian patronage, journalist Jeffrey Simpson writes that the Liberals used road improvements to win votes, with highway crews "especially busy before and during election campaigns." Simpson adds that the Liberals awarded government contracts to companies approved by the party. In return, the firms were required to kick back some of the money they received to the Liberals.

Biographer Stephen Henderson argues that Macdonald himself did not relish the traditional practice of filling government jobs with party supporters. Nevertheless, the "wave of partisan hirings and firings" continued, as committees in each riding "scrutinized employees for inappropriate political activity and rated prospective candidates based on what they or their families had done for the Liberal party".

## Second term as premier, 1937–40

Fortunately for the Macdonald government, economic conditions improved during the 1930s. In March 1937, Macdonald announced that after 14 years of running operating deficits, the Nova Scotia government had recorded a surplus, with another forecast for the next year. The pro-Liberal Halifax Chronicle gleefully described the scene in the legislature: "the House sat for a moment, as if not comprehending the good news, then rocked with acclaim, at least the Government side of the House did, though the opposition, stilled and stunned-like, sat like figures carved in stone". Macdonald promised the government would spend another \$7.5 million on its popular road paving program, overseen by A. S. MacMillan, the veteran Minister of Highways. MacMillan, also Chairman of the Nova Scotia Power Commission, had been extending electrical service into rural areas. He now introduced a rural electrification bill designed to subsidize the cost of providing electricity. After these preparations, the premier called a provincial election for June 29, 1937. Macdonald campaigned on his government's record. On election day, his Liberals were rewarded with 25 of the 30 seats in the legislature.

Prime Minister William Lyon Mackenzie King had invited Macdonald to run for federal office during the general election of 1935. Although Macdonald turned him down, there were strong rumours in 1937 that Macdonald would soon enter federal politics. Biographer Stephen Henderson writes, however, that Macdonald wanted to remain as premier so he could present Nova Scotia's case to a Royal Commission on federal-provincial relations.

### Rowell-Sirois Commission

The Depression of the 1930s exposed glaring weaknesses in federal-provincial financial arrangements. Canada's poorer provinces found it impossible to cope with widespread poverty and hunger while the federal government resisted taking full responsibility for unemployment relief. By 1937, conditions had become so desperate that the provinces of Manitoba and Saskatchewan faced bankruptcy.
Finally, in August 1937, Prime Minister King appointed the Royal Commission on Dominion-Provincial Relations, popularly known as the Rowell-Sirois Commission. According to biographer Stephen Henderson, Macdonald played an important role in shaping the Commission's final recommendations. Macdonald wrote Nova Scotia's submission and presented it himself when the Commission held hearings in Halifax in February 1938. He called on the federal government to take full responsibility for social programs such as unemployment insurance, old-age pensions and mothers' allowances. Macdonald recommended that the federal government be given exclusive jurisdiction over income taxes and succession duties to pay for these programs. He argued, however, that to maintain their independence, the provinces needed to collect indirect sources of revenue such as sales taxes. He also called for exclusive provincial control over such minor tax fields as gasoline and electricity taxes.

A central part of Macdonald's case concerned the redistribution of wealth from richer provinces to poorer ones. His argument was based on the premise that richer provinces benefited from national economic policies such as high tariffs, while poorer provinces were penalized by them. Macdonald suggested that compensatory subsidies to poorer, less-populated provinces be based on need, not population, so that they could pay for government services available in other parts of the country without having to impose higher-than-average levels of taxation.

The Commission's final report, released in May 1940, reflected many of Macdonald's recommendations. Mackenzie King called a federal-provincial conference in January 1941 to discuss the report. The provinces failed to agree on what should be done, but in April, the federal government went ahead on its own, announcing it would levy steep taxes on personal and corporate incomes as a temporary measure to finance Canada's participation in the Second World War.

### Summons to Ottawa

The course of Macdonald's political career changed sharply after Canada declared war on Germany in September 1939. Three months later, Mackenzie King called a federal election, and on March 26, 1940, his Liberals won a decisive victory. In spite of his victory, King was under pressure to recruit the country's "best brains" into his wartime cabinet. The death of his minister of defence in an air crash in June 1940 gave King an opportunity to reorganize his administration. He asked J. L. Ralston, a native Nova Scotian, to become his new minister of defence. Ralston agreed but imposed two conditions: first, that J. L. Ilsley of Nova Scotia replace him as minister of finance, and second, that he get assistance in his new portfolio. King decided to appoint two additional ministers, one in charge of the Royal Canadian Air Force, the other to oversee the Royal Canadian Navy. He therefore asked Macdonald to join the federal cabinet as minister of national defence for naval services.

Macdonald, who had fought in World War I as a soldier on the front lines in France and Belgium, decided it was his duty to fight World War II as a political leader in Ottawa. He handed over his responsibilities as premier to A. S. MacMillan and was sworn into the federal cabinet on July 12, 1940.

## Wartime federal career, 1940–45

Macdonald's five years in Ottawa were tumultuous ones. He oversaw a massive increase in Canada's naval forces and played a key role in a political crisis that threatened to tear the Liberal government and the country apart.
He also incurred the wrath of Mackenzie King, a political leader whom Macdonald grew to loathe. When he entered the federal cabinet in 1940, Macdonald seemed a likely candidate to replace the aging King and one day become prime minister himself. By the time he resigned in 1945, Macdonald's federal political career was in tatters.

Mackenzie King wanted Macdonald to stand for a vacant seat in Kingston, Ontario. It was a traditional Conservative riding that had been represented by Sir John A. Macdonald, Canada's first prime minister. In 1935, however, the riding had switched to the Liberals, and King wanted to keep it. "I told Mr. King that I did not know Kingston at all, nor its problems, nor its people", Macdonald wrote later. When the Conservatives agreed not to run a candidate against him, however, Macdonald had no choice but to stand for office in Kingston. He won the seat by acclamation on August 12, 1940.

### Building the navy

Macdonald faced a huge but critical task in overseeing the expansion of the Royal Canadian Navy (RCN). As historian Desmond Morton points out, the RCN was tiny when Canada entered the war in 1939. It consisted of six destroyers, five minesweepers and about 3,000 personnel in its regular forces and volunteer reserves. By the time Macdonald took office in 1940, the RCN had grown to 100 ships and more than 7,000 personnel, but as biographer Stephen Henderson notes, "few of its ships and sailors were ready for service at sea". By the end of the war, the RCN had expanded to about 50 times its original strength, with about 400 fighting ships, almost 500 additional craft and about 96,000 men and women.

The RCN was assigned the task of escorting supply vessels transporting food and other materials needed to keep the war going. This convoy duty was critically important, as German submarines, or U-boats, sought to starve Britain into submission by sinking supply ships. The RCN performed about 40 percent of the war's transatlantic Allied escort duty. Desmond Morton argues it was Canada's "most decisive" military contribution.

Canada's convoy protection efforts did not always run smoothly, however. In the early part of the war, the Canadian navy lacked equipment that could detect submerged submarines, as well as efficient radar for sighting those on the surface. To make matters worse, Canada didn't have the long-range aircraft that were the most effective anti-submarine weapons. As supply ship losses mounted, the RCN struggled to catch up to the better-equipped British and American navies. Macdonald himself lacked military expertise and often depended on senior naval staff who kept him in the dark about equipment shortages and other problems. "Macdonald's administration of Naval Affairs did not rise to brilliance", Henderson writes, "[but] the problem may have lain more with the senior naval staff than with Macdonald". Macdonald's conflict with high-ranking naval officers, particularly Rear Admiral Percy W. Nelles, led to the latter's effective dismissal in 1944. Yet, as the war progressed, the RCN, led by Macdonald, gradually became more effective in protecting the huge cargoes of materials on which Allied victory depended.

### Conscription crisis

Biographer Stephen Henderson maintains that Macdonald played a key role in the wartime conscription crises that beset the federal government in 1942, and again in 1944, as Prime Minister Mackenzie King tried to avoid imposing compulsory military service overseas.
Macdonald himself strongly favoured conscription rather than relying solely on voluntary enlistment. A committed internationalist, he believed it unfair that some bore the sacrifices of overseas service while others escaped what he saw as their military obligations. Macdonald realized, however, that conscription was highly unpopular in French-speaking Quebec and that enforcing it would split the country at a time when national unity was crucial. He also recognized that in the early years of the war, voluntary enlistment was producing enough recruits to meet the needs of the armed forces. Nevertheless, Macdonald continued to push the government to commit itself to conscription if circumstances should change. His position earned him the enmity of the politically cautious Mackenzie King. "Macdonald is a very vain man", the prime minister complained in his diary, "and has an exceptional opinion of himself. Undoubtedly, he came here expecting to possibly lead the Liberal party later on but has found that he will not be able to command the following that he expected".

As the opposition Conservatives continued to press for overseas conscription, the King government held a national plebiscite on April 27, 1942. The plebiscite asked voters to release the government from its previous promise not to introduce compulsory war service. The results confirmed the sharp national split. English Canada voted strongly in favour and French Canada overwhelmingly against. The results of the plebiscite seemed to strengthen the position of ministers who supported conscription. Macdonald's two cabinet colleagues from Nova Scotia, defence minister J. L. Ralston and finance minister J. L. Ilsley, urged the government to introduce conscription immediately. A more cautious Macdonald wanted the government to commit itself to conscription should it be required to support the war effort.

The crisis flared again two years later when the Canadian military called for overseas reinforcements. Ralston wanted King to impose conscription but, at Macdonald's urging, seemed willing to compromise by going along with the prime minister's plan for one last voluntary recruitment campaign. King, however, suddenly dismissed Ralston during a cabinet meeting on November 1, 1944. Macdonald considered resigning, but said later that he would have struck King if he had risen to leave. Instead, he sat in his chair, ripping sheets of notepaper into small shreds and dropping them on the floor.

Stephen Henderson writes that Macdonald's decision not to resign probably saved the King government. King himself seemed to recognize that if Macdonald had left, Ilsley would have resigned too, possibly taking other ministers with him and causing the government's collapse. In the end, King was forced to impose overseas conscription after the failure of the voluntary recruitment campaign, but the war ended soon after and his government survived unscathed. The conscription crisis, however, hardened the animosity between King and his naval minister. Macdonald, disillusioned by what he saw as the chicanery and ruthlessness of national politics, longed to return to Nova Scotia. After King called an election for June 11, 1945, Macdonald resigned from the federal cabinet.

## Provincial premier, 1945–54

When Macdonald returned to Nova Scotia in 1945, he was only 55, but the silver-haired politician now seemed 20 years older. After the retirement of Premier A. S. MacMillan, the Liberals reaffirmed Macdonald's leadership at their convention on August 31, 1945.
Less than two months later, Macdonald's Liberals swept the province, wiping out the Conservatives for the first time since Confederation and winning all but two Cape Breton ridings, where voters elected members of the Co-operative Commonwealth Federation (CCF), the forerunner of the present-day New Democratic Party (NDP).

In spite of his huge victory, a close colleague noted that Macdonald was not the same man he had been before he left Nova Scotia in 1940. He had trouble making decisions, not because he was a procrastinator, but because he was not well. Nevertheless, Macdonald plunged into his role as a leading champion for the provinces. He argued that in order to maintain their independence, provinces needed exclusive jurisdiction over such sources of revenue as gasoline, electricity and amusement taxes. He lobbied for constitutional amendments designed to guarantee provincial rights. Macdonald urged the federal government to accept the 1940 recommendations of the Rowell-Sirois Commission and redistribute national wealth based on need. Such a policy, he maintained, would enable poorer provinces to sustain government services available in other parts of the country without having to impose higher-than-average levels of taxation. In the end, Macdonald won only small victories, such as gaining exclusive provincial access to gasoline taxes. The federal government refused to recognize financial need as the basis for provincial subsidies.

Aside from his role as a national spokesman for provincial rights, Macdonald presided over an administration that invested heavily in education. His government financed the building of rural high schools and extended financial assistance to Dalhousie University's schools of medicine and law. Macdonald also appointed Nova Scotia's first minister of education, Henry Hicks, in 1949, to oversee \$7.6 million in spending, about a fifth of the provincial budget.

The Macdonald Liberals easily won re-election in 1949 and 1953, but the Conservatives made steady gains under Robert Stanfield, their new leader. The Conservatives, for example, drew attention to kickback schemes under which brewing companies, wineries and distilleries contributed to the Liberal party in exchange for the right to sell their products in government liquor stores. The Liberals seemed secure against such allegations, however, as long as they were led by the popular Angus L. Macdonald.

However, Macdonald suffered a slight heart attack on April 11, 1954, and was admitted to hospital, where he died in his sleep two nights later, just four months before his 64th birthday. Stephen Henderson writes that the Nova Scotia legislature sat on the day of his death. Macdonald's seat was draped in Clanranald tartan, and a sprig of heather decorated his desk. Macdonald's body lay in state for three days in the legislative building as more than 100,000 people filed past to pay their respects.

## Aftermath of Macdonald's death

Macdonald's death proved disastrous for provincial Liberals. There was no obvious successor to the popular premier. At the party's leadership convention, held on September 9, 1954, the Liberals appeared badly split along religious lines. After five ballots, the convention rejected Harold Connolly, a Roman Catholic who had served as interim premier after Macdonald's death. Instead, they chose the Protestant Henry Hicks. "Unfortunately for the Liberals", historian Murray Beck writes, "it appeared as if the delegates had ganged up to defeat the only Catholic among the contestants".
Beck also notes that "Nova Scotia governments have always been most vulnerable after a change in leadership". In the next provincial election, held on October 30, 1956, Robert Stanfield and his Conservatives won 24 seats, the Liberals 18. The 23-year Liberal era, begun under Macdonald's leadership, had finally ended.

## Assessment and legacy

Murray Beck writes that Macdonald's political appeal to Nova Scotians may have been even stronger than the legendary Joseph Howe's. Like Howe, Macdonald was a passionate and eloquent leader whose elegantly crafted speeches reflected his wit, wide learning and respect for factual accuracy. Beck writes that by scrupulously fulfilling his campaign promises, Macdonald became known as a leader who always kept his word.

Macdonald's reputation as the premier who led the province out of the Great Depression rested on his commitment to ambitious government projects such as highway construction and rural electrification. He continued to support highway improvements throughout his career. Two projects that he pushed especially hard for, the Canso Causeway linking Cape Breton Island to mainland Nova Scotia and a suspension bridge spanning Halifax Harbour, were completed after his death. The bridge, named in his honour, made it possible to travel between Halifax and Dartmouth without having to board a ferry or drive several kilometres around the Bedford Basin.

Macdonald consistently called for a more equitable redistribution of wealth, so that poorer provinces such as Nova Scotia could share fully in Canada's prosperity. Biographer Stephen Henderson writes that Macdonald deserves credit for the introduction, in 1957, of an equalization scheme designed to enable poorer provinces to provide comparable levels of services to their citizens. Macdonald's advocacy of provincial autonomy, however, fell victim to the centralizing tendencies of a post-war welfare state in which the federal government increasingly assumed greater control over national social programs.

Throughout his life, Macdonald maintained ties to his alma mater, St. Francis Xavier University. He received an honorary doctor of laws degree from St. FX in 1946. He served as honorary chair and fundraiser for the university's centennial celebrations in 1953 and raised money to support student research into the early history of the Scots in Nova Scotia. Macdonald suggested that the reading room in a new university library be called the Hall of the Clans. St. FX adopted the idea and decided to name the library in his honour. Thus, when the Angus L. Macdonald Library officially opened on July 17, 1965, 50 coats of arms representing both Scottish and Irish clans adorned the walls of its reading room.
89988
W. E. B. Du Bois
1173066676
American sociologist and activist (1868–1963)
[ "1868 births", "1963 deaths", "19th-century African-American academics", "19th-century American academics", "19th-century American philosophers", "20th-century African-American academics", "20th-century American academics", "20th-century American male writers", "20th-century American non-fiction writers", "20th-century American philosophers", "20th-century Ghanaian historians", "Activists for African-American civil rights", "African-American agnostics", "African-American historians", "African-American philosophers", "African-American sociologists", "American academic administrators", "American anti-capitalists", "American anti-racism activists", "American economists", "American emigrants to Ghana", "American human rights activists", "American humanitarians", "American male non-fiction writers", "American pan-Africanists", "American people of Bahamian descent", "American people of Dutch descent", "American people of English descent", "American people of French descent", "American people of Haitian descent", "American political philosophers", "American rhetoricians", "American social sciences writers", "American social workers", "American socialists", "American sociologists", "Black studies scholars", "Du Bois family", "Fisk University alumni", "Ghanaian philosophers", "Harvard College alumni", "Historians from Maryland", "Historians from Massachusetts", "Historians of Africa", "Historians of African Americans", "Historians of race relations", "Historians of the Reconstruction Era", "History of the Southern United States", "Humboldt University of Berlin alumni", "Members of the American Academy of Arts and Letters", "Members of the Communist Party USA", "Members of the German Academy of Sciences at Berlin", "NAACP activists", "Naturalized citizens of Ghana", "People from Great Barrington, Massachusetts", "Philosophers from Massachusetts", "Progressive Era in the United States", "Recipients of the Lenin Peace Prize", "Spingarn Medal winners", "University of Massachusetts Amherst", "Urban sociologists", "W. E. B. Du Bois", "White culture scholars", "Wilberforce University faculty" ]
William Edward Burghardt Du Bois (/djuːˈbɔɪs/ dew-BOYSS; February 23, 1868 – August 27, 1963) was an American sociologist, socialist, historian, and Pan-Africanist civil rights activist. Born in Great Barrington, Massachusetts, Du Bois grew up in a relatively tolerant and integrated community. After completing graduate work at the Friedrich Wilhelm University (in Berlin, Germany) and Harvard University, where he was the first African American to earn a doctorate, he became a professor of history, sociology, and economics at Atlanta University. Du Bois was one of the founders of the National Association for the Advancement of Colored People (NAACP) in 1909.

Du Bois rose to national prominence as a leader of the Niagara Movement, a group of black civil rights activists who wanted equal rights for blacks. Du Bois and his supporters opposed the Atlanta compromise, an agreement crafted by Booker T. Washington which provided that Southern blacks would work and submit to white political rule, while Southern whites guaranteed that blacks would receive basic educational and economic opportunities. Instead, Du Bois insisted on full civil rights and increased political representation, which he believed would be brought about by the African-American intellectual elite. He referred to this group as the Talented Tenth, a concept under the umbrella of racial uplift, and believed that African Americans needed access to advanced education to develop their leadership.

Du Bois primarily targeted racism in his polemics, which protested strongly against lynching, Jim Crow laws, and discrimination in education and employment. His cause included people of color everywhere, particularly Africans and Asians in colonies. He was a proponent of Pan-Africanism and helped organize several Pan-African Congresses to fight for the independence of African colonies from European powers. Du Bois made several trips to Europe, Africa and Asia. After World War I, he surveyed the experiences of American black soldiers in France and documented widespread prejudice and racism in the United States military.

Du Bois was a prolific author. His collection of essays, The Souls of Black Folk, is a seminal work in African-American literature; and his 1935 magnum opus, Black Reconstruction in America, challenged the prevailing orthodoxy that blacks were responsible for the failures of the Reconstruction Era. Borrowing a phrase from Frederick Douglass, he popularized the use of the term color line to represent the injustice of the separate but equal doctrine prevalent in American social and political life. He opens The Souls of Black Folk with the central thesis of much of his life's work: "The problem of the twentieth century is the problem of the color-line." His 1940 autobiography Dusk of Dawn is regarded in part as one of the first scientific treatises in the field of American sociology, and he published two other life stories, all three containing essays on sociology, politics and history. In his role as editor of the NAACP's journal The Crisis, he published many influential pieces.

Du Bois believed that capitalism was a primary cause of racism, and he was generally sympathetic to socialist causes throughout his life. He was an ardent peace activist and advocated nuclear disarmament. The United States Civil Rights Act of 1964, embodying many of the reforms for which Du Bois had campaigned his entire life, was enacted a year after his death.
## Early life

### Family and childhood

William Edward Burghardt Du Bois was born on February 23, 1868, in Great Barrington, Massachusetts, to Alfred and Mary Silvina (née Burghardt) Du Bois. Mary Silvina Burghardt's family was part of the very small free black population of Great Barrington and had long owned land in the state. She was descended from Dutch, African, and English ancestors.

William Du Bois's maternal great-great-grandfather was Tom Burghardt, a slave (born in West Africa around 1730) who was held by the Dutch colonist Conraed Burghardt. Tom briefly served in the Continental Army during the American Revolutionary War, which may have been how he gained his freedom during the late 18th century. His son Jack Burghardt was the father of Othello Burghardt, who in turn was the father of Mary Silvina Burghardt.

William Du Bois claimed Elizabeth Freeman as his relative; he wrote that she had married his great-grandfather Jack Burghardt. But Freeman was 20 years older than Burghardt, and no record of such a marriage has been found. It may have been Freeman's daughter, Betsy Humphrey, who married Burghardt after her first husband, Jonah Humphrey, left the area "around 1811", and after Burghardt's first wife died (c. 1810). If so, Freeman would have been William Du Bois's step-great-great-grandmother. Anecdotal evidence supports Humphrey's marrying Burghardt; a close relationship of some form is likely.

William Du Bois's paternal great-grandfather was James Du Bois of Poughkeepsie, New York, an ethnic French-American of Huguenot origin who fathered several children with slave women. One of James' mixed-race sons was Alexander, who was born on Long Cay in the Bahamas in 1803; in 1810, he immigrated to the United States with his father. Alexander Du Bois traveled and worked in Haiti, where he fathered a son, Alfred, with a mistress. Alexander returned to Connecticut, leaving Alfred in Haiti with his mother. Sometime before 1860, Alfred Du Bois immigrated to the United States, settling in Massachusetts. He married Mary Silvina Burghardt on February 5, 1867, in Housatonic, a village in Great Barrington. Alfred left Mary in 1870, two years after their son William was born. Mary Du Bois moved with her son back to her parents' house in Great Barrington, and they lived there until he was five. She worked to support her family (receiving some assistance from her brother and neighbors) until she suffered a stroke in the early 1880s. She died in 1885.

Great Barrington had a majority European American community, who generally treated Du Bois well. He attended the local integrated public school and played with white schoolmates. As an adult, he wrote about the racism he felt as a fatherless child and about being a minority in the town. But teachers recognized his ability and encouraged his intellectual pursuits, and his rewarding experience with academic studies led him to believe that he could use his knowledge to empower African Americans. He graduated from the town's Searles High School. When he decided to attend college, the congregation of his childhood church, the First Congregational Church of Great Barrington, raised the money for his tuition.

### University education

Relying on this money donated by neighbors, Du Bois attended Fisk University, a historically black college in Nashville, Tennessee, from 1885 to 1888. Like other Fisk students who relied on summer and intermittent teaching to support their university studies, Du Bois taught school during the summer of 1886 after his sophomore year.
His travel to and residency in the South was Du Bois's first experience with Southern racism, which at the time encompassed Jim Crow laws, bigotry, suppression of black voting, and lynchings; the lattermost reached a peak in the next decade.

After receiving a bachelor's degree from Fisk, he attended Harvard College (which did not accept course credits from Fisk) from 1888 to 1890, where he was strongly influenced by professor William James, a prominent figure in American philosophy. Du Bois paid his way through three years at Harvard with money from summer jobs, an inheritance, scholarships, and loans from friends. In 1890, Harvard awarded Du Bois his second bachelor's degree, cum laude, in history. In 1891, Du Bois received a scholarship to attend the sociology graduate school at Harvard.

In 1892, Du Bois received a fellowship from the John F. Slater Fund for the Education of Freedmen to attend the Friedrich Wilhelm University for graduate work. While a student in Berlin, he traveled extensively throughout Europe. He came of age intellectually in the German capital while studying with some of that nation's most prominent social scientists, including Gustav von Schmoller, Adolph Wagner, and Heinrich von Treitschke. He also met Max Weber, who was highly impressed with Du Bois and would later cite Du Bois as a counter-example to racists alleging the inferiority of Blacks. Weber would again meet Du Bois in 1904, on a visit to the US just ahead of the publication of his seminal The Protestant Ethic and the Spirit of Capitalism.

He wrote about his time in Germany: "I found myself on the outside of the American world, looking in. With me were white folk – students, acquaintances, teachers – who viewed the scene with me. They did not always pause to regard me as a curiosity, or something sub-human; I was just a man of the somewhat privileged student rank, with whom they were glad to meet and talk over the world; particularly, the part of the world whence I came."

After returning from Europe, Du Bois completed his graduate studies; in 1895, he was the first African American to earn a Ph.D. from Harvard University.

### Wilberforce and Philadelphia

In the summer of 1894, Du Bois received several job offers, including from the prestigious Tuskegee Institute; he accepted a teaching job at Wilberforce University in Ohio. At Wilberforce, Du Bois was strongly influenced by Alexander Crummell, who believed that ideas and morals are necessary tools to effect social change. While at Wilberforce, Du Bois married Nina Gomer, one of his students, on May 12, 1896.

After two years at Wilberforce, Du Bois accepted a one-year research job from the University of Pennsylvania as an "assistant in sociology" in the summer of 1896. He performed sociological field research in Philadelphia's African-American neighborhoods, which formed the foundation for his landmark study, The Philadelphia Negro, published in 1899 while he was teaching at Atlanta University. It was the first case study of a black community in the United States. Among his Philadelphia consultants on the project was William Henry Dorsey, an artist who collected documents, paintings and artifacts pertaining to Black history. Dorsey compiled hundreds of scrapbooks on the lives of Black people during the 18th century and built a collection that he laid out in his home in Philadelphia. Du Bois used the scrapbooks in his research.

By the 1890s, Philadelphia's black neighborhoods had a negative reputation in terms of crime, poverty, and mortality.
Du Bois's book undermined the stereotypes with empirical evidence and shaped his approach to segregation and its negative impact on black lives and reputations. The results led him to realize that racial integration was the key to democratic equality in American cities. The methodology employed in The Philadelphia Negro, namely the description and mapping of social characteristics onto neighborhood areas, was a forerunner of the studies of the Chicago School of Sociology.

While taking part in the American Negro Academy (ANA) in 1897, Du Bois presented a paper in which he rejected Frederick Douglass's plea for black Americans to integrate into white society. He wrote: "we are Negroes, members of a vast historic race that from the very dawn of creation has slept, but half awakening in the dark forests of its African fatherland". In the August 1897 issue of The Atlantic Monthly, Du Bois published "Strivings of the Negro People", his first work aimed at the general public, in which he enlarged upon his thesis that African Americans should embrace their African heritage while contributing to American society.

## Atlanta University

In July 1897, Du Bois left Philadelphia and took a professorship in history and economics at the historically black Atlanta University in Georgia. His first major academic work was his book The Philadelphia Negro (1899), a detailed and comprehensive sociological study of the African-American people of Philadelphia, based on his fieldwork in 1896–1897. This breakthrough in scholarship was the first scientific study of African Americans and a major contribution to early scientific sociology in the U.S.

In the study, Du Bois coined the phrase "the submerged tenth" to describe the black underclass. Later, in 1903, he popularized the term "Talented Tenth", applied to society's elite class. His terminology reflected his opinion that the elite of a nation, both black and white, were critical to achievements in culture and progress. During this period, he wrote dismissively of the underclass, describing them as "lazy" or "unreliable", but – in contrast to other scholars – he attributed many of their societal problems to the ravages of slavery.

Du Bois's output at Atlanta University was prodigious, in spite of a limited budget: he produced numerous social science papers and annually hosted the Atlanta Conference of Negro Problems. He also received grants from the U.S. government to prepare reports about the African-American workforce and culture. His students considered him a brilliant but aloof and strict teacher.

### First Pan-African Conference

Du Bois attended the First Pan-African Conference, held in London on July 23–25, 1900, shortly ahead of the Paris Exhibition of 1900 ("to allow tourists of African descent to attend both events"). The Conference had been organized by people from the Caribbean: Haitians Anténor Firmin and Bénito Sylvain and Trinidadian barrister Henry Sylvester Williams. Du Bois played a leading role in drafting a letter ("Address to the Nations of the World") asking European leaders to struggle against racism and to grant colonies in Africa and the West Indies the right to self-government, and demanding political and other rights for African Americans. By this time, southern states were passing new laws and constitutions to disfranchise most African Americans, an exclusion from the political system that lasted into the 1960s.
At the conclusion of the conference, delegates unanimously adopted the "Address to the Nations of the World" and sent it to various heads of state of countries where people of African descent were living and suffering oppression. The address implored the United States and the imperial European nations to "acknowledge and protect the rights of people of African descent" and to respect the integrity and independence of "the free Negro States of Abyssinia, Liberia, Haiti, etc." It was signed by Bishop Alexander Walters (President of the Pan-African Association), the Canadian Rev. Henry B. Brown (vice-president), Williams (General Secretary) and Du Bois (chairman of the committee on the Address). The address included Du Bois's observation, "The problem of the Twentieth Century is the problem of the colour-line." He used this again three years later in the "Forethought" of his book The Souls of Black Folk (1903).

### 1900 Paris Exposition

Du Bois was the primary organizer of The Exhibit of American Negroes at the Exposition Universelle, held in Paris between April and November 1900, for which he put together a series of 363 photographs aiming to commemorate the lives of African Americans at the turn of the century and challenge the racist caricatures and stereotypes of the day. Also included were charts, graphs, and maps. He was awarded a gold medal for his role as compiler of the materials, which are now housed at the Library of Congress.

### Booker T. Washington and the Atlanta Compromise

In the first decade of the new century, Du Bois emerged as a spokesperson for his race, second only to Booker T. Washington. Washington was the director of the Tuskegee Institute in Alabama, and wielded tremendous influence within the African-American and white communities. Washington was the architect of the Atlanta Compromise, an unwritten deal that he had struck in 1895 with Southern white leaders who dominated state governments after Reconstruction. Essentially, the agreement provided that Southern blacks, who overwhelmingly lived in rural communities, would submit to the current discrimination, segregation, disenfranchisement, and non-unionized employment; that Southern whites would permit blacks to receive a basic education, some economic opportunities, and justice within the legal system; and that Northern whites would invest in Southern enterprises and fund black educational charities.

Despite initially sending congratulations to Washington for his Atlanta Exposition Speech, Du Bois later came to oppose Washington's plan, along with many other African Americans, including Archibald H. Grimke, Kelly Miller, James Weldon Johnson, and Paul Laurence Dunbar – representatives of the class of educated blacks that Du Bois would later call the "talented tenth". Du Bois felt that African Americans should fight for equal rights and higher opportunities, rather than passively submit to the segregation and discrimination of Washington's Atlanta Compromise.

Du Bois was inspired to greater activism by the lynching of Sam Hose, which occurred near Atlanta in 1899. Hose was tortured, burned, and hanged by a mob of two thousand whites. While walking through Atlanta to discuss the lynching with newspaper editor Joel Chandler Harris, Du Bois encountered Hose's burned knuckles in a storefront display. The episode stunned Du Bois, and he resolved that "one could not be a calm, cool, and detached scientist while Negroes were lynched, murdered, and starved".
Du Bois realized that "the cure wasn't simply telling people the truth, it was inducing them to act on the truth". In 1901, Du Bois wrote a review critical of Washington's autobiography Up from Slavery, which he later expanded and published to a wider audience as the essay "Of Mr. Booker T. Washington and Others" in The Souls of Black Folk. Later in life, Du Bois regretted having been critical of Washington in those essays. One of the contrasts between the two leaders was their approach to education: Washington felt that African-American schools should focus primarily on industrial education topics such as agricultural and mechanical skills, to prepare southern blacks for the opportunities in the rural areas where most lived. Du Bois felt that black schools should focus more on a liberal arts and academic curriculum (including the classics, arts, and humanities), because liberal arts were required to develop a leadership elite. However, as sociologist E. Franklin Frazier and economists Gunnar Myrdal and Thomas Sowell have argued, such disagreement over education was a minor point of difference between Washington and Du Bois; both men acknowledged the importance of the form of education that the other emphasized. Sowell has also argued that, despite genuine disagreements between the two leaders, the supposed animosity between Washington and Du Bois actually formed among their followers, not between the two men themselves. Du Bois also made this observation in an interview published in The Atlantic Monthly in November 1965.

### Niagara Movement

In 1905, Du Bois and several other African-American civil rights activists – including Fredrick L. McGhee, Jesse Max Barber and William Monroe Trotter – met in Canada, near Niagara Falls, where they wrote a declaration of principles opposing the Atlanta Compromise, which was incorporated as the Niagara Movement in 1906. They wanted to publicize their ideals to other African Americans, but most black periodicals were owned by publishers sympathetic to Washington, so Du Bois bought a printing press and started publishing Moon Illustrated Weekly in December 1905. It was the first African-American illustrated weekly, and Du Bois used it to attack Washington's positions, but the magazine lasted only for about eight months. Du Bois soon founded and edited another vehicle for his polemics, The Horizon: A Journal of the Color Line, which debuted in 1907. Freeman H. M. Murray and Lafayette M. Hershaw served as The Horizon's co-editors. The Niagarites held a second conference in August 1906, in celebration of the 100th anniversary of abolitionist John Brown's birth, at the West Virginia site of Brown's raid on Harper's Ferry. Reverdy C. Ransom spoke, explaining that Washington's primary goal was to prepare blacks for employment in their current society: "Today, two classes of Negroes, ...are standing at the parting of the ways. The one counsels patient submission to our present humiliations and degradations; ...The other class believe that it should not submit to being humiliated, degraded, and remanded to an inferior place. ...[I]t does not believe in bartering its manhood for the sake of gain."

### The Souls of Black Folk

In an effort to portray the genius and humanity of the black race, Du Bois published The Souls of Black Folk (1903), a collection of 14 essays. James Weldon Johnson said the book's effect on African Americans was comparable to that of Uncle Tom's Cabin.
The introduction famously proclaimed that "the problem of the Twentieth Century is the problem of the color line". Each chapter begins with two epigraphs – one from a white poet, and one from a black spiritual – to demonstrate intellectual and cultural parity between black and white cultures. A major theme of the work was the double consciousness faced by African Americans: being both American and black. This was a unique identity which, according to Du Bois, had been a handicap in the past, but could be a strength in the future: "Henceforth, the destiny of the race could be conceived as leading neither to assimilation nor separatism but to proud, enduring hyphenation." Jonathon S. Kahn, in Divine Discontent: The Religious Imagination of Du Bois, shows how The Souls of Black Folk represents an exemplary text of pragmatic religious naturalism. On page 12, Kahn writes: "Du Bois needs to be understood as an African American pragmatic religious naturalist. By this I mean that, like the American tradition of pragmatic religious naturalism, which runs through William James, George Santayana, and John Dewey, Du Bois seeks religion without metaphysical foundations." Kahn's interpretation of religious naturalism is very broad, but he relates it to specific thinkers. Du Bois's anti-metaphysical viewpoint places him in the sphere of religious naturalism as typified by William James and others.

### Racial violence

Two calamities in the autumn of 1906 shocked African Americans, and they contributed to strengthening support for Du Bois's struggle for civil rights to prevail over Booker T. Washington's accommodationism. First, President Teddy Roosevelt dishonorably discharged 167 Buffalo Soldiers because they were accused of crimes as a result of the Brownsville Affair. Many of the discharged soldiers had served for 20 years and were near retirement. Second, in September, riots broke out in Atlanta, precipitated by unfounded allegations of black men assaulting white women. This was a catalyst for racial tensions based on a job shortage and employers playing black workers against white workers. Ten thousand whites rampaged through Atlanta, beating every black person they could find, resulting in over 25 deaths. In the aftermath of the 1906 violence, Du Bois urged blacks to withdraw their support from the Republican Party, because Republicans Roosevelt and William Howard Taft did not sufficiently support blacks. Most African Americans had been loyal to the Republican Party since the time of Abraham Lincoln. Du Bois endorsed Taft's rival William Jennings Bryan in the 1908 presidential election despite Bryan's acceptance of segregation. Du Bois wrote the essay "A Litany at Atlanta", which asserted that the riot demonstrated that the Atlanta Compromise was a failure. Despite upholding their end of the bargain, blacks had failed to receive legal justice in the South. Historian David Levering Lewis has written that the Compromise no longer held because white patrician planters, who took a paternalistic role, had been replaced by aggressive businessmen who were willing to pit blacks against whites. These two calamities were watershed events for the African American community, marking the ascendancy of Du Bois's vision of equal rights.

### Academic work

In addition to writing editorials, Du Bois continued to produce scholarly work at Atlanta University. In 1909, after five years of effort, he published a biography of abolitionist John Brown.
It contained many insights but also some factual errors. The work was strongly criticized by The Nation, which was owned by Oswald Villard, who was writing his own, competing biography of John Brown. Possibly as a result, Du Bois's work was largely ignored by white scholars. After publishing a piece in Collier's magazine warning of the end of "white supremacy", Du Bois had difficulty getting pieces accepted by major periodicals, although he did continue to publish columns regularly in The Horizon magazine. Du Bois was the first African American invited by the American Historical Association (AHA) to present a paper at their annual conference. He read his paper, Reconstruction and Its Benefits, to an astounded audience at the AHA's December 1909 conference. The paper went against the mainstream historical view, promoted by the Dunning School of scholars at Columbia University, that Reconstruction was a disaster, caused by the ineptitude and sloth of blacks. To the contrary, Du Bois asserted that the brief period of African-American leadership in the South accomplished three important goals: democracy, free public schools, and new social welfare legislation. Du Bois asserted that it was the federal government's failure to manage the Freedmen's Bureau, to distribute land, and to establish an educational system that doomed African-American prospects in the South. When Du Bois submitted the paper for publication a few months later in the American Historical Review, he asked that the word "Negro" be capitalized. The editor, J. Franklin Jameson, refused, and published the paper without the capitalization. The paper was mostly ignored by white historians. Du Bois later developed his paper into his ground-breaking 1935 book, Black Reconstruction, which marshaled extensive references to support his assertions. The AHA did not invite another African-American speaker until 1940.

## NAACP era

In May 1909, Du Bois attended the National Negro Conference in New York. The meeting led to the creation of the National Negro Committee, chaired by Oswald Villard, and dedicated to campaigning for civil rights, equal voting rights, and equal educational opportunities. The following spring, in 1910, at the second National Negro Conference, the attendees created the National Association for the Advancement of Colored People (NAACP). At Du Bois's suggestion, the word "colored", rather than "black", was used to include "dark skinned people everywhere". Dozens of civil rights supporters, black and white, participated in the founding, but most executive officers were white, including Mary Ovington, Charles Edward Russell, William English Walling, and its first president, Moorfield Storey. Inspired by the NAACP's work, Indian social reformer and civil rights activist B. R. Ambedkar contacted Du Bois in the 1940s. In a letter to Du Bois in 1946, he introduced himself as a member of the "Untouchables of India" and "a student of the Negro problem" and expressed his interest in the NAACP's petition to the United Nations. He noted that his group was "thinking of following suit", and requested copies of the proposed statement from Du Bois. In a letter dated July 31, 1946, Du Bois responded by telling Ambedkar he was familiar with his name, and that he had "every sympathy with the Untouchables of India."

### The Crisis

NAACP leaders offered Du Bois the position of Director of Publicity and Research. He accepted the job in the summer of 1910, and moved to New York after resigning from Atlanta University.
His primary duty was editing the NAACP's monthly magazine, which he named The Crisis. The first issue appeared in November 1910, and Du Bois wrote that its aim was to set out "those facts and arguments which show the danger of race prejudice, particularly as manifested today toward colored people". The journal was phenomenally successful, and its circulation would reach 100,000 in 1920. Typical articles in the early editions included polemics against the dishonesty and parochialism of black churches, and discussions on the Afrocentric origins of Egyptian civilization. Du Bois's African-centered view of ancient Egypt was in direct opposition to many Egyptologists of his day, including Flinders Petrie, whom Du Bois had met at a conference. A 1911 Du Bois editorial helped initiate a nationwide push to induce the Federal government to outlaw lynching. Du Bois, employing the sarcasm he frequently used, commented on a lynching in Pennsylvania: "The point is he was black. Blackness must be punished. Blackness is the crime of crimes ... It is therefore necessary, as every white scoundrel in the nation knows, to let slip no opportunity of punishing this crime of crimes. Of course if possible, the pretext should be great and overwhelming – some awful stunning crime, made even more horrible by the reporters' imagination. Failing this, mere murder, arson, barn burning or impudence may do." The Crisis carried Du Bois's editorials supporting the ideals of unionized labor but denouncing the racism of its leaders, who barred blacks from membership. Du Bois also supported the principles of the Socialist Party of America (he held party membership from 1910 to 1912), but he denounced the racism demonstrated by some socialist leaders. Frustrated by Republican president Taft's failure to address widespread lynching, Du Bois endorsed Democratic candidate Woodrow Wilson in the 1912 presidential race, in exchange for Wilson's promise to support black causes. Throughout his writings, Du Bois supported women's rights and women's suffrage, but he found it difficult to publicly endorse the women's right-to-vote movement because leaders of the suffrage movement refused to support his fight against racial injustice. A 1913 Crisis editorial broached the taboo subject of interracial marriage: although Du Bois generally expected persons to marry within their race, he viewed the problem as a women's rights issue, because laws prohibited white men from marrying black women. Du Bois wrote: "[anti-miscegenation] laws leave the colored girls absolutely helpless for the lust of white men. It reduces colored women in the eyes of the law to the position of dogs. As low as the white girl falls, she can compel her seducer to marry her ... We must kill [anti-miscegenation laws] not because we are anxious to marry the white men's sisters, but because we are determined that white men will leave our sisters alone." During 1915–1916, some leaders of the NAACP – disturbed by financial losses at The Crisis, and worried about the inflammatory rhetoric of some of its essays – attempted to oust Du Bois from his editorial position. Du Bois and his supporters prevailed, and he continued in his role as editor. In a 1919 column titled "The True Brownies", he announced the creation of The Brownies' Book, the first magazine published for African-American children and youth, which he founded with Augustus Granville Dill and Jessie Redmon Fauset.

### Historian and author

The 1910s were a productive time for Du Bois.
In 1911, he attended the First Universal Races Congress in London and published his first novel, The Quest of the Silver Fleece. Two years later, Du Bois wrote, produced, and directed a pageant for the stage, The Star of Ethiopia. In 1915, Du Bois published The Negro, a general history of black Africans, and the first of its kind in English. The book rebutted claims of African inferiority, and would come to serve as the basis of much Afrocentric historiography in the 20th century. The Negro predicted unity and solidarity for colored people around the world, and it influenced many who supported the Pan-African movement. In 1915, The Atlantic Monthly carried a Du Bois essay, "The African Roots of the War", which consolidated his ideas on capitalism, imperialism, and race. He argued that the Scramble for Africa was at the root of World War I. He also anticipated later communist doctrine, by suggesting that wealthy capitalists had pacified white workers by giving them just enough wealth to prevent them from revolting, and by threatening them with competition from the lower-cost labor of colored workers.

### Combating racism

Du Bois used his influential NAACP position to oppose a variety of racist incidents. When the silent film The Birth of a Nation premiered in 1915, Du Bois and the NAACP led the fight to ban the movie, because of its racist portrayal of blacks as brutish and lustful. The fight was not successful, and possibly contributed to the film's fame, but the publicity drew many new supporters to the NAACP. The private sector was not the only source of racism: under President Wilson, the situation of African Americans in government jobs worsened. Many federal agencies adopted whites-only employment practices, the Army excluded blacks from officer ranks, and the immigration service prohibited the immigration of persons of African ancestry. Du Bois wrote an editorial in 1914 deploring the dismissal of blacks from federal posts, and he supported William Monroe Trotter when Trotter brusquely confronted Wilson about the President's failure to fulfill his campaign promise of justice for blacks. The Crisis continued to wage a campaign against lynching. In 1915, it published an article with a year-by-year tabulation of 2,732 lynchings from 1884 to 1914. The April 1916 edition covered the group lynching of six African Americans in Lee County, Georgia. Later in 1916, the "Waco Horror" article covered the lynching of Jesse Washington, a mentally impaired 17-year-old African American. The article broke new ground by utilizing undercover reporting to expose the conduct of local whites in Waco, Texas. The early 20th century was the era of the Great Migration of blacks from the Southern United States to the Northeast, Midwest, and West. Du Bois wrote an editorial supporting the Great Migration, because he felt it would help blacks escape Southern racism, find economic opportunities, and assimilate into American society. Also in the 1910s, the American eugenics movement was in its infancy, and many leading eugenicists were openly racist, defining Blacks as "a lower race". Du Bois opposed this view as an unscientific aberration, but still maintained the basic principle of eugenics: that different persons have different inborn characteristics that make them more or less suited for specific kinds of employment, and that encouraging the most talented members of all races to procreate would better the "stocks" of humanity.
### World War I

As the United States prepared to enter World War I in 1917, Du Bois's colleague in the NAACP, Joel Spingarn, established a camp to train African Americans to serve as officers in the United States Armed Forces. The camp was controversial, because some whites felt that blacks were not qualified to be officers, and some blacks felt that African Americans should not participate in what they considered a white man's war. Du Bois supported Spingarn's training camp, but was disappointed when the Army forcibly retired one of its few black officers, Charles Young, on a pretense of ill health. The Army agreed to create 1,000 officer positions for blacks, but insisted that 250 come from enlisted men, conditioned to taking orders from whites, rather than from independent-minded blacks who came from the camp. Over 700,000 blacks enlisted on the first day of the draft, but were subject to discriminatory conditions which prompted vocal protests from Du Bois. After the East St. Louis riots occurred in the summer of 1917, Du Bois traveled to St. Louis to report on them. Between 40 and 250 African Americans were massacred by whites, primarily due to resentment caused by St. Louis industry hiring blacks to replace striking white workers. Du Bois's reporting resulted in an article, "The Massacre of East St. Louis", published in the September issue of The Crisis, which contained photographs and interviews detailing the violence. Historian David Levering Lewis concluded that Du Bois distorted some of the facts in order to increase the propaganda value of the article. To publicly demonstrate the black community's outrage over the riots, Du Bois organized the Silent Parade, a march of around 9,000 African Americans down New York City's Fifth Avenue, the first parade of its kind in New York, and the second instance of blacks publicly demonstrating for civil rights. The Houston riot of 1917 disturbed Du Bois and was a major setback to efforts to permit African Americans to become military officers. The riot began after Houston police arrested and beat two black soldiers; in response, over 100 black soldiers took to the streets of Houston and killed 16 whites. A military court martial was held; 19 of the soldiers were hanged, and 67 others were imprisoned. In spite of the Houston riot, Du Bois and others successfully pressed the Army to accept the officers trained at Spingarn's camp, resulting in over 600 black officers joining the Army in October 1917. Federal officials, concerned about subversive viewpoints expressed by NAACP leaders, attempted to frighten the NAACP by threatening it with investigations. Du Bois was not intimidated, and in 1918 he predicted that World War I would lead to an overthrow of the European colonial system and to the "liberation" of colored people worldwide – in China, in India, and especially in the Americas. NAACP chairman Joel Spingarn was enthusiastic about the war, and he persuaded Du Bois to consider an officer's commission in the Army, contingent on Du Bois writing an editorial repudiating his anti-war stance. Du Bois accepted this bargain and wrote the pro-war "Close Ranks" editorial in June 1918; soon thereafter, he received a commission in the Army. Many black leaders, who wanted to leverage the war to gain civil rights for African Americans, criticized Du Bois for his sudden reversal. Southern officers in Du Bois's unit objected to his presence, and his commission was withdrawn.
### After the war When the war ended, Du Bois traveled to Europe in 1919 to attend the first Pan-African Congress and to interview African-American soldiers for a planned book on their experiences in World War I. He was trailed by U.S. agents who were searching for evidence of treasonous activities. Du Bois discovered that the vast majority of black American soldiers were relegated to menial labor as stevedores and laborers. Some units were armed, and one in particular, the 92nd Division (the Buffalo soldiers), engaged in combat. Du Bois discovered widespread racism in the Army, and concluded that the Army command discouraged African Americans from joining the Army, discredited the accomplishments of black soldiers, and promoted bigotry. Du Bois returned from Europe more determined than ever to gain equal rights for African Americans. Black soldiers returning from overseas felt a new sense of power and worth, and were representative of an emerging attitude referred to as the New Negro. In the editorial "Returning Soldiers" he wrote: "But, by the God of Heaven, we are cowards and jackasses if, now that the war is over, we do not marshal every ounce of our brain and brawn to fight a sterner, longer, more unbending battle against the forces of hell in our own land." Many blacks moved to northern cities in search of work, and some northern white workers resented the competition. This labor strife was one of the causes of the Red Summer of 1919, a horrific series of race riots across America, in which over 300 African Americans were killed in over 30 cities. Du Bois documented the atrocities in the pages of The Crisis, culminating in the December publication of a gruesome photograph of a lynching that occurred during a race riot in Omaha, Nebraska. The most egregious episode during the Red Summer was a vicious attack on blacks in Elaine, Arkansas, in which nearly 200 blacks were murdered. Reports coming out of the South blamed the blacks, alleging that they were conspiring to take over the government. Infuriated with the distortions, Du Bois published a letter in the New York World, claiming that the only crime the black sharecroppers had committed was daring to challenge their white landlords by hiring an attorney to investigate contractual irregularities. Over 60 of the surviving blacks were arrested and tried for conspiracy, in the case known as Moore v. Dempsey. Du Bois rallied blacks across America to raise funds for the legal defense, which, six years later, resulted in a Supreme Court victory authored by Oliver Wendell Holmes. Although the victory had little immediate impact on justice for blacks in the South, it marked the first time the Federal government used the 14th amendment guarantee of due process to prevent states from shielding mob violence. In 1920, Du Bois published Darkwater: Voices From Within the Veil, the first of his three autobiographies. The "veil" was that which covered colored people around the world. In the book, he hoped to lift the veil and show white readers what life was like behind the veil, and how it distorted the viewpoints of those looking through it – in both directions. The book contained Du Bois's feminist essay, "The Damnation of Women", which was a tribute to the dignity and worth of women, particularly black women. Concerned that textbooks used by African-American children ignored black history and culture, Du Bois created a monthly children's magazine, The Brownies' Book. 
Initially published in 1920, it was aimed at black children, whom Du Bois called "the children of the sun".

### Pan-Africanism and Marcus Garvey

Du Bois traveled to Europe in 1921 to attend the second Pan-African Congress. The assembled black leaders from around the world issued the London Resolutions and established a Pan-African Association headquarters in Paris. Under Du Bois's guidance, the resolutions insisted on racial equality, and that Africa be ruled by Africans (not, as in the 1919 congress, with the consent of Africans). Du Bois restated the resolutions of the congress in his Manifesto To the League of Nations, which implored the newly formed League of Nations to address labor issues and to appoint Africans to key posts. The League took little action on the requests. Jamaican activist Marcus Garvey, promoter of the Back-to-Africa movement and founder of the Universal Negro Improvement Association (UNIA), denounced Du Bois's efforts to achieve equality through integration, and instead endorsed racial separatism. Du Bois initially supported the concept of Garvey's Black Star Line, a shipping company that was intended to facilitate commerce within the African diaspora. But Du Bois later became concerned that Garvey was threatening the NAACP's efforts, and came to describe him as fraudulent and reckless. Responding to Garvey's slogan "Africa for the Africans", Du Bois said that he supported that concept, but denounced Garvey's intention that Africa be ruled by African Americans. Du Bois wrote a series of articles in The Crisis between 1922 and 1924 attacking Garvey's movement, calling him the "most dangerous enemy of the Negro race in America and the world." Du Bois and Garvey never made a serious attempt to collaborate, and their dispute was partly rooted in the desire of their respective organizations (NAACP and UNIA) to capture a larger portion of the available philanthropic funding. Du Bois decried Harvard's decision to ban blacks from its dormitories in 1921 as an instance of a broad effort in the U.S. to renew "the Anglo-Saxon cult; the worship of the Nordic totem, the disfranchisement of Negro, Jew, Irishman, Italian, Hungarian, Asiatic and South Sea Islander – the world rule of Nordic white through brute force." When Du Bois sailed for Europe in 1923 for the third Pan-African Congress, the circulation of The Crisis had declined to 60,000 from its World War I high of 100,000, but it remained the preeminent periodical of the civil rights movement. President Calvin Coolidge designated Du Bois an "Envoy Extraordinary" to Liberia and – after the third congress concluded – Du Bois rode a German freighter from the Canary Islands to Africa, visiting Liberia, Sierra Leone, and Senegal.

### Harlem Renaissance

Du Bois frequently promoted African-American artistic creativity in his writings, and when the Harlem Renaissance emerged in the mid-1920s, his article "A Negro Art Renaissance" celebrated the end of the long hiatus of blacks from creative endeavors. His enthusiasm for the Harlem Renaissance waned as he came to believe that many whites visited Harlem for voyeurism, not for genuine appreciation of black art. Du Bois insisted that artists recognize their moral responsibilities, writing that "a black artist is first of all a black artist." He was also concerned that black artists were not using their art to promote black causes, saying "I do not care a damn for any art that is not used for propaganda." By the end of 1926, he stopped employing The Crisis to support the arts.
### Debate with Lothrop Stoddard

In 1929, the Chicago Forum Council organized a debate, billed as "One of the greatest debates ever held", between Du Bois and Lothrop Stoddard, a member of the Ku Klux Klan and a proponent of eugenics and so-called scientific racism. The debate was held in Chicago, and Du Bois argued the affirmative to the question "Shall the Negro be encouraged to seek cultural equality? Has the Negro the same intellectual possibilities as other races?" Du Bois knew that the racists would be unintentionally funny onstage; as he wrote to Moore, Senator James Thomas Heflin "would be a scream" in a debate. Du Bois let the overconfident and bombastic Stoddard walk into a comic moment, which Stoddard then made even funnier by not getting the joke. The moment was captured in the Chicago Defender's front-page headlines: "DuBois Shatters Stoddard's Cultural Theories in Debate; Thousands Jam Hall ... Cheered As He Proves Race Equality" and "5,000 Cheer W.E.B. DuBois, Laugh at Lothrop Stoddard." Ian Frazier of the New Yorker writes that the comic potential of Stoddard's bankrupt ideas was left untapped until Stanley Kubrick's Dr. Strangelove.

### Socialism

When Du Bois became editor of The Crisis magazine in 1911, he joined the Socialist Party of America on the advice of NAACP founders Mary Ovington, William English Walling and Charles Edward Russell. However, he supported the Democrat Woodrow Wilson in the 1912 presidential campaign, a breach of party rules, and was forced to resign from the Socialist Party. In 1913, his support for Wilson was shaken by reports of racial segregation in government hiring. Du Bois remained "convinced that socialism was an excellent way of life, but I thought it might be reached by various methods." Nine years after the 1917 Russian Revolution, Du Bois extended a trip to Europe to include a visit to the Soviet Union. He was struck by the poverty and disorganization he encountered there, yet was impressed by the intense labors of the officials and by the recognition given to workers. Although Du Bois was not yet familiar with the communist theories of Karl Marx or Vladimir Lenin, he concluded that socialism might be a better path towards racial equality than capitalism. Although Du Bois generally endorsed socialist principles, his politics were strictly pragmatic: in the 1929 New York City mayoral election, he endorsed the Democrat Jimmy Walker rather than the socialist Norman Thomas, believing that Walker could do more immediate good for blacks, even though Thomas's platform was more consistent with Du Bois's views. Throughout the 1920s, Du Bois and the NAACP shifted support back and forth between the Republican Party and the Democratic Party, induced by promises from the candidates to fight lynchings, improve working conditions, or support voting rights in the South; invariably, the candidates failed to deliver on their promises. A rivalry emerged in 1931 between the NAACP and the Communist Party, when the communists responded quickly and effectively to support the Scottsboro Boys, nine African-American youths arrested in Alabama in 1931 on charges of rape. Du Bois and the NAACP felt that the case would not be beneficial to their cause, so they chose to let the Communist Party organize the defense efforts.
Du Bois was impressed with the vast amount of publicity and funds which the communists devoted to the partially successful defense effort, and he came to suspect that the communists were attempting to present their party to African Americans as a better solution than the NAACP. Responding to criticisms of the NAACP from the Communist Party, Du Bois wrote articles condemning the party, claiming that it unfairly attacked the NAACP, and that it failed to fully appreciate racism in the United States. In their turn, the communist leaders accused him of being a "class enemy", and claimed that the NAACP leadership was an isolated elite, disconnected from the working-class blacks they ostensibly fought for.

## Return to Atlanta

Du Bois did not have a good working relationship with Walter Francis White, president of the NAACP since 1931. That conflict, combined with the financial stresses of the Great Depression, precipitated a power struggle over The Crisis. Du Bois, concerned that his position as editor would be eliminated, resigned his job at The Crisis and accepted an academic position at Atlanta University in early 1933. The rift with the NAACP grew larger in 1934 when Du Bois reversed his stance on segregation, stating that "separate but equal" was an acceptable goal for African Americans. The NAACP leadership was stunned, and asked Du Bois to retract his statement, but he refused, and the dispute led to Du Bois's resignation from the NAACP. After arriving at his new professorship in Atlanta, Du Bois wrote a series of articles generally supportive of Marxism. He was not a strong proponent of labor unions or the Communist Party, but he felt that Marx's scientific explanation of society and the economy was useful for explaining the situation of African Americans in the United States. Marx's atheism also struck a chord with Du Bois, who routinely criticized black churches for dulling blacks' sensitivity to racism. In his 1933 writings, Du Bois embraced socialism, but asserted that "[c]olored labor has no common ground with white labor", a controversial position that was rooted in Du Bois's dislike of American labor unions, which had systematically excluded blacks for decades. Du Bois did not support the Communist Party in the U.S. and did not vote for its candidate in the 1932 presidential election, despite the presence of an African American on its ticket.

### Black Reconstruction in America

Back in the world of academia, Du Bois was able to resume his study of Reconstruction, the topic of the 1910 paper that he presented to the American Historical Association. In 1935, he published his magnum opus, Black Reconstruction in America. The book presented the thesis, in the words of the historian David Levering Lewis, that "black people, suddenly admitted to citizenship in an environment of feral hostility, displayed admirable volition and intelligence as well as the indolence and ignorance inherent in three centuries of bondage." Du Bois documented how black people were central figures in the American Civil War and Reconstruction, and also showed how they made alliances with white politicians. He provided evidence that the coalition governments established public education in the South, and many needed social service programs. The book also demonstrated the ways in which black emancipation – the crux of Reconstruction – promoted a radical restructuring of United States society, as well as how and why the country failed to continue support for civil rights for blacks in the aftermath of Reconstruction.
The book's thesis ran counter to the orthodox interpretation of Reconstruction maintained by white historians, and the book was virtually ignored by mainstream historians until the 1960s. Thereafter, however, it ignited a "revisionist" trend in the historiography of Reconstruction, which emphasized black people's search for freedom and the era's radical policy changes. By the 21st century, Black Reconstruction was widely perceived as "the foundational text of revisionist African American historiography." In the final chapter of the book, "XIV. The Propaganda of History", Du Bois recounts his efforts to write an article for the Encyclopædia Britannica on the "history of the American Negro". After the editors had cut all reference to Reconstruction, he insisted that the following note appear in the entry: "White historians have ascribed the faults and failures of Reconstruction to Negro ignorance and corruption. But the Negro insists that it was Negro loyalty and the Negro vote alone that restored the South to the Union; established the new democracy, both for white and black, and instituted the public schools." The editors refused, and so Du Bois withdrew his article.

### Projected encyclopedia

In 1932, Du Bois was selected by several philanthropies, including the Phelps-Stokes Fund, the Carnegie Corporation, and the General Education Board, to be the managing editor for a proposed Encyclopedia of the Negro, a work which Du Bois had been contemplating for 30 years. After several years of planning and organizing, the philanthropies canceled the project in 1938 because some board members believed that Du Bois was too biased to produce an objective encyclopedia.

### Trip around the world

Du Bois took a trip around the world in 1936, which included visits to Germany, China, and Japan. While in Germany, Du Bois remarked that he was treated with warmth and respect. After his return to the United States, he expressed his ambivalence about the Nazi regime. He admired how the Nazis had improved the German economy, but he was horrified by their treatment of the Jewish people, which he described as "an attack on civilization, comparable only to such horrors as the Spanish Inquisition and the African slave trade". Following the 1905 Japanese victory in the Russo-Japanese War, Du Bois became impressed by the growing strength of Imperial Japan. He came to view the ascendant Japanese Empire as an antidote to Western imperialism, arguing for over three decades after the war that its rise represented a chance to break the monopoly that white nations had on international affairs. A representative of Japan's "Negro Propaganda Operations" traveled to the United States during the 1920s and 1930s, meeting with Du Bois and giving him a positive impression of Imperial Japan's racial policies. In 1936, the Japanese ambassador arranged a trip to Japan for Du Bois and a small group of academics, who visited China, Japan, and Manchukuo (Manchuria). Du Bois viewed Japanese colonialism in Manchuria as benevolent; he wrote that "colonial enterprise by a colored nation need not imply the caste, exploitation and subjection which it has always implied in the case of white Europe." He also believed that it was natural for Chinese and Japanese to quarrel with each other as "relatives", and that the segregated schools in Manchuria were established because the natives spoke only Chinese.
While disturbed by the eventual Japanese alliance with Nazi Germany, Du Bois also argued that Japan was only compelled to enter the pact because of the hostility of the United States and United Kingdom, and he viewed American apprehensions over Japanese expansion in Asia as racially motivated both before and after the attack on Pearl Harbor. He was similarly disturbed by the possibility that Chinese culture might be extinguished under Japanese rule, but argued that Western imperialism was a greater existential concern.

### World War II

Du Bois opposed U.S. intervention in World War II, particularly in the Pacific War, because he believed that China and Japan were emerging from the clutches of white imperialists. He felt that the European Allies waging war against Japan was an opportunity for whites to reestablish their influence in Asia. He was deeply disappointed by the U.S. government's plan for African Americans in the armed forces: Blacks were limited to 5.8% of the force, and there were to be no African-American combat units – virtually the same restrictions as in World War I. With blacks threatening to shift their support to President Franklin D. Roosevelt's Republican opponent Wendell Willkie in the 1940 election, Roosevelt appointed a few blacks to leadership posts in the military. Dusk of Dawn, Du Bois's second autobiography, was published in 1940. The title refers to his hope that African Americans were passing out of the darkness of racism into an era of greater equality. The work is part autobiography, part history, and part sociological treatise. Du Bois described the book as "the autobiography of a concept of race ... elucidated and magnified and doubtless distorted in the thoughts and deeds which were mine ... Thus for all time my life is significant for all lives of men." In 1943, at age 75, Du Bois was abruptly fired from his position at Atlanta University by college president Rufus Clement. Many scholars expressed outrage, prompting Atlanta University to provide Du Bois with a lifelong pension and the title of professor emeritus. Arthur Spingarn remarked that Du Bois spent his time in Atlanta "battering his life out against ignorance, bigotry, intolerance and slothfulness, projecting ideas nobody but he understands, and raising hopes for change which may be comprehended in a hundred years." Turning down job offers from Fisk and Howard, Du Bois re-joined the NAACP as director of the Department of Special Research. Surprising many NAACP leaders, Du Bois jumped into the job with vigor and determination. During his 10-year hiatus, the NAACP's income had increased fourfold, and its membership had soared to 325,000.

## Later life

### United Nations

Du Bois was a member of the three-person delegation from the NAACP that attended the 1945 conference in San Francisco at which the United Nations was established. The NAACP delegation wanted the United Nations to endorse racial equality and to bring an end to the colonial era. To push the United Nations in that direction, Du Bois drafted a proposal that pronounced "[t]he colonial system of government ... is undemocratic, socially dangerous and a main cause of wars". The NAACP proposal received support from China, India, and the Soviet Union, but it was virtually ignored by the other major powers, and the NAACP proposals were not included in the final United Nations Charter.
After the United Nations conference, Du Bois published Color and Democracy: Colonies and Peace, a book that attacked colonial empires and, in the words of a most sympathetic reviewer, "contains enough dynamite to blow up the whole vicious system whereby we have comforted our white souls and lined the pockets of generations of free-booting capitalists." In late 1945, Du Bois attended the fifth, and final, Pan-African Congress, in Manchester, England. The congress was the most productive of the five congresses, and there Du Bois met Kwame Nkrumah, the future first president of Ghana, who would later invite him to Africa. Du Bois helped to submit petitions to the UN concerning discrimination against African Americans, the most noteworthy of which was the NAACP's "An Appeal to the World: A Statement on the Denial of Human Rights to Minorities in the Case of Citizens of Negro Descent in the United States of America and an Appeal to the United Nations for Redress". This advocacy laid the foundation for the later report and petition called "We Charge Genocide", submitted in 1951 by the Civil Rights Congress. "We Charge Genocide" accuses the U.S. of systematically sanctioning murders and inflicting harm on African Americans, and therefore of committing genocide.

### Cold War

When the Cold War commenced in the mid-1940s, the NAACP distanced itself from communists, lest its funding or reputation suffer. The NAACP redoubled its efforts in 1947 after Life magazine published a piece by Arthur M. Schlesinger Jr. claiming that the NAACP was heavily influenced by communists. Ignoring the NAACP's desires, Du Bois continued to fraternize with communist sympathizers such as Paul Robeson, Howard Fast and Shirley Graham (his future second wife). Du Bois wrote "I am not a communist ... On the other hand, I ... believe ... that Karl Marx ... put his finger squarely upon our difficulties ...". In 1946, Du Bois wrote articles giving his assessment of the Soviet Union; he did not embrace communism and he criticized its dictatorship. However, he felt that capitalism was responsible for poverty and racism, and felt that socialism was an alternative that might ameliorate those problems. The Soviets explicitly rejected racial distinctions and class distinctions, leading Du Bois to conclude that the USSR was the "most hopeful country on earth". Du Bois's association with prominent communists made him a liability for the NAACP, especially since the Federal Bureau of Investigation was starting to aggressively investigate communist sympathizers; so – by mutual agreement – he resigned from the NAACP for the second time in late 1948. After departing the NAACP, Du Bois started writing regularly for the leftist weekly newspaper the National Guardian, a relationship that would endure until 1961.

### Peace activism

Du Bois was a lifelong anti-war activist, but his efforts became more pronounced after World War II. In 1949, Du Bois spoke at the Scientific and Cultural Conference for World Peace in New York: "I tell you, people of America, the dark world is on the move! It wants and will have Freedom, Autonomy and Equality. It will not be diverted in these fundamental rights by dialectical splitting of political hairs ... Whites may, if they will, arm themselves for suicide. But the vast majority of the world's peoples will march on over them to freedom!"
In the spring of 1949, he spoke at the World Congress of the Partisans of Peace in Paris, saying to the large crowd: "Leading this new colonial imperialism comes my own native land built by my father's toil and blood, the United States. The United States is a great nation; rich by grace of God and prosperous by the hard work of its humblest citizens ... Drunk with power we are leading the world to hell in a new colonialism with the same old human slavery which once ruined us; and to a third World War which will ruin the world." Du Bois affiliated himself with a leftist organization, the National Council of Arts, Sciences and Professions, and he traveled to Moscow as its representative to speak at the All-Soviet Peace Conference in late 1949.

### The FBI, McCarthyism, and trial

During the 1950s, the U.S. government's anti-communist McCarthyism campaign targeted Du Bois because of his socialist leanings. Historian Manning Marable characterizes the government's treatment of Du Bois as "ruthless repression" and a "political assassination". The FBI began to compile a file on Du Bois in 1942, investigating him for possible subversive activities. The original investigation appears to have ended in 1943 because the FBI was unable to discover sufficient evidence against Du Bois, but the FBI resumed its investigation in 1949, suspecting he was among a group of "Concealed Communists". The most aggressive government attack against Du Bois occurred in the early 1950s, as a consequence of his opposition to nuclear weapons. In 1950 he became chair of the newly created Peace Information Center (PIC), which worked to publicize the Stockholm Peace Appeal in the United States. The primary purpose of the appeal was to gather signatures on a petition, asking governments around the world to ban all nuclear weapons. The U.S. Justice Department alleged that the PIC was acting as an agent of a foreign state, and thus required the PIC to register with the federal government under the Foreign Agents Registration Act. Du Bois and other PIC leaders refused, and they were indicted for failure to register. After the indictment, some of Du Bois's associates distanced themselves from him, and the NAACP refused to issue a statement of support; but many labor figures and leftists – including Langston Hughes – supported Du Bois. He was finally tried in 1951 and was represented by civil rights attorney Vito Marcantonio. The case was dismissed before the jury rendered a verdict, as soon as the defense attorney told the judge that "Dr. Albert Einstein has offered to appear as character witness for Dr. Du Bois". Du Bois's memoir of the trial is In Battle for Peace. Even though Du Bois was not convicted, the government confiscated his passport and withheld it for eight years.

### Communism

Du Bois was bitterly disappointed that many of his colleagues – particularly the NAACP – did not support him during his 1951 PIC trial, whereas working-class whites and blacks supported him enthusiastically. After the trial, Du Bois lived in Manhattan, writing and speaking, and continuing to associate primarily with leftist acquaintances. His primary concern was world peace, and he railed against military actions such as the Korean War, which he viewed as efforts by imperialist whites to maintain colored people in a submissive state. In 1950, at the age of 82, Du Bois ran for U.S. Senator from New York on the American Labor Party ticket and received about 200,000 votes, or 4% of the statewide total.
He continued to believe that capitalism was the primary culprit responsible for the subjugation of colored people around the world, and although he recognized the faults of the Soviet Union, he continued to uphold communism as a possible solution to racial problems. In the words of biographer David Lewis, Du Bois did not endorse communism for its own sake, but did so because "the enemies of his enemies were his friends". The same ambiguity characterized his opinions of Joseph Stalin: in 1940 he wrote disdainfully of the "Tyrant Stalin", but when Stalin died in 1953, Du Bois wrote a eulogy characterizing Stalin as "simple, calm, and courageous", and lauding him for being the "first [to] set Russia on the road to conquer race prejudice and make one nation out of its 140 groups without destroying their individuality". The U.S. government prevented Du Bois from attending the 1955 Bandung Conference in Indonesia. The conference was the culmination of 40 years of Du Bois's dreams – a meeting of 29 nations from Africa and Asia, many recently independent, representing most of the world's colored peoples. The conference celebrated those nations' independence as they began to assert their power as non-aligned nations during the Cold War. In 1958, Du Bois regained his passport, and with his second wife, Shirley Graham Du Bois, he traveled around the world. They visited the Soviet Union and China, to much celebration. Du Bois later wrote approvingly of the conditions in both countries. Du Bois became incensed in 1961 when the U.S. Supreme Court upheld the 1950 McCarran Act, a key piece of McCarthyism legislation which required communists to register with the government. To demonstrate his outrage, he joined the Communist Party in October 1961, at the age of 93. Around that time, he wrote: "I believe in communism. I mean by communism, a planned way of life in the production of wealth and work designed for building a state whose object is the highest welfare of its people and not merely the profit of a part." He asked Herbert Aptheker, a communist and a historian of African-American history, to be his literary executor.

### Death in Africa

Nkrumah invited Du Bois to the Dominion of Ghana to participate in its independence celebration in 1957, but he was unable to attend because the U.S. government had confiscated his passport in 1951. By 1960 – the "Year of Africa" – Du Bois had recovered his passport, and was able to cross the Atlantic and celebrate the creation of the Republic of Ghana. Du Bois returned to Africa in late 1960 to attend the inauguration of Nnamdi Azikiwe as the first African governor of Nigeria. While visiting Ghana in 1960, Du Bois spoke with its president about the creation of a new encyclopedia of the African diaspora, the Encyclopedia Africana. In early 1961, Ghana notified Du Bois that they had appropriated funds to support the encyclopedia project, and they invited him to travel to Ghana and manage the project there. In October 1961, at the age of 93, Du Bois and his wife traveled to Ghana to take up residence and commence work on the encyclopedia. In early 1963, the United States refused to renew his passport, so he made the symbolic gesture of becoming a citizen of Ghana. Although it is sometimes stated that Du Bois renounced his U.S. citizenship at that time, and although he stated his intention to do so, he never actually did. His health declined during the two years he was in Ghana; he died on August 27, 1963, in the capital, Accra, at the age of 95.
The following day, at the March on Washington, speaker Roy Wilkins asked the hundreds of thousands of marchers to honor Du Bois with a moment of silence. The Civil Rights Act of 1964, embodying many of the reforms Du Bois had campaigned for during his entire life, was enacted almost a year after his death. Du Bois was given a state funeral on August 29–30, 1963, at Nkrumah's request, and was buried near the western wall of Christiansborg Castle (now Osu Castle), then the seat of government in Accra. In 1985, another state ceremony honored Du Bois: his body, along with the ashes of his wife Shirley Graham Du Bois, who had died in 1977, was re-interred at their former home in Accra, which was dedicated as the W. E. B. Du Bois Memorial Centre for Pan African Culture in his memory. Du Bois's first wife Nina, their son Burghardt, and their daughter Yolande, who died in 1961, were buried in the cemetery of Great Barrington, Massachusetts, his hometown.

## Personal life

Du Bois was organized and disciplined: his lifelong regimen was to rise at 7:15, work until 5:00, eat dinner and read a newspaper until 7:00, then read or socialize until he was in bed, invariably before 10:00. He was a meticulous planner, and frequently mapped out his schedules and goals on large pieces of graph paper. Many acquaintances found him to be distant and aloof, and he insisted on being addressed as "Dr. Du Bois". Although he was not gregarious, he formed several close friendships with associates such as Charles Young, Paul Laurence Dunbar, John Hope, Mary White Ovington, and Albert Einstein. His closest friend was Joel Spingarn – a white man – but Du Bois never accepted Spingarn's offer to be on a first-name basis. Du Bois was something of a dandy – he dressed formally, carried a walking stick, and walked with an air of confidence and dignity. He was relatively short, standing at 5 feet 5.5 inches (166 cm), and always maintained a well-groomed mustache and goatee. He enjoyed singing and playing tennis. Du Bois married Nina Gomer (b. about 1870, m. 1896, d. 1950), with whom he had two children. Their son Burghardt died as an infant before their second child, daughter Yolande, was born. Yolande attended Fisk University and became a high school teacher in Baltimore, Maryland. Her father encouraged her marriage to Countee Cullen, a nationally known poet of the Harlem Renaissance. They divorced within two years. She married again and had a daughter, Du Bois's only grandchild. That marriage also ended in divorce. As a widower, Du Bois married Shirley Graham (m. 1951, d. 1977), an author, playwright, composer, and activist. She brought her son David Graham to the marriage. David grew close to Du Bois and took his stepfather's name; he also worked for African-American causes. The historian David Levering Lewis wrote that Du Bois engaged in several extramarital relationships.

### Religion

Although Du Bois attended a New England Congregational church as a child, he abandoned organized religion while at Fisk College. As an adult, Du Bois described himself as agnostic or a freethinker, but at least one biographer concluded that Du Bois was virtually an atheist. However, another analyst of Du Bois's writings concluded that he had a religious voice, albeit radically different from other African-American religious voices of his era. Du Bois was credited with inaugurating a 20th-century spirituality to which Ralph Ellison, Zora Neale Hurston, and James Baldwin also belong. When asked to lead public prayers, Du Bois would refuse.
In his autobiography, Du Bois wrote:

> When I became head of a department at Atlanta, the engagement was held up because again I balked at leading in prayer ... I flatly refused again to join any church or sign any church creed. ... I think the greatest gift of the Soviet Union to modern civilization was the dethronement of the clergy and the refusal to let religion be taught in the public schools.

Du Bois accused American churches of being the most discriminatory of all institutions. He also provocatively linked African American Christianity to indigenous African religions. He did occasionally acknowledge the beneficial role that religion played in African American life – as the "basic rock" which served as an anchor for African American communities – but in general disparaged African American churches and clergy because he felt they did not support the goals of racial equality and hindered activists' efforts. Although Du Bois was not personally religious, he infused his writings with religious symbology. Many contemporaries viewed him as a prophet. His 1904 prose poem, "Credo", was written in the style of a religious creed and widely read by the African-American community. Moreover, Du Bois, both in his own fiction and in stories published in The Crisis, often drew analogies between the lynchings of African Americans and the crucifixion of Christ. Between 1920 and 1940, Du Bois shifted from overt black messiah symbolism to more subtle messianic language.

### Voting

In 1889, Du Bois became eligible to vote at the age of 21. During his life he followed the philosophy of voting for third parties if the Democratic and Republican parties were unsatisfactory, or voting for the lesser of two evils if a third option was not available. Du Bois endorsed the Democratic nominee William Jennings Bryan in the 1908 presidential election. In the 1912 presidential election, Du Bois supported Woodrow Wilson, the Democratic nominee, as he believed Wilson was a "liberal Southerner". He had wanted to support Theodore Roosevelt and the Progressive Party, but the Progressives ignored issues facing black people. He later regretted his decision, as he came to the conclusion that Wilson was opposed to racial equality. During the 1916 presidential election he supported Charles Evans Hughes, the Republican nominee, as he believed that Wilson was the greater evil. During the 1920 presidential election he supported Warren G. Harding, the Republican nominee, as Harding promised to end the United States occupation of Haiti. During the 1924 presidential election he supported Robert M. La Follette, the Progressive nominee, although he believed that La Follette could not win. During the 1928 presidential election he believed that both Herbert Hoover and Al Smith insulted black voters, and he instead supported Norman Thomas, the Socialist nominee. From 1932 to 1944, Du Bois supported Franklin D. Roosevelt, the Democratic nominee, as Roosevelt's attitude towards workers was more realistic. During the 1948 presidential election he supported Henry A. Wallace, the Progressive nominee, and supported the Progressives' nominee, Vincent Hallinan, again in 1952. During the 1956 presidential election Du Bois stated that he would not vote. He criticized the foreign, taxation, and crime policies of the Eisenhower administration, and criticized Adlai Stevenson II for promising to maintain those policies. However, he could not vote third party due to the lack of ballot access for the Socialist Party.
## Honors and legacy

- The NAACP awarded the Spingarn Medal to Du Bois in 1920.
- In 1958, Du Bois was inducted into the Fisk University chapter of Phi Beta Kappa when he returned to campus to receive an honorary degree.
- In 1959, Du Bois was awarded the International Lenin Peace Prize by the USSR.
- In 1969, the W. E. B. Du Bois Institute for African and African-American Research, now part of the Hutchins Center for African and African American Research, was established at Harvard University.
- The site of the house where Du Bois grew up in Great Barrington, Massachusetts, was designated a National Historic Landmark in 1976.
- In 1992, the United States Postal Service honored Du Bois with his portrait on a postage stamp. A second stamp of face value 32¢ was issued on February 3, 1998, as part of the Celebrate the Century stamp sheet series.
- In 1994, the main library at the University of Massachusetts Amherst was named for Du Bois. He transferred his papers to the university via his literary executor, historian Herbert Aptheker.
- In 2000, Harvard's Hutchins Center for African & African American Research began awarding the W. E. B. Du Bois Medal, which is considered Harvard's highest honor in the field of African and African American studies.
- A dormitory was named for Du Bois at the University of Pennsylvania, where he conducted field research for his sociological study The Philadelphia Negro.
- A dormitory is named for Du Bois at Hampton University.
- Africana: The Encyclopedia of the African and African-American Experience was inspired by and dedicated to Du Bois by its editors Kwame Anthony Appiah and Henry Louis Gates Jr.
- Humboldt University in Berlin hosts a series of lectures named in his honor.
- Scholar Molefi Kete Asante listed Du Bois in his 2002 list of the 100 Greatest African Americans.
- In 2005, Du Bois was honored with a medallion in The Extra Mile, Washington DC's memorial to important American volunteers.
- The highest career award given by the American Sociological Association, its Career of Distinguished Scholarship Award, was renamed the W. E. B. Du Bois Career of Distinguished Scholarship Award in 2006.
- Du Bois was appointed Honorary Emeritus Professor at the University of Pennsylvania in 2012.
- A bust was commissioned from Ayokunle Odeleye to honor Du Bois, and dedicated at Clark Atlanta University on the anniversary of his birth, February 23, 2013.
- In 2015, the Du Bois Orchestra at Harvard was founded.
- In March 2018, Du Bois was awarded the Grand Prix de la Mémoire of the 2017 Grand Prix of Literary Associations.
- Du Bois was featured as a character in the 2020 Netflix miniseries Self Made, portrayed by Cornelius Smith Jr.
## Selected works

### Non-fiction books

- The Study of the Negro Problems (1898)
- The Philadelphia Negro (1899)
- The Negro in Business (1899)
- The Souls of Black Folk (1903)
- "The Talented Tenth", second chapter of The Negro Problem, a collection of articles by African Americans (September 1903)
- Voice of the Negro II (September 1905)
- John Brown: A Biography (1909)
- Efforts for Social Betterment among Negro Americans (1909)
- Atlanta University's Studies of the Negro Problem (1897–1910)
- The Negro (1915)
- The Gift of Black Folk: The Negroes in the Making of America (1924)
- Africa, Its Geography, People and Products (1930)
- Africa: Its Place in Modern History (1930)
- Black Reconstruction in America (1935)
- What the Negro Has Done for the United States and Texas (1936)
- Black Folk, Then and Now (1939)
- Color and Democracy: Colonies and Peace (1945)
- The Encyclopedia of the Negro (1946)
- The World and Africa (1946)
- The World and Africa, an Inquiry into the Part Which Africa Has Played in World History (1947)
- Peace Is Dangerous (1951)
- I Take My Stand for Peace (1951)
- In Battle for Peace (1952)
- Africa in Battle Against Colonialism, Racialism, Imperialism (1960)

### Articles

- "An Essay Toward a History of the Black Man in the Great War." The Crisis, vol. 18, no. 2, June 1919, pp. 63–87.

### Autobiographies

- Darkwater: Voices From Within the Veil (1920)
- Dusk of Dawn: An Essay Toward an Autobiography of a Race Concept (1940)
- The Autobiography of W. E. Burghardt Du Bois (1968)

### Novels

- The Quest of the Silver Fleece (1911)
- Dark Princess: A Romance (1928)
- The Black Flame Trilogy:
  - The Ordeal of Mansart (1957)
  - Mansart Builds a School (1959)
  - Worlds of Color (1961)

### Archives of The Crisis

Du Bois edited The Crisis from 1910 to 1933, and it contains many of his important polemics.

- Archives of The Crisis at the University of Tulsa: Modernist Journals Collection
- Archives of The Crisis at Brown University
- Issues of The Crisis at Google Books

### Recordings

- Socialism and the American Negro (1960)
- W. E. B. Du Bois: A Recorded Autobiography, Interview with Moses Asch (1961)

### Dissertations

- The Suppression of the African Slave Trade to the United States of America: 1638–1870 (Ph.D. dissertation), Harvard Historical Studies, Longmans, Green, and Co. (1896)

### Speeches

## Archival material

The W. E. B. Du Bois Library at the University of Massachusetts Amherst contains Du Bois's archive, consisting of 294 boxes and 89 microfilm reels; 99,625 items have been digitized.

## See also

- African American founding fathers of the United States
- Fisk University protest
- Grand Prix of Literary Associations
- List of civil rights leaders
- List of peace activists
184,830
Song thrush
1,153,224,637
Species of bird
[ "Articles containing video clips", "Birds described in 1831", "Birds of Europe", "Taxa named by Christian Ludwig Brehm", "Thrushes", "Turdus" ]
The song thrush (Turdus philomelos) is a thrush that breeds across the West Palearctic. It has brown upper-parts and black-spotted cream or buff underparts and has three recognised subspecies. Its distinctive song, which has repeated musical phrases, has frequently been referred to in poetry. The song thrush breeds in forests, gardens and parks, and is partially migratory with many birds wintering in southern Europe, North Africa and the Middle East; it has also been introduced into New Zealand and Australia. Although it is not threatened globally, there have been serious population declines in parts of Europe, possibly due to changes in farming practices. The song thrush builds a neat mud-lined cup nest in a bush or tree and lays four to five dark-spotted blue eggs. It is omnivorous and has the habit of using a favourite stone as an "anvil" on which to break open the shells of snails. Like other perching birds (passerines), it is affected by external and internal parasites and is vulnerable to predation by cats and birds of prey. ## Taxonomy and systematics ### Name The song thrush was described by German ornithologist Christian Ludwig Brehm in 1831, and still bears its original scientific name, Turdus philomelos. The generic name, Turdus, is the Latin for thrush, and the specific epithet refers to a character in Greek mythology, Philomela, who had her tongue cut out, but was changed into a singing bird. Her name is derived from the Ancient Greek Φιλο philo- (loving), and μέλος melos (song). The dialect names throstle and mavis both mean thrush, being related to the German drossel and French mauvis respectively. Throstle dates back to at least the fourteenth century and was used by Chaucer in the Parliament of Fowls. Mavis is derived via Middle English mavys and Old French mauvis from Middle Breton milhuyt meaning "thrush." Mavis (Μαβής) can also mean "purple" in Greek. ### Classification A molecular study indicated that the song thrush's closest relatives are the similarly plumaged mistle thrush (T. viscivorus) and Chinese thrush (T. mupinensis); these three species are early offshoots from the Eurasian lineage of Turdus thrushes after they spread north from Africa. They are less closely related to other European thrush species such as the blackbird (T. merula) which are descended from ancestors that had colonised the Canary islands from Africa and subsequently reached Europe from there. The song thrush has three subspecies, with the nominate subspecies, T. p. philomelos, covering the majority of the species' range. T. p. hebridensis, described by British ornithologist William Eagle Clarke in 1913, is a mainly sedentary (non-migratory) form found in the Outer Hebrides and Isle of Skye in Scotland. It is the darkest subspecies, with a dark brown back, greyish rump, pale buff background colour to the underparts and grey-tinged flanks. T. p. clarkei, described by German zoologist Ernst Hartert in 1909, and named for William Eagle Clarke, breeds in the rest of Great Britain and Ireland and on mainland Europe in France, Belgium, the Netherlands and possibly somewhat further east. It has brown upperparts which are warmer in tone than those of the nominate form, an olive-tinged rump and rich yellow background colour to the underparts. It is a partial migrant with some birds wintering in southern France and Iberia. This form intergrades with the nominate subspecies in central Europe, and with T. p. 
hebridensis in the Inner Hebrides and western Scotland, and in these areas birds show intermediate characteristics. Additional subspecies, such as T. p. nataliae of Siberia, proposed by the Russian Sergei Buturlin in 1929, are not widely accepted. ## Description The song thrush (as represented by the nominate subspecies T. p. philomelos) is 20 to 23.5 centimetres (7+3⁄4 to 9+1⁄4 inches) in length and weighs 50 to 107 grams (1+3⁄4 to 3+3⁄4 ounces). The sexes are similar, with plain brown backs and neatly black-spotted cream or yellow-buff underparts, becoming paler on the belly. The underwing is warm yellow, the bill is yellowish and the legs and feet are pink. The upperparts of this species become colder in tone from west to east across the breeding range from Sweden to Siberia. The juvenile resembles the adult, but has buff or orange streaks on the back and wing coverts. The most similar European thrush species is the redwing (T. iliacus), but that bird has a strong white supercilium, red flanks, and shows a red underwing in flight. The mistle thrush (T. viscivorus) is much larger and has white tail corners, and the Chinese thrush (T. mupinensis), although much more similar in plumage, has black face markings and does not overlap in range. The song thrush has a short, sharp tsip call, replaced on migration by a thin high seep, similar to the redwing's call but shorter. The alarm call is a chook-chook becoming shorter and more strident with increasing danger. The male's song, given from trees, rooftops or other elevated perches, is a loud clear run of musical phrases, repeated two to four times, filip filip filip codidio codidio quitquiquit tittit tittit tereret tereret tereret, and interspersed with grating notes and mimicry. It is given mainly from February to June by the Outer Hebridean race, but from November to July by the more widespread subspecies. For its weight, this species has one of the loudest bird calls. An individual male may have a repertoire of more than 100 phrases, many copied from its parents and neighbouring birds. Mimicry may include the imitation of man-made items like telephones, and the song thrush will also repeat the calls of captive birds, including exotics such as the white-faced whistling duck. ## Distribution and habitat The song thrush breeds in most of Europe (although not in the greater part of Iberia, lowland Italy or southern Greece), and across Ukraine and Russia almost to Lake Baikal. It reaches to 75°N in Norway, but only to about 60°N in Siberia. Birds from Scandinavia, Eastern Europe and Russia winter around the Mediterranean, North Africa and the Middle East, but only some of the birds in the milder west of the breeding range leave their breeding areas. In Great Britain song thrushes are commonly found where there are trees and bushes. Such areas include parks, gardens, coniferous and deciduous woodland and hedgerows. Birds of the nominate subspecies were introduced to New Zealand and Australia by acclimatisation societies between 1860 and 1880, apparently for purely sentimental reasons. In New Zealand, where it was introduced on both the main islands, the song thrush quickly established itself and spread to surrounding islands such as the Kermadecs, Chatham and Auckland Islands. Although it is common and widespread in New Zealand, in Australia only a small population survives around Melbourne. 
In New Zealand, there appears to be a limited detrimental effect on some invertebrates due to predation by introduced bird species, and the song thrush also damages commercial fruit crops in that country. As an introduced species it has no legal protection in New Zealand, and can be killed at any time. The song thrush typically nests in forest with good undergrowth and nearby more open areas, and in western Europe also uses gardens and parks. It breeds up to the tree-line, reaching 2,200 metres (7,200 feet) in Switzerland. The island subspecies T. p. hebridensis breeds in more open country, including heathland, and in the east of the song thrush's Eurasian range, the nominate subspecies is restricted to the edge of the dense conifer forests. In intensively farmed areas where agricultural practices appear to have made cropped land unsuitable, gardens are an important breeding habitat. In one English study, only 3.5% of territories were found in farmland, whereas gardens held 71.5% of the territories, despite that habitat making up only 2% of the total area. The remaining nests were in woodlands (1% of total area). The winter habitat is similar to that used for breeding, except that high ground and other exposed localities are avoided; however, the island subspecies T. p. hebridensis will frequent the seashore in winter. ## Behaviour and ecology The song thrush is not usually gregarious, although several birds may roost together in winter or be loosely associated in suitable feeding habitats, perhaps with other thrushes such as the blackbird, fieldfare, redwing and dark-throated thrush. Unlike the more nomadic fieldfare and redwing, the song thrush tends to return regularly to the same wintering areas. This is a monogamous territorial species, and in areas where it is fully migratory, the male re-establishes its breeding territory and starts singing as soon as he returns. In the milder areas where some birds stay year round, the resident male remains in his breeding territory, singing intermittently, but the female may establish a separate individual wintering range until pair formation begins in the early spring. During migration, the song thrush travels mainly at night with a strong and direct flight action. It flies in loose flocks which cross the sea on a broad front rather than concentrating at short crossings (as occurs in the migration of large soaring birds), and calls frequently to maintain contact. Migration may start as early as late August in the most easterly and northerly parts of the range, but the majority of birds, with shorter distances to cover, head south from September to mid-December. However, hard weather may force further movement. Return migration varies between mid-February around the Mediterranean to May in northern Sweden and central Siberia. Vagrants have been recorded in Greenland, various Atlantic islands, and West Africa. ### Breeding and survival The female song thrush builds a neat cup-shaped nest lined with mud and dry grass in a bush, tree or creeper, or, in the case of the Hebridean subspecies, on the ground. She lays four or five bright glossy blue eggs which are lightly spotted with black or purple; they are typically 2.7 cm × 2.0 cm (1+1⁄8 in × 3⁄4 in) size and weigh 6.0 g (3⁄16 oz), of which 6% is shell. The female incubates the eggs alone for 10–17 days, and after hatching a similar time elapses until the young fledge. Two or three broods in a year is normal, although only one may be raised in the north of the range. 
On average, 54.6% of British juveniles survive the first year of life, and the adult annual survival rate is 62.2%. The typical lifespan is three years, but the maximum recorded age is 10 years 8 months. The song thrush is occasionally a host of parasitic cuckoos, such as the common cuckoo, but this is very rare because the thrush recognizes the cuckoo's non-mimetic eggs. However, the song thrush does not demonstrate the same aggression toward the adult cuckoo that is shown by the blackbird. The introduced birds in New Zealand, where the cuckoo does not occur, have, over the past 130 years, retained the ability to recognize and reject non-mimetic eggs. Adult birds may be killed by cats, little owls and sparrowhawks, and eggs and nestlings are taken by magpies, jays, and, where present, grey squirrels. As with other passerine birds, parasites are common, and include endoparasites, such as the nematode Splendidofilaria (Avifilaria) mavis whose specific name mavis derives from this thrush. A Russian study of blood parasites showed that all the fieldfares, redwings and song thrushes sampled carried haematozoans, particularly Haemoproteus and Trypanosoma. Ixodes ticks are also common, and can carry pathogens, including tick-borne encephalitis in forested areas of central and eastern Europe and Russia, and, more widely, Borrelia bacteria. Some species of Borrelia cause Lyme disease, and ground-feeding birds like the song thrush may act as a reservoir for the disease. ### Feeding The song thrush is omnivorous, eating a wide range of invertebrates, especially earthworms and snails, as well as soft fruit and berries. Like its relative, the blackbird, the song thrush finds animal prey by sight, has a run-and-stop hunting technique on open ground, and will rummage through leaf-litter seeking potential food items. Land snails are an especially important food item when drought or hard weather makes it hard to find other food. The thrush often uses a favorite stone as an "anvil" on which to break the shell of the snail before extracting the soft body and invariably wiping it on the ground before consumption. Young birds initially flick objects and attempt to play with them until they learn to use anvils as tools to smash snails. The nestlings are mainly fed on animal food such as worms, slugs, snails and insect larvae. The grove snail (Cepaea nemoralis) is regularly eaten by the song thrush, and its polymorphic shell patterns have been suggested as evolutionary responses to reduce predation; however, song thrushes may not be the only selective force involved. ## Status and conservation The song thrush has an extensive range, estimated at 10 million square kilometres (4 million square miles), and a large population, with an estimated 40 to 71 million individuals in Europe alone. In the western Palaearctic, there is evidence of population decline, but at a level below the threshold required for global conservation concern (i.e., a reduction in numbers of more than 30% in ten years or three generations) and the IUCN Red List categorises this species as of "Least Concern". In Great Britain and the Netherlands, there has been a more than 50% decline in population, and the song thrush is included in regional Red Lists. The decreases are greatest in farmlands (73% since the mid-1970s) and believed to be due to changes in agricultural practices in recent decades. 
The precise reasons for the decline are not known but may be related to the loss of hedgerows, a move to sowing crops in autumn rather than spring, and possibly the increased use of pesticides. These changes may have reduced the availability of food and of nest sites. In gardens, the use of poison bait to control slugs and snails may pose a threat. In urban areas, some thrushes are killed while using the hard surface of roads to smash snails.

## Relationship with humans

The song thrush's characteristic song, with melodic phrases repeated twice or more, is described by the nineteenth-century British poet Robert Browning in his poem Home Thoughts, from Abroad:

> That's the wise thrush; he sings each song twice over,
> Lest you should think he never could recapture
> The first fine careless rapture!

The song also inspired the nineteenth-century British writer Thomas Hardy, who spoke in The Darkling Thrush of the bird's "full-hearted evensong / Of joy illimited", but the twentieth-century British poet Ted Hughes in Thrushes concentrated on its hunting prowess: "Nothing but bounce and stab / and a ravening second". The Welsh poet Edward Thomas wrote 15 poems concerning blackbirds or thrushes, including The Thrush:

> I hear the thrush, and I see
> Him alone at the end of the lane
> Near the bare poplar's tip,
> Singing continuously.

In The Tables Turned, the Romantic poet William Wordsworth references the song thrush, writing

> Hark, how blithe the throstle sings
> And he is no mean preacher
> Come forth into the light of things
> Let Nature be your teacher

The song thrush is the emblem of West Bromwich Albion Football Club, chosen because the public house in which the team used to change kept a pet thrush in a cage. It also gave rise to Albion's early nickname, The Throstles. A few English pubs and hotels share the name Throstles Nest.

### As food

Thrushes have been trapped for food from as far back as 12,000 years ago and an early reference is found in the Odyssey: "Then, as doves or thrushes beating their spread wings against some snare rigged up in thickets—flying in for a cosy nest but a grisly bed receives them." Hunting continues today around the Mediterranean, but is not believed to be a major factor in this species' decline in parts of its range. In Spain, this species is normally caught as it migrates through the country, often using birdlime, which, although banned by the European Union, is still tolerated and permitted in the Valencian Community. In 2003 and 2004 the EU tried, but failed, to stop this practice in the Valencian region.

### As pets

Up to at least the nineteenth century the song thrush was kept as a cage bird because of its melodious voice. As with hunting, there is little evidence that the taking of wild birds for aviculture has had a significant effect on wild populations.
52,840
Tom Swift
1,165,400,963
Fictional literary character
[ "American novels adapted into television shows", "Book series introduced in 1910", "Characters in American novels of the 20th century", "Characters in American novels of the 21st century", "Children's science fiction novels", "Fictional scientists", "Juvenile series", "Literary characters introduced in 1910", "Novel series", "Novels set in New York (state)", "Series of children's books", "Stratemeyer Syndicate", "Tom Swift", "Works published under a pseudonym" ]
Tom Swift is the main character of six series of American juvenile science fiction and adventure novels that emphasize science, invention, and technology. Inaugurated in 1910, the sequence of series comprises more than 100 volumes. The first Tom Swift – later, Tom Swift Sr. – was created by Edward Stratemeyer, the founder of the Stratemeyer Syndicate, a book packaging firm. Tom's adventures have been written by various ghostwriters, beginning with Howard Garis. Most of the books are credited to the collective pseudonym "Victor Appleton". The 33 volumes of the second series use the pseudonym Victor Appleton II for the author. For this series, and some later ones, the main character is "Tom Swift Jr." New titles have been published again from 2019 after a gap of about ten years, roughly the time that has passed before every resumption. Most of the series emphasized Tom's inventions. The books generally describe the effects of science and technology as wholly beneficial, and the role of the inventor in society as admirable and heroic. Translated into many languages, the books have sold more than 30 million copies worldwide. Tom Swift has also been the subject of a board game and several attempted adaptations into other media. Tom Swift has been cited as an inspiration by various scientists and inventors, including aircraft designer Kelly Johnson. ## Inventions In his various incarnations, Tom Swift, usually a teenager, is inventive and science-minded, "Swift by name and swift by nature." Tom is portrayed as a natural genius. In the earlier series, he is said to have had little formal education, the character modeled originally after such inventors as Henry Ford, Thomas Edison, aviation pioneer Glenn Curtiss and Alberto Santos-Dumont. For most of the six series, each book concerns Tom's latest invention, and its role either in solving a problem or mystery, or in assisting Tom in feats of exploration or rescue. Often Tom must protect his new invention from villains "intent on stealing Tom's thunder or preventing his success," but Tom is always successful in the end. Many of Tom Swift's fictional inventions describe actual technological developments or predate technologies now considered commonplace. Tom Swift Among the Diamond Makers (1911) was based on Charles Parsons's attempts to synthesize diamonds using electric current. Tom Swift and His Photo Telephone was published in 1912. Sending photographs by telephone was not fully developed until 1925. Tom Swift and His Wizard Camera (1912) features a portable movie camera, not invented until 1923. Tom Swift and His Electric Locomotive (1922) was published two years before the Central Railroad of New Jersey began using the first diesel electric locomotive. The house on wheels that Tom invents for 1929's Tom Swift and His House on Wheels pre-dated the first house trailer by a year. Tom Swift and His Diving Seacopter (1952) features a flying submarine similar to one planned by the United States Department of Defense four years later in 1956. Other inventions of Tom's have not happened, such as the device for silencing airplane engines that he invents in Tom Swift and His Magnetic Silencer (1941). ## Authorship The character of Tom Swift was conceived about 1910 by Edward Stratemeyer, founder of the Stratemeyer Syndicate, a book-packaging business, although the name "Tom Swift" was first used in 1903 by Stratemeyer in Shorthand Tom the Reporter; Or, the Exploits of a Bright Boy. 
Stratemeyer invented the series to capitalize on the market for children's science adventures. The Syndicate's authors created the Tom Swift stories by first preparing an outline with the plot elements, followed by drafting and editing the detailed manuscript. The books were published using the house pseudonym "Victor Appleton". Howard Garis wrote most of the volumes of the original series; Stratemeyer's daughter, Harriet Stratemeyer Adams, wrote the last three volumes. The first Tom Swift series ended in 1941.

In 1954, Harriet Adams created the Tom Swift, Jr. series, which was published using the pseudonym "Victor Appleton II" as author. The main character Tom Swift, Junior, was described as the son of the original Tom Swift. Most of the stories were outlined and plotted by Adams. The texts were written by various writers, among them William Dougherty, John Almquist, Richard Sklar, James Duncan Lawrence, Tom Mulvey and Richard McKenna. The Tom Swift, Jr., series ended in 1971.

A third series was begun in 1981 and lasted until 1984. The rights to the Tom Swift character, along with the Stratemeyer Syndicate, were sold in 1984 to the publisher Simon & Schuster, which hired the New York City book packaging business Mega-Books to produce further series. Simon & Schuster has published three more Tom Swift series: one from 1991 to 1993; Tom Swift, Young Inventor from 2006 to 2007; and Tom Swift Inventors' Academy from 2019 to the present, with eight volumes published as of Depth Perception (March 2022).

## Series

The longest-running series of books to feature Tom Swift is the first, which consists of 40 volumes. Tom Swift Jr., the son of the original Tom, is the protagonist of the 33 volumes of the Tom Swift Jr. Adventures, the 11 volumes of the third Tom Swift series, the 13 volumes of the fourth, and a half-dozen more in the most recent series, Tom Swift, Young Inventor, for a total of 103 volumes across all the series. In addition to publication in the United States, Tom Swift books have been published extensively in England, and translated into Norwegian, French, Icelandic, and Finnish.

### Original series (1910–1941)

In the original series, Tom Swift lives in fictional Shopton, New York. He is the son of Barton Swift, the founder of the Swift Construction Company. Tom's mother is deceased, but the housekeeper, Mrs. Baggert, functions as a surrogate mother. Tom usually shares his adventures with close friend Ned Newton, who eventually becomes the Swift Construction Company's financial manager. For most of the series, Tom dates Mary Nestor. It has been suggested that his eventual marriage to Mary led to the series' demise, as young boys found a married man harder to identify with than a young, single one; however, after the 1929 marriage the series continued for 12 more years and eight further volumes. Regularly appearing characters include Wakefield Damon, an older man whose dialogue is characterized by frequent use of such whimsical expressions as "Bless my brakeshoes!" and "Bless my vest buttons!"

The original Tom Swift has been claimed to represent the early 20th-century conception of inventors. Tom has no formal education after high school; according to critic Robert Von der Osten, Tom's ability to invent is presented as "somehow innate". Tom is not a theorist but a tinkerer and, later, an experimenter who, with his research team, finds practical applications for others' research; Tom does not so much methodically develop and perfect inventions as find them by trial and error.
Tom's inventions are not at first innovative. In the first two books of the series, he fixes a motorcycle and a boat, and in the third book he develops an airship, but only with the help of a balloonist. Tom is also at times unsure of himself, asking his elders for help; as Von der Osten puts it, "the early Tom Swift is more dependent on his father and other adults at first and is much more hesitant in his actions. When his airship bangs into a tower, Tom is uncharacteristically nonplussed and needs support." However, as the series progresses, Tom's inventions "show an increasingly independent genius as he develops devices, such as an electric rifle and a photo telephone, further removed from the scientific norm". Some of Tom's inventions are improvements of then-current technologies, while other inventions were not in development at the time the books were published, but have since been developed. ### Second series (1954–1971) In this series, presented as an extension and continuation of the first, the Tom Swift of the original series is now the CEO of Swift Enterprises, a four-mile-square enclosed facility where inventions are conceived and manufactured. Tom's son, Tom Swift Jr., is now the primary inventive genius of the family. Stratemeyer Syndicate employee Andrew Svenson described the new series as based "on scientific fact and probability, whereas the old Toms were in the main adventure stories mixed with pseudo-science". Three PhDs in science were hired as consultants to the series to ensure scientific accuracy. The younger Tom does not tinker with motorcycles; his inventions and adventures extend from deep within the Earth (in Tom Swift and His Atomic Earth Blaster [1954]) to the bottom of the ocean (in Tom Swift and His Diving Seacopter [1956]) to the Moon (in Tom Swift in the Race to the Moon [1958]) and, eventually, the outer Solar System (in Tom Swift and His Cosmotron Express [1970]). Later volumes of the series increasingly emphasized the extraterrestrial "space friends", as they are termed throughout the series. The beings appear as early as the first volume of the series, Tom Swift and His Flying Lab (1954). The Tom Swift Jr., Adventures were less commercially successful than the first series, selling 6 million copies total, compared with sales of 14 million copies for the first series. In contrast to the earlier series, many of Tom Jr.'s inventions are designed to operate in space, and his "genius is unequivocally original as he constructs nuclear-powered flying labs, establishes outposts in space, or designs ways to sail in space on cosmic rays". Unlike his father, Tom Jr. is not just a tinkerer; he relies on scientific and mathematical theories, and, according to critic Robert Von der Osten, "science [in the books] is, in fact, understood to be a set of theories that are developed based on experimentation and scientific discussion. Rather than being opposed to technological advances, such a theoretical understanding becomes essential to invention." Tom Swift Jr.'s Cold War-era adventures and inventions are often motivated by patriotism, as Tom repeatedly defeats the evil agents of the fictional nations "Kranjovia" and "Brungaria", the latter a place that critic Francis Molson describes as "a vaguely Eastern European country, which is strongly opposed to the Swifts and the U.S. Hence, the Swifts' opposition to and competition with the Brungarians is both personal and patriotic." 
### Third series (1981–1984)

The third Tom Swift series differs from the first two in that the setting is primarily outer space, although Swift Enterprises (located now in New Mexico) is occasionally mentioned. Tom Swift explores the universe in the starship Exedra, using a faster-than-light drive he has reverse-engineered from an alien space probe. He is aided by Benjamin Franklin Walking Eagle, a Native American who is Tom's co-pilot, best friend, and an expert computer technician, and Anita Thorwald, a former rival of Tom's who now works with him as a technician and whose right leg has been rebuilt to contain a miniature computer.

This series maintains only an occasional and vague continuity with the two previous series. Tom is called the son of "the great Tom Swift" and said to be "already an important and active contributor to the family business, the giant multimillion-dollar scientific-industrial complex known as Swift Enterprises". However, as critic Francis Molson indicates, it is not explained whether this Tom Swift is the grandson of the famous Tom Swift of the first series or still the Tom Swift Jr. of the second.

The Tom Swift of this third series is less of an inventor than his predecessors, and his inventions are rarely the main feature of the plot. Still, according to Molson, "Tom the inventor is not ignored. Perhaps the most impressive of his inventions and the one essential to the series as a whole is the robot he designs and builds, Aristotle, which becomes a winning and likeable character in its own right." The books are slower-paced than the Tom Swift Jr. adventures of the second series, and include realistic, colloquial dialogue. Each volume begins where the last one ended, and the technology is plausible and accurate.

### Fourth series (1991–1993)

The fourth series featuring Tom Swift (again a "Jr.") is set mostly on Earth (with occasional voyages to the Moon); Swift Enterprises is now located in California. In the first book, The Black Dragon, it is mentioned that Tom is the son of Tom Swift Sr. and Mary Nestor. The books deal with what Richard Pyle describes as "modern and futuristic concepts" and, as in the third series, feature an ethnically diverse cast of characters. Like the Tom Swift Jr. series, the series portrays Tom as a scientist as well as an inventor whose inventions depend on a knowledge of theory. The series differs from previous versions of the character, however, in that Tom's inventive genius is portrayed as problematic and sometimes dangerous. As Robert Von der Osten argues, Tom's inventions for this series often have unexpected and negative repercussions:

> a device to create a miniature black hole which casts him into an alternative universe; a device that trains muscles but also distorts the mind of the user; and a genetic process which, combined with the effect of his black hole, results in a terrifying devolution. Genius here begins to recapitulate earlier myths of the mad scientist whose technological and scientific ambitions are so out of harmony with nature and contemporary science that the results are usually unfortunate.

The series features more violence than its predecessors; in The Negative Zone, Tom blows up a motel room to escape the authorities. A derivative series featuring both Tom Swift and the Hardy Boys, A Hardy Boys & Tom Swift Ultra Thriller, was published from 1992 to 1993; only two volumes were released, both dealing with science-fiction topics (time travel and aliens landing on Earth).
### Fifth series (2006–2007) The fifth series, Tom Swift, Young Inventor, returns Tom Swift to Shopton, New York, with Tom as the son of Tom Swift and Mary Nestor, the names of characters of the original Tom Swift series. The series features inventions that are close to current technology "rather than ultra-futuristic". In several of the books, Tom's antagonist is The Road Back (TRB), an anti-technology terrorist organization. Tom's personal nemesis is Andy Foger, teenage son of his father's former business partner who now owns a competing (and ethically dubious) high-technology company. ### Sixth series (2019) A sixth series, Tom Swift Inventors' Academy, published by Simon and Schuster, debuted in July 2019 with \#1 The Drone Pursuit and \#2 The Sonic Breach. A total of eight books were published, concluding with \#8 Depth Perception in March 2022. ## Other media Parker Brothers produced a Tom Swift board game in 1966, although it was never widely distributed, and the character has appeared in one television show. Various Tom Swift radio programs, television series, and movies were planned and even written, but were either never produced or not released. ### Film and television #### Cancelled films As early as 1914, Edward Stratemeyer proposed making a Tom Swift movie, but no such movie was made. A Tom Swift radio series was proposed in 1946. Two scripts were written, but, for unknown reasons, the series was never produced. Twentieth Century Fox planned a Tom Swift feature movie in 1968, to be directed by Gene Kelly. A script was written and approved, and filming was to have begun during 1969. However, the project was canceled owing to the poor reception of the movies Doctor Dolittle and Star!; a \$500,000 airship that had been built as a prop was rumored to have been sold to a midwest amusement park. Yet another movie was planned in 1974, but, again, was cancelled. #### Television Scripts were written for a proposed television series involving both Tom Swift Jr. and his father, the hero of the original book series. A television pilot show for a series to be called The Adventures of Tom Swift was filmed in 1958, featuring Gary Vinson. However, legal problems prevented the pilot's distribution, and it was never broadcast; no copies of the pilot are known to exist, though the pilot script is available. In 1977, Glen A. Larson wrote an unproduced television pilot show entitled "TS, I Love You: The Further Adventures of Tom Swift". This series was to be combined with a Nancy Drew series, a Hardy Boys series, and a Dana Girls series. Nancy Drew and the Hardy Boys were eventually combined into a one-hour program The Hardy Boys/Nancy Drew Mysteries with alternating episodes. A Tom Swift media project finally came to fruition in 1983 when Willie Aames appeared as Tom Swift along with Lori Loughlin as Linda Craig in a television special, The Tom Swift and Linda Craig Mystery Hour, which was broadcast on July 3. It was a ratings failure. In 2007, digital studio Worldwide Biggies acquired movie rights to Tom Swift and announced plans to release a feature film and video game, followed by a television series. As of 2015, these plans had not come to fruition. Tom Swift appeared in the episode "The Celestial Visitor" from the second season of The CW's Nancy Drew with Tian Richards portraying the character as a Black, gay, billionaire inventor. The episode is a backdoor pilot for a spin-off project titled Tom Swift, in development at The CW. 
In August 2021, Tom Swift was ordered straight-to-series and premiered on May 31, 2022, on The CW. In February 2022, Ashleigh Murray joined the cast as Zenzi Fullington. Due to poor ratings, the series was cancelled on June 30 of that year.

## Controversy

Tom Swift and His Electric Rifle (published 1911) depicts Africans as brutish, uncivilized animals, and the white protagonist as their paternal savior.

> In the book, as in America today, the black people are rendered as either passive, simple and childlike, or animalistic and capable of unimaginable violence. They are described in the book at various points as "hideous in their savagery, wearing only the loin cloth, and with their kinky hair stuck full of sticks", and as "wild, savage and ferocious ... like little red apes".

## Cultural influence

The Tom Swift books have been credited with assisting the success of American science fiction and with establishing the edisonade (stories focusing on brilliant scientists and inventors) as a basic cultural myth. Tom Swift's adventures have been popular since the character's inception in 1910: by 1914, 150,000 copies a year were being sold, and a 1929 study found the series to be second in popularity only to the Bible for boys in their early teens. By 2009, Tom Swift books had sold more than 30 million copies worldwide. The success of Tom Swift also paved the way for other Stratemeyer creations, such as The Hardy Boys and Nancy Drew.

The series' writing style, which was sometimes adverb-heavy, suggested a name for a type of adverbial pun promulgated during the 1950s and 1960s, a type of wellerism known as "Tom Swifties". Originally this kind of pun was called a "Tom Swiftly" in reference to the adverbial usage, but over time it has come to be called a "Tom Swifty". Some examples are: "'I lost my crutches,' said Tom lamely"; and "'I'll take the prisoner downstairs', said Tom condescendingly."

Tom Swift's fictional inventions have apparently inspired several actual inventions, among them Lee Felsenstein's "Tom Swift Terminal", which "drove the creation of an early personal computer known as the Sol", and the taser. The name "taser" was originally "TSER", for "Tom Swift Electric Rifle". The invention was named for the central device in the story Tom Swift and His Electric Rifle (1911); according to inventor Jack Cover, "an 'A' was added because we got tired of answering the phone 'TSER'."

A number of scientists, inventors, and science fiction writers have also credited Tom Swift with inspiring them, including Ray Kurzweil, Robert A. Heinlein, and Isaac Asimov. Gone with the Wind author Margaret Mitchell was also known to have read the first series as a child. The Tom Swift Jr. series was also a source of inspiration to many. Scientist and television presenter Bill Nye said the books helped "make me who I am", and inspired him to launch his own young adult series. Microsoft founders Paul Allen and Bill Gates also read the books as children, as did Steve Wozniak, co-founder of the competing company Apple. Wozniak, who cited the series as his inspiration to become a scientist, said the books made him feel "that engineers can save the world from all sorts of conflict and evil".

## See also

- Tom Swifty
- List of Tom Swift books
- Danny Dunn
81,473
Pedro II of Brazil
1,173,824,597
2nd and final Emperor of Brazil (r. 1831–89)
[ "1825 births", "1891 deaths", "19th-century Brazilian people", "Brazilian Roman Catholics", "Brazilian abolitionists", "Brazilian emperors", "Brazilian people of Austrian descent", "Brazilian people of Portuguese descent", "Burials at the Imperial Mausoleum at the Cathedral of Petrópolis", "Commanders Grand Cross of the Order of the Polar Star", "Deaths from pneumonia in France", "Dethroned monarchs", "Exiled royalty", "Extra Knights Companion of the Garter", "Grand Cross of the Legion of Honour", "Grand Crosses of the Order of Saint Stephen of Hungary", "Grand Crosses of the Order of Saint-Charles", "Grand Crosses of the Order of the Star of Romania", "Honorary members of the Saint Petersburg Academy of Sciences", "House of Braganza", "Knights Grand Cross of the Order of the Immaculate Conception of Vila Viçosa", "Knights of Malta", "Knights of the Golden Fleece of Spain", "Knights of the Holy Sepulchre", "Leaders ousted by a coup", "Members of the American Antiquarian Society", "Members of the French Academy of Sciences", "Order of Saint James of the Sword", "Pedro II of Brazil", "People from Rio de Janeiro (city)", "People of the Paraguayan War", "Pretenders to the Brazilian throne", "Princes Imperial of Brazil", "Recipients of the Order of Aviz", "Recipients of the Order of the Medjidie, 1st class", "Recipients of the Order of the Netherlands Lion", "Royal reburials", "Sons of emperors", "Sons of kings", "White Brazilians" ]
Dom Pedro II (2 December 1825 – 5 December 1891), nicknamed the Magnanimous (Portuguese: O Magnânimo), was the second and last monarch of the Empire of Brazil, reigning for over 58 years. He was born in Rio de Janeiro, the seventh child of Emperor Dom Pedro I of Brazil and Empress Dona Maria Leopoldina and thus a member of the Brazilian branch of the House of Braganza (Portuguese: Bragança). His father's abrupt abdication and departure to Europe in 1831 left the five-year-old as emperor and led to a grim and lonely childhood and adolescence, obliged to spend his time studying in preparation for rule. His experiences with court intrigues and political disputes during this period greatly affected his later character; he grew into a man with a strong sense of duty and devotion toward his country and his people, yet increasingly resentful of his role as monarch. Pedro II inherited an empire on the verge of disintegration, but he turned Brazil into an emerging power in the international arena. The nation grew to be distinguished from its Hispanic neighbors on account of its political stability, zealously guarded freedom of speech, respect for civil rights, vibrant economic growth, and form of government—a functional representative parliamentary monarchy. Brazil was also victorious in the Platine War, the Uruguayan War, and the Paraguayan War, as well as prevailing in several other international disputes and domestic tensions. Pedro II steadfastly pushed through the abolition of slavery despite opposition from powerful political and economic interests. A savant in his own right, the Emperor established a reputation as a vigorous sponsor of learning, culture, and the sciences, and he won the respect and admiration of people such as Charles Darwin, Victor Hugo, and Friedrich Nietzsche, and was a friend to Richard Wagner, Louis Pasteur, and Henry Wadsworth Longfellow, among others. There was no desire for a change in the form of government among most Brazilians, but the Emperor was overthrown in a sudden coup d'état that had almost no support outside a clique of military leaders who desired a form of republic headed by a dictator. Pedro II had become weary of emperorship and despaired over the monarchy's future prospects, despite its overwhelming popular support. He did not allow his ouster to be opposed and did not support any attempt to restore the monarchy. He spent the last two years of his life in exile in Europe, living alone on very little money. The reign of Pedro II thus came to an unusual end—he was overthrown while highly regarded by the people and at the pinnacle of his popularity, and some of his accomplishments were soon brought to naught as Brazil slipped into a long period of weak governments, dictatorships, and constitutional and economic crises. The men who had exiled him soon began to see in him a model for the Brazilian Republic. A few decades after his death, his reputation was restored and his remains were returned to Brazil with celebrations nationwide. Historians have regarded the Emperor in an extremely positive light and several have ranked him as the greatest Brazilian. ## Early life ### Birth Pedro was born at 02:30 on 2 December 1825 in the Palace of São Cristóvão, in Rio de Janeiro, Brazil. Named after St. Peter of Alcantara, his name in full was Pedro de Alcântara João Carlos Leopoldo Salvador Bibiano Francisco Xavier de Paula Leocádio Miguel Gabriel Rafael Gonzaga. 
Through his father, Emperor Dom Pedro I, he was a member of the Brazilian branch of the House of Braganza (Portuguese: Bragança)) and was referred to using the honorific Dom (Lord) from birth. He was the grandson of Portuguese King Dom João VI and nephew of Dom Miguel I. His mother was the Archduchess Maria Leopoldina of Austria, daughter of Franz II, the last Holy Roman Emperor. Through his mother, Pedro was a nephew of Napoleon Bonaparte and first cousin of Emperors Napoleon II of France, Franz Joseph I of Austria-Hungary and Don Maximiliano I of Mexico. The only legitimate male child of Pedro I to survive infancy, he was officially recognized as heir apparent to the Brazilian throne with the title Prince Imperial on 6 August 1826. Empress Maria Leopoldina died on 11 December 1826, a few days after a stillbirth, when Pedro was a year old. Two and a half years later, his father married Princess Amélie of Leuchtenberg. Prince Pedro developed an affectionate relationship with her, whom he came to regard as his mother. Pedro I's desire to restore his daughter Maria II to her Portuguese throne, which had been usurped by his brother Miguel I, as well as his declining political position at home led to his abrupt abdication on 7 April 1831. He and Amélie immediately departed for Europe, leaving behind the Prince Imperial, who became Emperor Dom Pedro II. ### Early coronation Upon leaving the country, Emperor Pedro I selected three people to take charge of his son and remaining daughters. The first was José Bonifácio de Andrada, his friend and an influential leader during Brazilian independence, who was named guardian. The second was Mariana de Verna, who had held the post of aia (governess) since the birth of Pedro II. As a child, the then-Prince Imperial called her "Dadama", as he could not pronounce the word dama (Lady) correctly. He regarded her as his surrogate mother and would continue to call her by her nickname well into adulthood out of affection. The third person was Rafael, an Afro-Brazilian veteran of the Cisplatine War. He was an employee in the Palace of São Cristóvão whom Pedro I deeply trusted and asked to look after his son—a charge that he carried out for the rest of his life. Bonifácio was dismissed from his position in December 1833 and replaced by another guardian. Pedro II spent his days studying, with only two hours set aside for amusements. Intelligent, he was able to acquire knowledge with great ease. However, the hours of study were strenuous and the preparation for his role as monarch was demanding. He had few friends of his age and limited contact with his sisters. All that coupled with the sudden loss of his parents gave Pedro II an unhappy and lonely upbringing. The environment in which he was raised turned him into a shy and needy person who saw books as a refuge and retreat from the real world. The possibility of lowering the young Emperor's age of majority, instead of waiting until he turned 18, had been floated since 1835. His elevation to the throne had led to a troublesome period of endless crises. The regency created to rule on his behalf was plagued from the start by disputes between political factions and rebellions across the nation. Those politicians who had risen to power during the 1830s had by now also become familiar with the pitfalls of rule. Historian Roderick J. Barman stated that by 1840, "they had lost all faith in their ability to rule the country on their own. 
They accepted Pedro II as an authority figure whose presence was indispensable for the country's survival". When asked by politicians if he would like to assume full powers, Pedro II shyly accepted. On the following day, 23 July 1840, the General Assembly (the Brazilian Parliament) formally declared the 14-year-old Pedro II of age. He was later acclaimed, crowned, and consecrated on 18 July 1841. ## Consolidation ### Imperial authority established Removal of the factious regency brought stability to the government. Pedro II was seen nationwide as a legitimate source of authority, whose position placed him above partisanship and petty disputes. He was, however, still no more than a boy, and a shy, insecure, and immature one. His nature resulted from his broken childhood, when he experienced abandonment, intrigue, and betrayal. Behind the scenes, a group of high-ranking palace servants and notable politicians led by Aureliano Coutinho (later Viscount of Sepetiba) became known as the "Courtier Faction" as they established influence over the young Emperor. Some were very close to him, such as Mariana de Verna and Steward Paulo Barbosa da Silva. Pedro II was deftly used by the Courtiers against their actual or suspected foes. The Brazilian government secured the hand of Princess Teresa Cristina of the Kingdom of the Two Sicilies. She and Pedro II were married by proxy in Naples on 30 May 1843. Upon seeing her in person, the Emperor was noticeably disappointed. Teresa Cristina was short, a bit overweight, and not considered conventionally pretty. He did little to hide his disillusionment. One observer stated that he turned his back to Teresa Cristina, another depicted him as being so shocked that he needed to sit, and it is possible that both occurred. That evening, Pedro II wept and complained to Mariana de Verna, "They have deceived me, Dadama!" It took several hours to convince him that duty demanded that he proceed. The Nuptial Mass, with the ratification of the vows previously taken by proxy and the conferral of the nuptial blessing, occurred on the following day, 4 September. In late 1845 and early 1846, the Emperor made a tour of Brazil's southern provinces, traveling through São Paulo (of which Paraná was a part at this time), Santa Catarina and Rio Grande do Sul. He was buoyed by the warm and enthusiastic responses he received. By then Pedro II had matured physically and mentally. He grew into a man who, at 1.90 meters (6 ft 3 in) tall with blue eyes and blond hair, was seen as handsome. With growth, his weaknesses faded and his strengths of character came to the fore. He became self-assured and learned to be not only impartial and diligent, but also courteous, patient and personable. Barman said that he kept "his emotions under iron discipline. He was never rude and never lost his temper. He was exceptionally discreet in words and cautious in action." Most importantly, this period saw the end of the Courtier Faction. Pedro II began to fully exercise authority and successfully engineered the end of the courtiers' influence by removing them from his inner circle while avoiding any public disruption. ### Abolition of the slave trade and war Pedro II was faced by three crises between 1848 and 1852. The first test came in confronting the trade in illegally imported slaves. This had been banned in 1826 as part of a treaty with the United Kingdom. 
Trafficking continued unabated, however, and the British government's passage of the Aberdeen Act of 1845 authorized British warships to board Brazilian shipping and seize any found involved in the slave trade. While Brazil grappled with this problem, the Praieira revolt erupted on 6 November 1848. This was a conflict between local political factions within Pernambuco province; it was suppressed by March 1849. The Eusébio de Queirós Law was promulgated on 4 September 1850 which gave the Brazilian government broad authority to combat the illegal slave trade. With this new tool, Brazil moved to eliminate importation of slaves. By 1852 this first crisis was over, and Britain accepted that the trade had been suppressed. The third crisis entailed a conflict with the Argentine Confederation regarding ascendancy over territories adjacent to the Río de la Plata and free navigation of that waterway. Since the 1830s, Argentine dictator Juan Manuel de Rosas had supported rebellions within Uruguay and Brazil. It was only in 1850 that Brazil was able to address the threat posed by Rosas. An alliance was forged between Brazil, Uruguay and disaffected Argentines, leading to the Platine War and the subsequent overthrow of the Argentine ruler in February 1852. Barman said that a "considerable portion of the credit must be ... assigned to the Emperor, whose cool head, tenacity of purpose, and sense of what was feasible proved indispensable." The Empire's successful navigation of these crises considerably enhanced the nation's stability and prestige, and Brazil emerged as a hemispheric power. Internationally, Europeans began to regard the country as embodying familiar liberal ideals, such as freedom of the press and constitutional respect for civil liberties. Its representative parliamentary monarchy also stood in stark contrast to the mix of dictatorships and instability endemic in the other nations of South America during this period. ## Growth ### Pedro II and politics At the beginning of the 1850s, Brazil enjoyed internal stability and economic prosperity. Under the prime ministry of Honório Hermeto Carneiro Leão (then-Viscount and later Marquis of Paraná) the Emperor advanced his own ambitious program: the conciliação (conciliation) and melhoramentos (material developments). Pedro II's reforms aimed to promote less political partisanship, and forward infrastructure and economic development. The nation was being interconnected through railroad, electrical telegraph, and steamship lines, uniting it into a single entity. The general opinion, both at home and abroad, was that these accomplishments had been possible due to Brazil's "governance as a monarchy and the character of Pedro II". Pedro II was neither a British-style figurehead nor an autocrat in the manner of Russian czars. The Emperor exercised power through cooperation with elected politicians, economic interests, and popular support. The active presence of Pedro II on the political scene was an important part of the government's structure, which also included the cabinet, the Chamber of Deputies and the Senate (the latter two formed the General Assembly). He used his participation in directing the course of government as a means of influence. His direction became indispensable, although it never devolved into "one-man rule." In his handling of the political parties, he "needed to maintain a reputation for impartiality, work in accord with the popular mood, and avoid any flagrant imposition of his will on the political scene." 
The Emperor's more notable political successes were achieved primarily because of the non-confrontational and cooperative manner with which he approached both issues and the partisan figures with whom he had to deal. He was remarkably tolerant, seldom taking offense at criticism, opposition or even incompetence. He did not have the constitutional authority to force acceptance of his initiatives without support, and his collaborative approach towards governing kept the nation progressing and enabled the political system to successfully function. The Emperor respected the prerogatives of the legislature, even when they resisted, delayed, or thwarted his goals and appointments. Most politicians appreciated and supported his role. Many had lived through the regency period, when the lack of an emperor who could stand above petty and special interests led to years of strife between political factions. Their experiences in public life had created a conviction that Pedro II was "indispensable to Brazil's continued peace and prosperity." ### Domestic life The marriage between Pedro II and Teresa Cristina started off badly. With maturity, patience and their first child, Afonso, their relationship improved. Later Teresa Cristina gave birth to more children: Isabel, in 1846; Leopoldina, in 1847; and lastly, Pedro Afonso, in 1848. Both boys died when very young, which devastated the Emperor and completely changed his view of the Empire's future. Despite his affection for his daughters, he did not believe that Princess Isabel, although his heir, would have any chance of prospering on the throne. He felt his successor needed to be male for the monarchy to be viable. He increasingly saw the imperial system as being tied so inextricably to himself, that it would not survive him. Isabel and her sister received a remarkable education, although they were given no preparation for governing the nation. Pedro II excluded Isabel from participation in government business and decisions. Sometime around 1850, Pedro II began having discreet affairs with other women. The most famous and enduring of these relationships involved Luísa Margarida Portugal de Barros, Countess of Barral, with whom he formed a romantic and intimate, though not adulterous, friendship after she was appointed governess to the emperor's daughters in November 1856. Throughout his life, the Emperor held onto a hope of finding a soulmate, something he felt cheated of due to the necessity of a marriage of state to a woman for whom he never felt passion. This is but one instance illustrating his dual identity: one who assiduously carried out his duty as emperor and another who considered the imperial office an unrewarding burden and who was happier in the worlds of literature and science. Pedro II was hard-working and his routine was demanding. He usually woke up at 7:00 and did not sleep before 2:00 in the morning. His entire day was devoted to the affairs of state and the meager free time available was spent reading and studying. The Emperor went about his daily routine dressed in a simple black tail coat, trousers, and cravat. For special occasions he would wear court dress, and he only appeared in full regalia with crown, mantle, and scepter twice each year at the opening and closing of the General Assembly. Pedro II held politicians and government officials to the strict standards which he exemplified. The Emperor adopted a strict policy for the selection of civil servants based on morality and merit. 
To set the standard, he lived simply, once having said: "I also understand that useless expenditure is the same as stealing from the Nation". Balls and assemblies of the Court ceased after 1852. He also refused to request or allow his civil list amount of Rs 800:000\$000 per year (U.S. \$405,000 or £90,000 in 1840) to be raised from the declaration of his majority until his dethronement almost fifty years later. ### Patron of arts and sciences "I was born to devote myself to culture and sciences," the Emperor remarked in his private journal during 1862. He had always been eager to learn and found in books a refuge from the demands of his position. Subjects which interested Pedro II were wide-ranging, including anthropology, history, geography, geology, medicine, law, religious studies, philosophy, painting, sculpture, theater, music, chemistry, physics, astronomy, poetry, and technology among others. By the end of his reign, there were three libraries in São Cristóvão palace containing more than 60,000 books. A passion for linguistics prompted him throughout his life to study new languages, and he was able to speak and write not only Portuguese but also Latin, French, German, English, Italian, Spanish, Greek, Arabic, Hebrew, Sanskrit, Chinese, Occitan, and Tupi. He became the first Brazilian photographer when he acquired a daguerreotype camera in March 1840. He set up one laboratory in São Cristóvão devoted to photography and another to chemistry and physics. He also had an astronomical observatory constructed. The Emperor considered education to be of national importance and was himself a concrete example of the value of learning. He remarked: "Were I not an Emperor, I would like to be a teacher. I do not know of a task more noble than to direct young minds and prepare the men of tomorrow." His reign saw the creation of the Brazilian Historic and Geographic Institute to promote research and preservation in the historical, geographical, cultural, and social sciences. The Imperial Academy of Music and National Opera and the Pedro II School were also founded, the latter serving as a model for schools throughout Brazil. The Imperial Academy of the Fine Arts, established by his father, received further strengthening and support. Using his civil list income, Pedro II provided scholarships for Brazilian students to study at universities, art schools, and conservatories of music in Europe. He also financed the creation of the Institute Pasteur, helped underwrite the construction of Wagner's Bayreuth Festspielhaus, as well as subscribing to similar projects. His efforts were recognized both at home and abroad. Charles Darwin said of him: "The Emperor does so much for science, that every scientific man is bound to show him the utmost respect". Pedro II became a member of the Royal Society, the Russian Academy of Sciences, The Royal Academies for Science and the Arts of Belgium and the American Geographical Society. In 1875, he was elected to the French Academy of Sciences, an honor previously granted to only two other heads of state: Peter the Great and Napoleon Bonaparte. He exchanged letters with scientists, philosophers, musicians and other intellectuals. Many of his correspondents became his friends, including Richard Wagner, Louis Pasteur, Louis Agassiz, John Greenleaf Whittier, Michel Eugène Chevreul, Alexander Graham Bell, Henry Wadsworth Longfellow, Arthur de Gobineau, Frédéric Mistral, Alessandro Manzoni, Alexandre Herculano, Camilo Castelo Branco, and James Cooley Fletcher. 
His erudition amazed Friedrich Nietzsche when the two met. Victor Hugo told the Emperor: "Sire, you are a great citizen, you are the grandson of Marcus Aurelius," and Alexandre Herculano called him a "Prince whom the general opinion holds as the foremost of his era because of his gifted mind, and due to the constant application of that gift to the sciences and culture." ### Clash with the British Empire At the end of 1859, Pedro II departed on a trip to provinces north of the capital, visiting Espírito Santo, Bahia, Sergipe, Alagoas, Pernambuco, and Paraíba. He returned in February 1860 after four months. The trip was a huge success, with the Emperor welcomed everywhere with warmth and joy. The first half of the 1860s saw peace and prosperity in Brazil. Civil liberties were maintained. Freedom of speech had existed since Brazil's independence and was strongly defended by Pedro II. He found newspapers from the capital and from the provinces an ideal way to keep track of public opinion and the nation's overall situation. Another means of monitoring the Empire was through direct contacts with his subjects. One opportunity for this was during regular Tuesday and Saturday public audiences, where anyone of any social class, including slaves, could gain admittance and present their petitions and stories. Visits to schools, colleges, prisons, exhibitions, factories, barracks, and other public appearances presented further opportunities to gather first-hand information. This tranquility temporarily disappeared when the British consul in Rio de Janeiro, William Dougal Christie, nearly sparked a war between his nation and Brazil. Christie sent an ultimatum containing bullying demands arising out of two minor incidents at the end of 1861 and beginning of 1862. The first was the sinking of a commercial barque on the coast of Rio Grande do Sul after which its goods were pillaged by local inhabitants. The second was the arrest of drunken British officers who were causing a disturbance in the streets of Rio. The Brazilian government refused to yield, and Christie issued orders for British warships to capture Brazilian merchant vessels as indemnity. Brazil prepared for what was seen as an imminent conflict. Pedro II was the main reason for Brazil's resistance; he rejected any suggestion of yielding. This response came as a surprise to Christie, who changed his tenor and proposed a peaceful settlement through international arbitration. The Brazilian government presented its demands and, upon seeing the British government's position weaken, severed diplomatic ties with Britain in June 1863. ## Paraguayan War ### First Fatherland Volunteer As war with the British Empire threatened, Brazil had to turn its attention to its southern frontiers. Another civil war had begun in Uruguay as its political parties turned against each other. The internal conflict led to the murder of Brazilians and looting of their property in Uruguay. Brazil's government decided to intervene, fearful of giving any impression of weakness in the face of conflict with the British. A Brazilian army invaded Uruguay in December 1864, beginning the brief Uruguayan War, which ended in February 1865. Meanwhile, the dictator of Paraguay, Francisco Solano López, took advantage of the situation to establish his country as a regional power. The Paraguayan Army invaded the Brazilian province of Mato Grosso (the area known after 1977 as the state of Mato Grosso do Sul), triggering the Paraguayan War. 
Four months later, Paraguayan troops invaded Argentine territory as a prelude to an attack on Rio Grande do Sul. Aware of the anarchy in Rio Grande do Sul and the incapacity and incompetence of its military chiefs to resist the Paraguayan army, Pedro II decided to go to the front in person. Upon receiving objections from the cabinet, the General Assembly and the Council of State, Pedro II pronounced: "If they can prevent me from going as an Emperor, they cannot prevent me from abdicating and going as a Fatherland Volunteer"—an allusion to those Brazilians who volunteered to go to war and became known throughout the nation as the "Fatherland Volunteers". The monarch himself was popularly called the "number-one volunteer". Given permission to leave, Pedro II disembarked in Rio Grande do Sul in July and proceeded from there by land. He travelled overland by horse and wagon, sleeping at night in a campaign tent. In September, Pedro II arrived in Uruguaiana, a Brazilian town occupied by a besieged Paraguayan army. The Emperor rode within rifle-shot of Uruguaiana, but the Paraguayans did not attack him. To avoid further bloodshed, he offered terms of surrender to the Paraguayan commander, who accepted. Pedro II's coordination of the military operations and his personal example played a decisive role in successfully repulsing the Paraguayan invasion of Brazilian territory. Before returning to Rio de Janeiro, he received the British diplomatic envoy Edward Thornton, who apologized on behalf of Queen Victoria and the British Government for the crisis between the empires. The Emperor regarded this diplomatic victory over the most powerful nation of the world as sufficient and renewed friendly relations. ### Total victory and its heavy costs Against all expectations, the war continued for five years. During this period, Pedro II's time and energy were devoted to the war effort. He tirelessly worked to raise and equip troops to reinforce the front lines and to push forward the fitting of new warships for the navy. The rape of women, widespread violence against civilians, ransacking and destruction of properties that had occurred during Paraguay's invasion of Brazilian territory had made a deep impression on him. He warned the Countess of Barral in November 1866 that "the war should be concluded as honor demands, cost what it cost." "Difficulties, setbacks, and war-weariness had no effect on his quiet resolve", said Barman. Mounting casualties did not distract him from advancing what he saw as Brazil's righteous cause, and he stood prepared to personally sacrifice his own throne to gain an honorable outcome. Writing in his journal a few years previously Pedro II remarked: "What sort of fear could I have? That they take the government from me? Many better kings than I have lost it, and to me it is no more than the weight of a cross which it is my duty to carry." At the same time, Pedro II worked to prevent quarrels between the national political parties from impairing the military response. The Emperor prevailed over a serious political crisis in July 1868 resulting from a quarrel between the cabinet and Luís Alves de Lima e Silva (then-Marques and later Duke of Caxias), the commander-in-chief of the Brazilian forces in Paraguay. Caxias was also a politician and was a member of the opposing party to the ministry. The Emperor sided with him, leading to the cabinet's resignation. 
As Pedro II maneuvered to bring about a victorious outcome in the conflict with Paraguay, he threw his support behind the political parties and factions that seemed to be most useful in the effort. The reputation of the monarchy was harmed and its trusted position as an impartial mediator was severely impacted in the long term. He was unconcerned for his personal position, and regardless of the impact upon the imperial system, he determined to put the national interest ahead of any potential harm caused by such expediencies. His refusal to accept anything short of total victory was pivotal in the outcome. His tenacity was rewarded with the news that López had died in battle on 1 March 1870, bringing the war to a close. Pedro II turned down the General Assembly's suggestion to erect an equestrian statue of him to commemorate the victory and chose instead to use the money to build elementary schools.

## Apogee

### Abolitionism

In the 1870s, progress was made in both social and political spheres as segments of society benefited from the reforms and shared in the increasing prosperity. Brazil's international reputation for political stability and investment potential greatly improved. The Empire was seen as a modern and progressive nation unequaled, with the exception of the United States, in the Americas. The economy began growing rapidly and immigration flourished. Railroad, shipping and other modernization projects were adopted. With "slavery destined for extinction and other reforms projected, the prospects for 'moral and material advances' seemed vast." In 1870, few Brazilians opposed slavery and even fewer openly condemned it. Pedro II, who did not own slaves, was one of the few who did oppose slavery. Its abolition was a delicate subject. Slaves were used by all classes, from the richest to the poorest. Pedro II wanted to end the practice gradually to soften the impact on the national economy. With no constitutional authority to directly intervene to abolish slavery, the Emperor would need to use all his skills to convince, influence, and gather support among politicians to achieve his goal. His first open move had occurred in 1850, when he threatened to abdicate unless the General Assembly declared the Atlantic slave trade illegal. Having dealt with the overseas supply of new slaves, Pedro II turned his attention in the early 1860s to removing the remaining source: enslavement of children born to slaves. Legislation was drafted at his initiative, but the conflict with Paraguay delayed discussion of the proposal in the General Assembly. Pedro II openly asked for the gradual eradication of slavery in the speech from the throne of 1867. He was heavily criticized, and his move was condemned as "national suicide." Critics argued "that abolition was his personal desire and not that of the nation." He consciously ignored the growing political damage to his image and to the monarchy in consequence of his support for abolition. Eventually, a bill pushed through by Prime Minister José Paranhos was enacted as the Law of Free Birth on 28 September 1871, under which all children born to slave women after that date were considered free-born.

### To Europe and North Africa

On 25 May 1871, Pedro II and his wife traveled to Europe. He had long desired to vacation abroad. When news arrived that his younger daughter, the 23-year-old Leopoldina, had died in Vienna of typhoid fever on 7 February, he finally had a pressing reason to venture outside the Empire.
Upon arriving in Lisbon, Portugal, he immediately went to the Janelas Verdes palace, where he met with his stepmother, Amélie of Leuchtenberg. The two had not seen each other in forty years, and the meeting was emotional. Pedro II remarked in his journal: "I cried from happiness and also from sorrow seeing my Mother so affectionate toward me but so aged and so sick." The Emperor proceeded to visit Spain, Great Britain, Belgium, Germany, Austria, Italy, Egypt, Greece, Switzerland, and France. In Coburg, he visited his daughter's tomb. He found this to be "a time of release and freedom". He traveled under the assumed name "Dom Pedro de Alcântara", insisting upon being treated informally and staying only in hotels. He spent his days sightseeing and conversing with scientists and other intellectuals with whom he shared interests. The European sojourn proved to be a success, and his demeanor and curiosity won respectful notices in the nations which he visited. The prestige of both Brazil and Pedro II was further enhanced during the tour when news came from Brazil that the Law of Free Birth, abolishing the last source of enslavement, had been ratified. The imperial party returned to Brazil in triumph on 31 March 1872.

### Religious Issue

Soon after returning to Brazil, Pedro II was faced with an unexpected crisis. The Brazilian clergy had long been understaffed, undisciplined and poorly educated, leading to a great loss of respect for the Catholic Church. The imperial government had embarked upon a program of reform to address these deficiencies. As Catholicism was the state religion, the government exercised a great deal of control over Church affairs, paying clerical salaries, appointing parish priests, nominating bishops, ratifying papal bulls and overseeing seminaries. In pursuing reform, the government selected bishops who satisfied its criteria for education, support for reform and moral fitness. However, as more capable men began to fill the clerical ranks, resentment of government control over the Church increased. The bishops of Olinda and Belém (in the provinces of Pernambuco and Pará, respectively) were two of the new generation of educated and zealous Brazilian clerics. They had been influenced by ultramontanism, which spread among Catholics in this period. In 1872, they ordered Freemasons expelled from lay brotherhoods. While European Freemasonry often tended towards atheism and anti-clericalism, things were much different in Brazil, where membership in Masonic orders was common—although Pedro II himself was not a Freemason. The government headed by the Viscount of Rio Branco tried on two separate occasions to persuade the bishops to rescind their orders, but they refused. This led to their trial and conviction by the Superior Court of Justice. In 1874, they were sentenced to four years at hard labor, although the Emperor commuted this to imprisonment only. Pedro II played a decisive role by unequivocally backing the government's actions. He was a conscientious adherent of Catholicism, which he viewed as advancing important civilizing and civic values. While he avoided anything that could be considered unorthodox, he felt free to think and behave independently. The Emperor accepted new ideas, such as Charles Darwin's theory of evolution, of which he remarked that "the laws that he [Darwin] has discovered glorify the Creator". He was moderate in his religious beliefs, but could not accept disrespect for civil law and government authority.
As he told his son-in-law: "[The government] has to ensure that the constitution is obeyed. In these proceedings there is no desire to protect masonry; but rather the goal of upholding the rights of the civilian power." The crisis was resolved in September 1875 after the Emperor grudgingly agreed to grant full amnesty to the bishops and the Holy See annulled the interdicts. ### To the United States, Europe, and Middle East Once again the Emperor traveled abroad, this time going to the United States. He was accompanied by his faithful servant Rafael, who had raised him from childhood. Pedro II arrived in New York City on 15 April 1876, and set out from there to travel throughout the country; going as far as San Francisco in the west, New Orleans in the south, Washington, D.C., and north to Toronto, Canada. The trip was "an unalloyed triumph", Pedro II making a deep impression on the American people with his simplicity and kindness. He then crossed the Atlantic, where he visited Denmark, Sweden, Finland, Russia, the Ottoman Empire, Greece, the Holy Land, Egypt, Italy, Austria, Germany, France, Britain, the Netherlands, Switzerland, and Portugal. He returned to Brazil on 22 September 1877. Pedro II's trips abroad made a deep psychological impact. While traveling, he was largely freed of the restrictions imposed by his office. Under the pseudonym "Pedro de Alcântara", he enjoyed moving about as an ordinary person, even taking a train journey solely with his wife. Only while touring abroad could the Emperor shake off the formal existence and demands of the life he knew in Brazil. It became more difficult to reacclimate to his routine as head of state upon returning. Upon his sons' early deaths, the Emperor's faith in the monarchy's future had evaporated. His trips abroad now made him resentful of the emperorship assigned to him at the age of five. If he previously had no interest in securing the throne for the next generation, he now had no desire to keep it going during his own lifetime. ## Decline and fall ### Decline During the 1880s, Brazil continued to prosper and social diversity increased markedly, including the first organized push for women's rights. On the other hand, letters written by Pedro II reveal a man grown world-weary with age and having an increasingly alienated and pessimistic outlook. He remained respectful of his duty and was meticulous in performing the tasks demanded of the imperial office, albeit often without enthusiasm. Because of his increasing "indifference towards the fate of the regime" and his lack of action in support of the imperial system once it was challenged, historians have attributed the "prime, perhaps sole, responsibility" for the dissolution of the monarchy to the Emperor himself. After their experience of the perils and obstacles of government, the political figures who had arisen during the 1830s saw the Emperor as providing a fundamental source of authority essential for governing and for national survival. These elder statesmen began to die off or retire from government until, by the 1880s, they had almost entirely been replaced by a newer generation of politicians who had no experience of the early years of Pedro II's reign. They had only known a stable administration and prosperity and saw no reason to uphold and defend the imperial office as a unifying force beneficial to the nation. To them, Pedro II was merely an old and increasingly sick man who had steadily eroded his position by taking an active role in politics for decades. 
Previously he had been above criticism, but now his every action and inaction prompted meticulous scrutiny and open criticism. Many young politicians had become apathetic toward the monarchic regime and, when the time came, they would do nothing to defend it. Pedro II's achievements went unremembered and unconsidered by the ruling elites. By his very success, the Emperor had made his position seem unnecessary. The lack of an heir who could feasibly provide a new direction for the nation also diminished the long-term prospects of the Brazilian monarchy. The Emperor loved his daughter Isabel, but he considered the idea of a female successor antithetical to the role required of Brazil's ruler. He viewed the death of his two sons as a sign that the Empire was destined to be supplanted. Resistance to accepting a female ruler was also shared by the political establishment. Even though the Constitution allowed female succession to the throne, Brazil was still very traditional, and only a male successor was thought capable of serving as head of state.

### Abolition of slavery and coup d'état

By June 1887, the Emperor's health had considerably worsened and his personal doctors suggested going to Europe for medical treatment. While in Milan, he passed two weeks between life and death, even receiving the anointing of the sick. While recovering in bed, on 22 May 1888 he received news that slavery had been abolished in Brazil. With a weak voice and tears in his eyes, he said, "Great people! Great people!" Pedro II returned to Brazil and disembarked in Rio de Janeiro in August 1888. The "whole country welcomed him with an enthusiasm never seen before. From the capital, from the provinces, from everywhere, arrived proofs of affection and veneration." With the devotion expressed by Brazilians upon the return of the Emperor and the Empress from Europe, the monarchy seemed to enjoy unshakable support and to be at the height of its popularity. The nation enjoyed great international prestige during the final years of the Empire, and it had become an emerging power within the international arena. Predictions of economic and labor disruption caused by the abolition of slavery failed to materialize and the 1888 coffee harvest was successful. The end of slavery had, however, resulted in an explicit shift of support to republicanism by rich and powerful coffee farmers who held great political, economic, and social power in the country. Republicanism was an elitist creed which never flourished in Brazil, with little support in the provinces. The combination of republican ideas and the dissemination of positivism among the army's lower and middle officer ranks led to indiscipline among the corps and became a serious threat to the monarchy. These officers dreamed of a dictatorial republic, which they believed would be superior to the monarchy. Although there was no desire among the majority of the Brazilian population to change the form of government, the civilian republicans began pressuring army officers to overthrow the monarchy. They launched a coup d'état, arrested Prime Minister Afonso Celso, Viscount of Ouro Preto, and instituted the republic on 15 November 1889. The few people who witnessed what occurred did not realize that it was a rebellion. Historian Lídia Besouchet noted that "[r]arely has a revolution been so minor." During the ordeal, Pedro II showed no emotion, as if unconcerned about the outcome. He dismissed all suggestions for quelling the rebellion that politicians and military leaders put forward.
When he heard the news of his deposition, he simply commented: "If it is so, it will be my retirement. I have worked too hard and I am tired. I will go rest then." He and his family were sent into exile in Europe on 17 November.

## Exile and legacy

### Last years

Teresa Cristina died three weeks after their arrival in Europe, and Isabel and her family moved elsewhere, while Pedro settled first in Cannes and later in Paris. Pedro's last two years were lonely and melancholic; short of money, he lived in modest hotels and wrote in his journal of dreams in which he was allowed to return to Brazil. He never supported a restoration of the monarchy, once stating that he had no desire "to return to the position which I occupied, especially not by means of conspiracy of any sort." One day he caught an infection that progressed quickly into pneumonia. Pedro rapidly declined and died at 00:35 on 5 December 1891, surrounded by his family. His last words were "May God grant me these last wishes—peace and prosperity for Brazil". While the body was being prepared, a sealed package was found in the room, and next to it a message written by the Emperor himself: "It is soil from my country, I wish it to be placed in my coffin in case I die away from my fatherland." Isabel wished to hold a discreet and private burial ceremony, but she eventually agreed to the French government's request for a state funeral. On 9 December, thousands of mourners attended the ceremony at La Madeleine. Aside from Pedro's family, these included Francesco II, former king of the Two Sicilies; Isabel II, former queen of Spain; Philippe, comte de Paris; and other members of European royalty. Also present were General Joseph Brugère, representing President Sadi Carnot; the presidents of the Senate and the Chamber of Deputies as well as their members; diplomats; and other representatives of the French government. Nearly all members of the Institut de France were in attendance. Other governments from the Americas and Europe sent representatives, as did the Ottoman Empire, Persia, China, and Japan. Following the services, the coffin was taken in procession to the railway station to begin its trip to Portugal. Around 300,000 people lined the route under incessant rain and cold. The journey continued on to the Church of São Vicente de Fora near Lisbon, where the body of Pedro was interred in the Royal Pantheon of the House of Braganza on 12 December. The Brazilian republican government, "fearful of a backlash resulting from the death of the Emperor", banned any official reaction. Nevertheless, Brazilians were far from indifferent to Pedro's demise, and "repercussions in Brazil were also immense, despite the government's effort to suppress. There were demonstrations of sorrow throughout the country: shuttered business activity, flags displayed at half-staff, black armbands on clothes, death knells, religious ceremonies." Masses were held in memory of Pedro throughout Brazil, and he and the monarchy were praised in the eulogies that followed.

### Legacy

After his fall, Brazilians remained attached to the former Emperor, who was still a popular and highly praised figure. This view was even stronger among those of African descent, who equated the monarchy with freedom because of his and his daughter Isabel's part in the abolition of slavery.
The continued support for the deposed monarch is largely credited to a generally held and unextinguished belief that he was a truly "wise, benevolent, austere and honest ruler", said historian Ricardo Salles. The positive view of Pedro II, and nostalgia for his reign, only grew as the nation quickly fell into a series of economic and political crises which Brazilians attributed to the Emperor's overthrow. Strong feelings of guilt manifested among republicans, and these became increasingly evident upon the Emperor's death in exile. They praised Pedro II, who was seen as a model of republican ideals, and the imperial era, which they believed should be regarded as an example to be followed by the young republic. In Brazil, the news of the Emperor's death "aroused a genuine sense of regret among those who, without sympathy for a restoration, acknowledged both the merits and the achievements of their deceased ruler." His remains, as well as those of his wife, were returned to Brazil in 1921 in time for the centenary of the Brazilian independence. The government granted Pedro II dignities befitting a head of state. A national holiday was declared and the return of the Emperor as a national hero was celebrated throughout the country. Thousands attended the main ceremony in Rio de Janeiro where, according to historian Pedro Calmon, the "elderly people cried. Many knelt down. All clapped hands. There was no distinction between republicans and monarchists. They were all Brazilians." This homage marked the reconciliation of Republican Brazil with its monarchical past. Historians have expressed high regard for Pedro II and his reign. The scholarly literature dealing with him is vast and, with the exception of the period immediately after his ouster, overwhelmingly positive, and even laudatory. He has been regarded by several historians in Brazil as the greatest Brazilian. In a manner similar to methods which were used by republicans, historians point to the Emperor's virtues as an example to be followed, although none go so far as to advocate a restoration of the monarchy. Historian Richard Graham noted that "[m]ost twentieth-century historians, moreover, have looked back on the period [of Pedro II's reign] nostalgically, using their descriptions of the Empire to criticize—sometimes subtly, sometimes not—Brazil's subsequent republican or dictatorial regimes." ## Titles and honors ### Titles and styles The Emperor's full style and title were "His Imperial Majesty Dom Pedro II, Constitutional Emperor and Perpetual Defender of Brazil". 
### Honors

National Honors

Emperor Pedro II was Grand Master of the following Brazilian Orders:

- Order of Our Lord Jesus Christ
- Order of Saint Benedict of Aviz
- Order of Saint James of the Sword
- Order of the Southern Cross
- Order of Pedro I
- Order of the Rose

Foreign Honors

- Grand Cross of the Austro-Hungarian Order of Saint Stephen
- Grand Cordon of the Belgian Order of Leopold
- Grand Cross of the Romanian Order of the Star
- Knight of the Danish Order of the Elephant
- Knight of the Order of Saint Januarius of the Two Sicilies
- Grand Cross of the Order of Saint Ferdinand and of Merit of the Two Sicilies
- Grand Cross of the French Légion d'honneur
- Grand Cross of the Greek Order of the Redeemer
- Grand Cross of the Dutch Order of the Netherlands Lion
- Knight of the Spanish Order of the Golden Fleece
- Stranger Knight of the British Order of the Garter
- Grand Cross of the Order of Malta
- Grand Cross of the Order of the Holy Sepulchre
- Senator Grand Cross with Collar of the Sacred Military Constantinian Order of Saint George of Parma
- Grand Cross of the Portuguese Order of the Immaculate Conception of Vila Viçosa
- Grand Cross of the Portuguese Order of the Tower and Sword
- Knight of the Prussian Order of the Black Eagle
- Knight 1st Class of all Russian orders of chivalry
- Knight of the Sardinian Order of the Most Holy Annunciation
- Knight of the Swedish Royal Order of the Seraphim
- Commander Grand Cross of the Swedish Order of the Polar Star
- Member 1st Class of the Ottoman Order of the Medjidie
- Knight of the House Order of Fidelity of Baden
- Knight of the Order of Berthold the First of Baden
- Knight of the Bavarian Order of Saint Hubert
- Grand Cross of the Order of Ernest the Pious
- Grand Cross of the Order of the White Falcon of Saxe-Weimar
- Knight of the Saxon Order of the Rue Crown
- Grand Cross with Collar of the Imperial Order of the Mexican Eagle
- Grand Cross of the Order of Saint Charles of Monaco

## Genealogy

### Ancestry

The ancestry of Emperor Pedro II:

### Issue

## See also

- Dom Pedro aquamarine, named after Pedro II and his father, is the world's largest cut aquamarine gem.
195,781
Australian magpie
1,171,661,390
Medium-sized black and white passerine bird
[ "Artamidae", "Birds described in 1801", "Birds of Australia", "Birds of Indonesia", "Birds of Papua New Guinea", "Birds of Victoria (state)", "Taxa named by John Latham (ornithologist)" ]
The Australian magpie (Gymnorhina tibicen) is a black and white passerine bird native to Australia and southern New Guinea, and introduced to New Zealand. Although once considered to be three separate species, it is now considered to be one, with nine recognised subspecies. A member of the Artamidae, the Australian magpie is placed in its own genus Gymnorhina and is most closely related to the black butcherbird (Melloria quoyi). It is not closely related to the European magpie, which is a corvid. The adult Australian magpie is a fairly robust bird ranging from 37 to 43 cm (14.5 to 17 in) in length, with black and white plumage, gold brown eyes and a solid wedge-shaped bluish-white and black bill. The male and female are similar in appearance, but can be distinguished by differences in back markings. The male has pure white feathers on the back of the head and the female has white blending to grey feathers on the back of the head. With its long legs, the Australian magpie walks rather than waddles or hops and spends much time on the ground. Described as one of Australia's most accomplished songbirds, the Australian magpie has an array of complex vocalisations. It is omnivorous, with the bulk of its varied diet made up of invertebrates. It is generally sedentary and territorial throughout its range. Common and widespread, it has adapted well to human habitation and is a familiar bird of parks, gardens and farmland in Australia and New Guinea. This species is commonly fed by households around Australia, but in spring (and occasionally in autumn) a small minority of breeding magpies (almost always males) become aggressive, swooping and attacking those who approach their nests. Research has shown that magpies can recognise at least 100 different people, and may be less likely to swoop individuals they have befriended. Over 1,000 Australian magpies were introduced into New Zealand from 1864 to 1874, but have subsequently been accused of displacing native birds and are now treated as a pest species. Introductions also occurred in the Solomon Islands and Fiji, where the birds are not considered an invasive species. The Australian magpie is the mascot of several Australian and New Zealand sporting teams, including the Collingwood Magpies, the Western Suburbs Magpies, Port Adelaide Magpies and, in New Zealand, the Hawke's Bay Magpies. ## Taxonomy and nomenclature The Australian magpie was first described in the scientific literature by English ornithologist John Latham in 1801 as Coracias tibicen, the type collected in the Port Jackson region. Its specific epithet derived from the Latin tibicen "flute-player" or "piper" in reference to the bird's melodious call. An early recorded vernacular name is piping poller, written on a painting by Thomas Watling, one of a group known collectively as the Port Jackson Painter, sometime between 1788 and 1792. Other names used include piping crow-shrike, piping shrike, piper, maggie, flute-bird and organ-bird. The term bell-magpie was proposed to help distinguish it from the European magpie but failed to gain wide acceptance. Tarra-won-nang, or djarrawunang, wibung, and marriyang were names used by the local Eora and Darug inhabitants of the Sydney Basin. Booroogong and garoogong were Wiradjuri words and Victorian terms included carrak (Jardwadjali), kuruk (Western Victorian languages), kiri (Dhauwurd Wurrung language) and kurikari (Wuluwurrung). Among the Kamilaroi, it is burrugaabu, galalu, or guluu. 
In Western Australia it is known as warndurla among the Yindjibarndi people of the central and western Pilbara, and koorlbardi amongst the south west Noongar peoples. In South Australia, where it is the State emblem, it is the kurraka (Kaurna), murru (Narungga), urrakurli (Adnyamathanha), goora (Barngarla), konlarru (Ngarrindjeri) and tuwal (Bunganditj). The bird was named for its similarity in colouration to the European magpie; it was a common practice for early settlers to name plants and animals after European counterparts. However, the European magpie is a member of the Corvidae, while its Australian counterpart is placed in the family Artamidae (although both are members of a broad corvid lineage). The Australian magpie's affinities with butcherbirds and currawongs were recognised early on and the three genera were placed in the family Cracticidae in 1914 by John Albert Leach after he had studied their musculature. American ornithologists Charles Sibley and Jon Ahlquist recognised the close relationship between woodswallows and the butcherbirds in 1985, and combined them into a Cracticini clade within the Artamidae. The Australian magpie is placed in its own monotypic genus Gymnorhina, which was introduced by the English zoologist George Robert Gray in 1840. The name of the genus is from the Ancient Greek gumnos for "naked" or "bare" and rhis, rhinos "nostrils". Some authorities, such as Glen Storr in 1952 and Leslie Christidis and Walter Boles in their 2008 checklist, have placed the Australian magpie in the butcherbird genus Cracticus, arguing that its adaptation to ground-living is not enough to consider it a separate genus. A molecular genetic study published in 2013 showed that the Australian magpie is a sister taxon to the black butcherbird (Melloria quoyi) and that the two species are in turn sister to a clade that includes the other butcherbirds in the genus Cracticus. The ancestor to the two species is thought to have split from the other butcherbirds between 8.3 and 4.2 million years ago, during the late Miocene to early Pliocene, while the two species themselves diverged sometime during the Pliocene (5.8–3.0 million years ago). The Australian magpie was subdivided into three species in the literature for much of the twentieth century—the black-backed magpie (G. tibicen), the white-backed magpie (G. hypoleuca), and the western magpie (G. dorsalis). They were later noted to hybridise readily where their territories crossed, with hybrid grey or striped-backed magpies being quite common. This resulted in them being reclassified as one species by Julian Ford in 1969, with most recent authors following suit.

### Subspecies

There are currently thought to be nine subspecies of the Australian magpie, although there are large zones of overlap with intermediate forms between the taxa. There is a tendency for birds to become larger with increasing latitude, the southern subspecies being larger than those further north, except the Tasmanian form, which is small. The original form, known as the black-backed magpie and classified as Gymnorhina tibicen, has been split into four black-backed races:

- G. tibicen tibicen, the nominate form, is a large subspecies found in southeastern Queensland, from the vicinity of Moreton Bay through eastern New South Wales to Moruya, New South Wales, almost to the Victorian border. It is coastal or near-coastal and is restricted to east of the Great Dividing Range.
- G.
tibicen terraereginae, found from Cape York and the Gulf Country southwards across Queensland to the coast between Halifax Bay in the north and south to the Mary River, and central and western New South Wales and into northern South Australia, is a small to medium-sized subspecies. The plumage is the same as that of subspecies tibicen, although the female has a shorter black tip to the tail. The wings and tarsus are shorter and the bill proportionally longer. It was originally described by Gregory Mathews in 1912, its subspecies name a Latin translation, terra "land" reginae "queen's" of "Queensland". Hybridisation with the large white-backed subspecies tyrannica occurs in northern Victoria and southeastern New South Wales; intermediate forms have black bands of varying sizes in white-backed area. Three-way hybridisation occurs between Bega and Batemans Bay on the New South Wales south coast. - G. tibicen eylandtensis, the Top End magpie, is found from the Kimberley in northern Western Australia, across the Northern Territory through Arnhem Land and Groote Eylandt and into the Gulf Country. It is a small subspecies with a long and thinner bill, with birds of Groote Eylandt possibly even smaller than mainland birds. It has a narrow black terminal tailband, and a narrow black band; the male has a large white nape, the female pale grey. This form was initially described by H. L. White in 1922. It intergrades with subspecies terraereginae southeast of the Gulf of Carpentaria. - G. tibicen longirostris, the long-billed magpie, is found across northern Western Australia, from Shark Bay into the Pilbara. Named in 1903 by Alex Milligan, it is a medium-sized subspecies with a long thin bill. Milligan speculated the bill may have been adapted for the local conditions, slim fare meaning the birds had to pick at dangerous scorpions and spiders. There is a broad area of hybridisation with the western dorsalis in southern central Western Australia from Shark Bay south to the Murchison River and east to the Great Victoria Desert. The white-backed magpie, originally described as Gymnorhina hypoleuca by John Gould in 1837, has also been split into races: - G. tibicen tyrannica, a very large white-backed form found from Twofold Bay on the New South Wales far south coast, across southern Victoria south of the Great Dividing Range through to the Coorong in southeastern South Australia. It was first described by Schodde and Mason in 1999. It has a broad black tail band. - G. tibicen telonocua, found from Cowell south into the Eyre and Yorke Peninsulas in southern South Australia, as well as the southwestern Gawler Ranges. Described by Schodde and Mason in 1999, its subspecific name is an anagram of leuconota "white-backed". It is very similar to tyrannica, differing in having a shorter wing and being lighter and smaller overall. The bill is relatively short compared with other magpie subspecies. Intermediate forms are found in the Mount Lofty Ranges and on Kangaroo Island. - G. tibicen hypoleuca now refers to a small white-backed subspecies with a short compact bill and short wings, found on King and Flinders Islands, as well as Tasmania. - The western magpie, G. tibicen dorsalis was originally described as a separate species by A. J. Campbell in 1895 and is found in the fertile south-west corner of Western Australia. The adult male has a white back and most closely resembles subspecies telonocua, though it is a little larger with a longer bill and the black tip of its tail plumage is narrower. 
The female is unusual in that it has a scalloped black or brownish-black mantle and back; the dark feathers there are edged with white. This area appears a more uniform black as the plumage ages and the edges are worn away. Both sexes have black thighs.
- The New Guinean magpie, G. tibicen papuana, is a little-known subspecies found in southern New Guinea. The adult male has a mostly white back with a narrow black stripe, and the female a blackish back; the black feathers here are tipped with white, similar to subspecies dorsalis. It has a long deep bill resembling that of subspecies longirostris. Genetically it is closely related to a western lineage of Australian magpies comprising subspecies dorsalis, longirostris and eylandtensis, suggesting their ancestors occupied the savannah country that formed a land bridge between New Guinea and Australia and was submerged around 16,500 years ago.

## Description

The adult magpie ranges from 37 to 43 cm (14.5 to 17 in) in length with a 65–85 cm (25.5–33.5 in) wingspan, and weighs 220–350 g (7.8–12.3 oz). Its robust wedge-shaped bill is bluish-white bordered with black, with a small hook at the tip. The black legs are long and strong. The plumage is pure glossy black and white; both sexes of all subspecies have black heads, wings and underparts with white shoulders. The tail has a black terminal band. The nape is white in the male and light greyish-white in the female. Mature magpies have dull red eyes, in contrast to the yellow eyes of currawongs and white eyes of Australian ravens and crows. The main difference between the subspecies lies in the "saddle" markings on the back below the nape. Black-backed subspecies have a black saddle and white nape. White-backed subspecies have a wholly white nape and saddle. The male Western Australian subspecies dorsalis is also white-backed, but the equivalent area in the female is scalloped black. Juveniles have lighter greys and browns amidst the starker blacks and whites of their plumage; two- or three-year-old birds of both sexes closely resemble, and are difficult to distinguish from, adult females. Immature birds have dark brownish eyes until around two years of age. Australian magpies generally live to around 25 years of age, though ages of up to 30 years have been recorded. The reported age of first breeding has varied according to area, but the average is between the ages of three and five years. Well-known and easily recognisable, the Australian magpie is unlikely to be confused with any other species. The pied butcherbird has a similar build and plumage, but has white underparts unlike the former species' black underparts. The magpie-lark is a much smaller and more delicate bird with complex and very different banded black and white plumage. Currawong species have predominantly dark plumage and heavier bills.

### Vocalisations

One of Australia's most highly regarded songbirds, the Australian magpie has a wide variety of calls, many of which are complex. Pitch may vary by as much as four octaves, and the bird can mimic over 35 species of native and introduced bird species, as well as dogs and horses. Magpies have even been noted to mimic human speech when living in close proximity to humans. Its complex, musical, warbling call is one of the most familiar Australian bird sounds.
In Denis Glover's poem "The Magpies", the mature magpie's call is described as quardle oodle ardle wardle doodle, one of the most famous lines in New Zealand poetry, and as waddle giggle gargle paddle poodle in the children's book Waddle Giggle Gargle by Pamela Allen. The bird has been known to mimic environmental sounds as well, including the noises made by emergency vehicles during the New South Wales bushfire state of emergency. When alone, a magpie may make a quiet musical warbling; these complex melodious warbles or subsongs are pitched at 2–4 kHz and do not carry for long distances. These songs have been recorded up to 70 minutes in duration and are more frequent after the end of the breeding season. Pairs of magpies often take up a loud musical calling known as carolling to advertise or defend their territory; one bird initiates the call with the second (and sometimes more) joining in. Often preceded by warbling, carolling is pitched between 6 and 8 kHz and has 4–5 elements with slurring indistinct noise in between. Birds will adopt a specific posture by tilting their heads back, expanding their chests, and moving their wings backwards. A group of magpies will sing a short repetitive version of carolling just before dawn (dawn song), and at twilight after sundown (dusk song), in winter and spring. Fledgling and juvenile magpies emit a repeated short and loud (80 dB), high-pitched (8 kHz) begging call. Magpies may indulge in beak-clapping to warn other species of birds. They employ several high-pitched (8–10 kHz) alarm or rallying calls when intruders or threats are spotted. Distinct calls have been recorded for the approach of eagles and monitor lizards.

## Distribution and habitat

The Australian magpie is found in the Trans-Fly region of southern New Guinea, between the Oriomo River and Muli Strait, and across most of Australia, bar the tip of Cape York, the Gibson and Great Sandy Deserts, and the southwest of Tasmania. Birds taken mainly from Tasmania and Victoria were introduced into New Zealand by local Acclimatisation Societies of Otago and Canterbury in the 1860s, with the Wellington Acclimatisation Society releasing 260 birds in 1874. White-backed forms occur in both the North Island and the eastern South Island, while black-backed forms are found in the Hawke's Bay region. Magpies were introduced into New Zealand to control agricultural pests, and were therefore a protected species until 1951. They are thought to affect native New Zealand bird populations such as the tūī and kererū, sometimes raiding nests for eggs and nestlings, although studies by Waikato University have cast doubt on this, and much blame on the magpie as a predator in the past has been anecdotal only. Introductions also occurred in the Solomon Islands and Sri Lanka, although the species has failed to become established. It has become established in western Taveuni in Fiji, however. The Australian magpie prefers open areas such as grassland, fields and residential areas such as parks, gardens, golf courses, and streets, with scattered trees or forest nearby. Birds nest and shelter in trees but forage mainly on the ground in these open areas. It has also been recorded in mature pine plantations; birds only occupy rainforest and wet sclerophyll forest in the vicinity of cleared areas.
In general, evidence suggests that the range and population of the Australian magpie have increased with land-clearing, although local declines have been noted in Queensland due to a 1902 drought, and in Tasmania in the 1930s; the cause for the latter is unclear, but rabbit baiting, pine tree removal, and the spread of the masked lapwing (Vanellus miles) have been implicated.

## Behaviour

The Australian magpie is almost exclusively diurnal, although it may call into the night, like some other members of the Artamidae. Natural predators of magpies include various species of monitor lizard and the barking owl. Birds are often killed on roads or electrocuted by powerlines, or poisoned after killing and eating house sparrows or mice, rats or rabbits targeted with baiting. The Australian raven may take nestlings left unattended. On the ground, the Australian magpie moves around by walking, and is the only member of the Artamidae to do so; woodswallows, butcherbirds and currawongs all tend to hop with legs parallel. The magpie has a short femur (thigh bone) and a long lower leg below the knee, suited to walking rather than running, although birds can run in short bursts when hunting prey. The magpie is generally sedentary and territorial throughout its range, living in groups occupying a territory, or in flocks or fringe groups. A group may occupy and defend the same territory for many years. Much energy is spent defending a territory from intruders, particularly other magpies, and different behaviours are seen with different opponents. The sight of a raptor results in a rallying call by sentinel birds and subsequent coordinated mobbing of the intruder. Magpies place themselves on either side of the bird of prey so that it will be attacked from behind should it strike a defender, and harass and drive the raptor to some distance beyond the territory. A group will use carolling as a signal to advertise ownership and warn off other magpies. In the negotiating display, the one or two dominant magpies parade along the border of the defended territory while the rest of the group stand back a little and look on. The leaders may fluff their feathers or carol repeatedly. In a group strength display, employed if both the opposing and defending groups are of roughly equal numbers, all magpies will fly and form a row at the border of the territory. The defending group may also resort to an aerial display where the dominant magpies, or sometimes the whole group, swoop and dive while calling to warn an intruding magpie's group. A wide variety of displays are seen, with aggressive behaviours outnumbering pro-social ones. Crouching low and uttering quiet begging calls are common signs of submission. The manus flutter is a submissive display in which a magpie will flutter the primary feathers of its wings. A magpie, particularly a juvenile, may also fall, roll over on its back and expose its underparts. Birds may fluff up their flank feathers as an aggressive display or preceding an attack. Young birds display various forms of play behaviour, either by themselves or in groups, with older birds often initiating the proceedings with juveniles. These may involve picking up, manipulating or tugging at various objects such as sticks, rocks or bits of wire, and handing them to other birds. A bird may pick up a feather or leaf and fly off with it, with other birds pursuing and attempting to bring down the leader by latching onto its tail feathers. Birds may jump on each other and even engage in mock fighting.
Play may even take place with other species such as blue-faced honeyeaters and Australasian pipits. A 2022 study showed cooperative behaviour, along with a moderate level of problem-solving, when magpies (G. tibicen) assisted one another to remove tracking devices placed on their bodies in a specially designed harness by researchers for conservation purposes. This was the first recorded example of birds acting in this way to remove tracking devices, a form of rescue behaviour. ### Breeding Magpies have a long breeding season which varies in different parts of the country; in northern parts of Australia they will breed between June and September, but not commence until August or September in cooler regions, and may continue until January in some alpine areas. The nest is a bowl-shaped structure made of sticks and lined with softer material such as grass and bark. Near human habitation, synthetic material may be incorporated. Nests are built exclusively by females and generally placed high up in a tree fork, often in an exposed position. The trees used are most commonly eucalypts, although a variety of other native trees as well as introduced pine, Crataegus, and elm have been recorded. Other bird species, such as the yellow-rumped thornbill (Acanthiza chrysorrhoa), willie wagtail (Rhipidura leucophrys), southern whiteface (Aphelocephala leucopsis), and (less commonly) noisy miner (Manorina melanocephala), often nest in the same tree as the magpie. The first two species may even locate their nest directly beneath a magpie nest, while the diminutive striated pardalote (Pardalotus striatus) has been known to make a burrow for breeding into the base of the magpie nest itself. These incursions are all tolerated by the magpies. The channel-billed cuckoo (Scythrops novaehollandiae) is a notable brood parasite in eastern Australia; magpies will raise cuckoo young, which eventually outcompete the magpie nestlings. The Australian magpie produces a clutch of two to five light blue or greenish eggs, which are oval in shape and about 30 by 40 mm (1.2 by 1.6 in). The chicks hatch synchronously around 20 days after incubation begins; like all passerines, the chicks are altricial—they are born pink, naked, and blind with large feet, a short broad beak and a bright red throat. Their eyes are fully open at around 10 days. Chicks develop fine downy feathers on their head, back and wings in the first week, and pinfeathers in the second week. The black and white colouration is noticeable from an early stage. Nestlings are usually fed exclusively by the female, though the male magpie will feed his partner. Individual males do feed nestlings and fledglings to varying degrees, ranging from sporadic feeding to feeding as frequently as the female. The Australian magpie is known to engage in cooperative breeding, and helper birds will assist in feeding and raising young. This does vary from region to region, and with the size of the group—the behaviour is rare or non-existent in pairs or small groups. Juvenile magpies begin foraging on their own three weeks after leaving the nest, and are mostly feeding themselves by six months old. Some birds continue begging for food until eight or nine months of age, but are usually ignored. Birds reach adult size by their first year. The age at which young birds disperse varies across the country, and depends on the aggressiveness of the dominant adult of the corresponding sex; males are usually evicted at a younger age. 
Many leave at around a year old, but the age of departure may range from eight months to four years. ### Feeding The Australian magpie is omnivorous, eating various items located at or near ground level including invertebrates such as earthworms, millipedes, snails, spiders and scorpions as well as a wide variety of insects—cockroaches, ants, earwigs, beetles, cicadas, moths and caterpillars and other larvae. Insects, including large adult grasshoppers, may be seized mid-flight. Skinks, frogs, mice and other small animals as well as grain, tubers, figs and walnuts have also been noted as components of their diet. It has even learnt to safely eat the poisonous cane toad by flipping it over and consuming the underparts. Predominantly a ground feeder, the Australian magpie paces open areas methodically searching for insects and their larvae. One study showed birds were able to find scarab beetle larvae by sound or vibration. Birds use their bills to probe into the earth or otherwise overturn debris in search of food. Smaller prey are swallowed whole, although magpies rub off the stingers of bees and wasps and irritating hairs of caterpillars before swallowing. ## Swooping Magpies are ubiquitous in urban areas all over Australia, and have become accustomed to people. A small percentage of birds become highly aggressive during breeding season from late August to late November – early December or occasionally late February to late April – early May, and will swoop and sometimes attack passersby. Attacks begin as the eggs hatch, increase in frequency and severity as the chicks grow, and tail off as the chicks leave the nest. Magpie attacks occur in most parts of Australia, though Tasmanian magpies are much less aggressive than their mainland counterparts. Magpie attacks can cause injuries, typically wounds to the head. Being unexpectedly swooped while cycling can result in loss of control of the bicycle, which may cause injury or even fatal accidents. Magpies may engage in an escalating series of behaviours to drive off intruders. Least threatening are alarm calls and distant swoops, where birds fly within several metres from behind and perch nearby. Next in intensity are close swoops, where a magpie will swoop in from behind or the side and audibly "snap" their beaks or even peck or bite at the face, neck, ears or eyes. More rarely, a bird may dive-bomb and strike the intruder's (usually a cyclist's) head with its chest. A magpie may rarely attack by landing on the ground in front of a person and lurching up and landing on the victim's chest and pecking at the face and eyes. ### Targets The percentage of magpies that swoop has been difficult to estimate but is significantly less than 9%. Almost all attacking birds (around 99%) are male, and they are generally known to attack pedestrians at around 50 m (160 ft) from their nest, and cyclists at around 100 m (330 ft). There appears to be some specificity in choice of attack targets, with the majority of individuals specializing on either pedestrians or cyclists. Younger people, lone people, and people travelling quickly (i.e., runners and cyclists) appear to be targeted most often by swooping magpies. Anecdotal evidence suggests that if a magpie sees a human trying to rescue a chick that has fallen from its nest, the bird will view this help as predation, and will become more aggressive to humans from then on. Some attacks have indirectly been fatal. 
For example, in 2021, a Brisbane woman tripped and fell onto her infant while attempting to avoid a swooping magpie, and the infant died. ### Prevention Magpies are a protected native species in Australia, so it is illegal to kill or harm them. However, some states provide exceptions for a magpie that attacks a human, allowing a particularly aggressive bird to be killed. Such a provision is made, for example, in section 54 of the South Australian National Parks and Wildlife Act. More commonly, an aggressive bird will be caught and relocated to an unpopulated area. Magpies have to be moved a considerable distance, as almost all are able to find their way home from distances of less than 25 km (16 mi). Removing the nest is of no use, as birds will breed again and possibly be more aggressive the second time around. If it is necessary to walk near the nest, wearing a broad-brimmed or legionnaire's hat or using an umbrella will deter attacking birds, but beanies and bicycle helmets are of little value, as birds attack the sides of the head and neck. Magpies prefer to swoop at the back of the head; therefore, keeping the magpie in sight at all times can discourage the bird. A basic disguise such as sunglasses worn on the back of the head may fool the magpie as to where a person is looking. Eyes painted on hats or helmets will deter attacks on pedestrians but not cyclists. Cyclists can deter attack by attaching a long pole with a flag to a bike, and the use of cable ties on helmets has become common and appears to be effective. Some claim that hand-feeding magpies can reduce the risk of swooping. Magpies will become accustomed to being fed by humans, and although they are wild, will return to the same place looking for handouts. The idea is that humans thereby appear less of a threat to the nesting birds. Although this has not been studied systematically, there are reports of its success. ## Cultural references The Australian magpie featured in Aboriginal folklore around Australia. The Yindjibarndi people of the Pilbara in the northwest of the country used the bird as a signal for sunrise, awakening them with its call. They were also familiar with its highly territorial nature, and it features in a song in their Burndud, or songs of customs. It was a totem bird of the people of the Illawarra region south of Sydney. Under the name piping shrike, the white-backed magpie was declared the official emblem of the Government of South Australia in 1901 by Governor Tennyson, and has featured on the South Australian flag since 1904. The magpie is a commonly used emblem of sporting teams in Australia, and its brash, cocky attitude has been likened to the Australian psyche. Such teams tend to wear uniforms with black and white stripes. The Collingwood Football Club adopted the magpie from a visiting South Australian representative team in 1892. The Port Adelaide Magpies likewise adopted the black and white colours and Magpie name in 1902. Other examples include Brisbane's Souths Logan Magpies and Sydney's Western Suburbs Magpies. Disputes over which was the first club to adopt the magpie emblem have been heated at times. Another club, Glenorchy Football Club of Tasmania, was forced to change uniform design when placed in the same league as another club (Claremont Magpies) with the same emblem. In New Zealand, the Hawke's Bay Rugby Union team, based in Napier, is also known as the Magpies. 
One of the best-known New Zealand poems is "The Magpies" by Denis Glover, with its refrain "Quardle oodle ardle wardle doodle", imitating the sound of the bird. The popular New Zealand comic Footrot Flats features a magpie character by the name of Pew. Other magpies depicted in fiction include: Magpie in Colin Thiele's 1974 children's book Magpie Island; Miss Magpie in The Adventures of Blinky Bill; and Penguin the magpie in Penguin Bloom. The sculpture Big Swoop in central Canberra was installed in Garema Place on 16 March 2022. The Australian magpie won the inaugural Australian Bird of the Year poll, conducted by Guardian Australia and BirdLife Australia in late 2017, with 19,926 votes (13.3%), narrowly ahead of the Australian white ibis. The magpie slumped to fourth place in the 2019 poll and to ninth place in the 2021 poll. The voting rules changed in all three years of the Bird of the Year poll, which may have affected the results. The magpie also won a 2023 ABC Science poll for Australia's favourite animal sound. ## See also - Australian magpies in New Zealand - Birds of Australia ## Explanatory notes
3,146,917
M-28 Business (Ishpeming–Negaunee, Michigan)
1,143,398,050
State trunkline highway business loop in Michigan, United States
[ "M-28 (Michigan highway)", "State highways in Michigan", "Transportation in Marquette County, Michigan" ]
Business M-28 (Bus. M-28) is a state trunkline highway serving as a business route that runs for approximately 4.9 miles (7.9 km) through the downtown districts of Ishpeming and Negaunee in the US state of Michigan. The trunkline provides a marked route for traffic diverting from U.S. Highway 41 (US 41) and M-28 through the two historic iron-mining communities. It is one of three business loops for M-numbered highways in the state of Michigan. There were previously two other Bus. M-28 designations for highways in Newberry and Marquette. The trunkline was originally a section of US 41/M-28 and M-35. The main highways ran through the two downtown areas until the 1930s, when US 41/M-28 was relocated to run near Teal Lake. The former routing had various names over the years. It was designated as an alternate route of the main highways, using both the US 41A/M-28A and Alt. US 41/Alt. M-28 designations before it was designated as Bus. M-28 in 1958. M-35 continued to run through downtown Negaunee along a section of the highway until the 1960s. A rerouting in 1999 moved the trunkline designation along Lakeshore Drive in Ishpeming, and a streetscape project rebuilt the road in Negaunee in 2005. ## Route description There are three business routes in the state of Michigan derived from M-numbered highways. The other two are for M-32 in Hillman and for M-60 in Niles. In the past, two other business routes for M-28 existed in Newberry (1936–1953) and Marquette (1974–1981), but they have since been retired. The extant Bus. M-28 designation remains for the loop through Ishpeming and Negaunee. ### Ishpeming Bus. M-28 begins at a signalized intersection on US 41/M-28 and the Lake Superior Circle Tour (LSCT) with Lakeshore Drive in the city of Ishpeming. The trunkline runs south along Lakeshore Drive under the tracks of the Lake Superior and Ishpeming Railroad (LS&I) and southeasterly towards Lake Bancroft. South of the lake, Bus. M-28 turns east on Division Street. Traffic along the highway here can view the towers of the Cliffs Shaft Mine Museum; the museum is dedicated to telling the story of underground iron ore mining in the region. Division Street carries the Bus. M-28 designation into the central business district of Ishpeming, where it runs past local businesses, Ishpeming High School and the original Ishpeming City Hall. On the east side of downtown, both the central machine shops and the research labs for Cleveland-Cliffs Iron Company are located on Division Street. Continuing east, the trunkline follows Ready Street over hills and through a residential area to the Ishpeming–Negaunee city line. ### Negaunee In Negaunee, the routing uses a street named County Road east from the city line. County Road passes Jackson Park, location of the first iron ore discovery in the area. The iron mined from the region accounted for half of the nation's supply between 1850 and 1900. South of downtown Negaunee, Bus. M-28 turns north along the west fork of Silver Street. The street runs north under an overpass that carries Rail Street, a former rail line into downtown Negaunee. The trunkline turns east on Jackson Street, running next to the Negaunee City Hall, which was built in 1914–1915 when the city's population was increasing and iron production was peaking. The building still houses the city's offices, police station and library. The business loop follows Jackson Street east to Division Street, where the street curves slightly and becomes Main Street. Bus. 
M-28 follows Main Street one block to the intersection with Teal Lake Avenue. Turning north, the trunkline follows Teal Lake Avenue through residential areas of town past the Negaunee Middle School and up over a hill. On the opposite side of the hill next to Teal Lake Bluff, the business loop intersects Arch Street, which carries traffic to Negaunee High School to the west or the football field complex to the east. Negaunee High School was the site of the former Mather B Mine Complex. The administration building for the mine was converted to its present educational use in 1986. Bus. M-28 continues along Teal Lake Avenue past the football field and under the LS&I tracks where it ends at another signalized intersection with US 41/M-28/LSCT by Teal Lake. The total length of Bus. M-28 is 4.873 miles (7.842 km). ### Traffic counts The Michigan Department of Transportation (MDOT) publishes traffic data for the highways it maintains. On Lakeshore Drive in Ishpeming, MDOT stated that 5,619 vehicles on average used the roadway daily in 2019. Along Division Street, traffic drops to 2,711 vehicles before increasing to 3,254 vehicles along the section on Silver Street in Negaunee. Traffic decreases along Jackson and Main streets to 1,762 vehicles on an average day. Traffic is heaviest along Teal Lake Avenue, at 6,810 vehicles. ## History The state highway system was created on May 13, 1913, with the passage of the State Reward Trunk Line Highway Act. The state first signposted these highways by July 1, 1919, and the roadways that make up Bus. M-28 were originally a portion of M-15. Later, when the United States Numbered Highway System was created on November 11, 1926, the highway was redesignated as a part of US 41 and part of M-28. The main highway was moved with the construction of a northerly bypass of Ishpeming and Negaunee in 1937. The business loop was not designated Bus. M-28 permanently and marked on state maps until 1958. It was internally designated US 41A/M-28A before being redesignated Alt. US 41/Alt. M-28 or Bus. US 41/Bus. M-28. This dual designation was later mirrored by the other Marquette County business route, Bus. US 41. When M-35 was routed through downtown Negaunee, it joined Bus. M-28 northward from the east fork of Silver Street on to US 41/M-28. Construction of the Empire Mine in 1963 necessitated the relocation of M-35 from Palmer to Negaunee. This routing was moved to bypass the city in 1968. From this point on, Bus. M-28 has not shared its routing with any other state trunklines. In 1969, the Michigan Department of State Highways petitioned the American Association of State Highway Officials (AASHO) to approve a Bus. US 41 designation for the trunkline. Action on the request was deferred by AASHO's U.S. Route Numbering Subcommittee, and then denied the following year. The western end of Bus. M-28 was rerouted on June 4, 1999, when the City of Ishpeming petitioned MDOT to reroute the highway along Lakeshore Drive to US 41/M-28. Previously, it ran along Greenwood Street and North Lake Road and met US 41/M-28 in the West Ishpeming neighborhood of Ishpeming Township. MDOT, in a partnership with the City of Negaunee, upgraded Teal Lake Avenue between Arch and Rock streets in a streetscaping project to provide a "pedestrian refuge area". This work entailed reconstruction of the retaining wall, curbing and gutters in 2005. Arch Street is the access to Negaunee High School, and this section of Bus. M-28 is near the athletic field complex in Negaunee. 
The project budget was \$120,200, with \$24,200 from the City of Negaunee. ## Major intersections ## See also - Bus. M-28 in Newberry - Bus. US 41 in Marquette, formerly also Bus. M-28
4,879,958
Artur Phleps
1,156,027,716
Waffen-SS officer
[ "1881 births", "1944 deaths", "Academic staff of Carol I National Defence University", "Austro-Hungarian Army officers", "Austro-Hungarian military personnel of World War I", "Commanders of the Order of the Star of Romania", "Gebirgsjäger of World War II", "German people who died in Soviet detention", "German prisoners of war in World War II held by the Soviet Union", "Grand Crosses of the Order of the Crown (Romania)", "Missing in action of World War II", "Nazi war criminals", "Officers of the Order of the Star of Romania", "People from Sibiu County", "Recipients of the Czechoslovak War Cross", "Recipients of the Gold German Cross", "Recipients of the Knight's Cross of the Iron Cross with Oak Leaves", "Recipients of the Order of the Yugoslav Crown", "Recipients of the clasp to the Iron Cross, 2nd class", "Romanian Land Forces generals", "Romanian people of German descent", "SS and Police Leaders", "SS-Obergruppenführer", "Waffen-SS personnel killed in action", "Yugoslavia in World War II" ]
Artur Gustav Martin Phleps (29 November 1881 – 21 September 1944) was an Austro-Hungarian, Romanian and German army officer who held the rank of SS-Obergruppenführer und General der Waffen-SS (lieutenant general) in the Waffen-SS during World War II. At the post-war Nuremberg trials, the Waffen-SS – of which Phleps was a senior officer – was declared to be a criminal organisation due to its major involvement in war crimes and crimes against humanity. An Austro-Hungarian Army officer before and during World War I, Phleps specialised in mountain warfare and logistics, and had been promoted to Oberstleutnant (lieutenant colonel) by the end of the war. During the interwar period he joined the Romanian Army, reaching the rank of General-locotenent (major general), and also became an adviser to King Carol II. After he spoke out against the government he was sidelined, and he eventually asked to be dismissed from the army. In 1941 he left Romania and joined the Waffen-SS as an SS-Standartenführer (colonel) under his mother's maiden name of Stolz. Seeing action on the Eastern Front as a regimental commander with the SS Motorised Division Wiking, he later raised and commanded the 7th SS Volunteer Mountain Division Prinz Eugen, raised the 13th Waffen Mountain Division of the SS Handschar (1st Croatian), and commanded the V SS Mountain Corps. Units under his command committed many crimes against the civilian population of the Independent State of Croatia, German-occupied territory of Serbia and Italian governorate of Montenegro. His final appointment was as plenipotentiary general in south Siebenbürgen (Transylvania) and the Banat, during which he organised the evacuation of the Volksdeutsche (ethnic Germans) of Siebenbürgen to the Reich. In addition to the Knight's Cross of the Iron Cross, Phleps was awarded the German Cross in Gold, and after he was killed in September 1944, he was awarded the Oak Leaves to his Knight's Cross. ## Early life Phleps was born in Birthälm (Biertan), near Hermannstadt in Siebenbürgen, then a part of the Austro-Hungarian Empire (modern-day Transylvania, Romania). At the time, Siebenbürgen was densely populated by ethnic Germans, commonly referred to as Transylvanian Saxons. He was the third son of a surgeon, Dr. Gustav Phleps, and Sophie (née Stolz), the daughter of a peasant. Both families had lived in Siebenbürgen for centuries. After finishing the Lutheran Realschule in Hermannstadt, Phleps entered the Imperial and Royal cadet school in Pressburg (modern-day Slovakia) in 1900, and on 1 November 1901 was commissioned as a Leutnant (lieutenant) in the 3rd Regiment of the Tiroler Kaiserjäger (mountain infantry). In 1903, Phleps was transferred to the 11th Feldjäger (rifle) Battalion in Güns (in modern-day Hungary), and in 1905 was accepted into the Theresian Military Academy in Wiener Neustadt. He completed his studies in two years, and was endorsed as suitable for service in the General Staff. Following promotion to Oberleutnant (first lieutenant) he was transferred to the staff of the 13th Infantry Regiment at Esseg in Slavonia, then to the 6th Infantry Division in Graz. This was followed by a promotion to Hauptmann (captain) in 1911, along with a position on the staff of the XV Army Corps in Sarajevo. There, he specialised in mobilisation and communications in the difficult terrain of Bosnia and Herzegovina. ## World War I At the outbreak of World War I, Phleps was serving with the staff of the 32nd Infantry Division in Budapest. 
His division was involved in the early stages of the Serbian campaign, during which Phleps was transferred to the operations staff of the Second Army. This Army was soon withdrawn from the Serbian front and deployed via the Carpathian Mountains to the Austro-Hungarian province of Galicia (modern-day Poland and Ukraine), to defend against a successful offensive by the Russian Imperial army. The Second Army continued to fight the Russians in and around the Carpathians through the winter of 1914–1915. In 1915 Phleps was again transferred, this time to Armeegruppe Rohr commanded by General der Kavallerie (General) Franz Rohr von Denta, which was formed in the Austrian Alps, in response to the Italian declaration of war in May 1915. Armeegruppe Rohr became the basis for the formation of the 10th Army, which was headquartered in Villach. Phleps subsequently became the deputy quartermaster of the 10th Army, responsible for organising the supply of the troops fighting the Italians in the mountains. On 1 August 1916, Phleps was promoted to Major. Later that month, King Ferdinand of Romania led the Kingdom of Romania in joining the Triple Entente, subsequently invading Phleps' homeland of Siebenbürgen. On 27 August, Phleps became the chief of staff of the 72nd Infantry Division, which was involved in Austro-Hungarian operations to repel the Romanian invasion. He remained in this theatre of operations for the next two years, ultimately serving as the chief quartermaster of the German 9th Army, and was awarded the Iron Cross 2nd Class, on 27 January 1917. In 1918 he returned to the mountains when he was transferred to Armeegruppe Tirol, and ended the war as an Oberstleutnant (lieutenant colonel) and chief quartermaster for the entire Alpine Front. ## Between the wars After the war the Austro-Hungarian Empire was dissolved, and Phleps returned to his homeland, which had become part of the Kingdom of Romania under the Treaty of Trianon. He joined the Romanian Army and was appointed commander of the Saxon National Guard, a militia formed of the German-speaking people of Siebenbürgen. In this role he opposed the Hungarian communist revolutionary government of Béla Kun, which fought against Romania in 1919. During a battle at the Tisza river against Kun's forces, Phleps disobeyed direct orders and was subsequently court-martialled. The trial concluded that he had saved the Romanian forces through his actions, and he was promoted to Oberst (colonel). He commanded the 84th Infantry Regiment, then joined the general army headquarters and started teaching logistics at the Romanian War Academy in Bucharest. He attended the V Army Corps staff college in Brașov, and published a book titled Logistics: Basics of Organisation and Execution in 1926, which became the standard work on logistics for the Romanian Army. Ironically, after the book was published, Phleps failed his first general's examination on the topic of logistics. He commanded various Romanian units, including the 1st Brigade of the vânători de munte (mountain ranger troops), while serving also as a military advisor to King Carol II in the 1930s. Phleps reached the rank of General-locotenent (major general) despite his reported disdain for the corruption, intrigue and hypocrisy of the royal court. After criticising the government's policy and publicly calling King Carol a liar when another general tried to twist his words, he was transferred to the reserves in 1940 and finally dismissed from service at his own request in 1941. 
## World War II ### SS Motorised Division Wiking In November 1940, with the support of the leader of the Volksgruppe in Rumänien (ethnic Germans in Romania), Andreas Schmidt, Phleps wrote to the key Waffen-SS recruiting officer SS-Brigadeführer und Generalmajor der Waffen SS (Brigadier) Gottlob Berger offering his services to the Third Reich. He subsequently asked for permission to leave Romania to join the Wehrmacht, and this was approved by the recently installed Romanian Conducător (leader), the dictator General Ion Antonescu. Phleps volunteered for the Waffen-SS instead, enlisting under his mother's maiden name of Stolz. According to the historian Hans Bergel, Phleps joined the Waffen-SS because Volksdeutsche were not permitted to join the Wehrmacht. He was appointed an SS-Standartenführer (colonel) by Reichsführer-SS Heinrich Himmler and joined the SS Motorised Division Wiking, where he commanded Dutch, Flemish, Danish, Norwegian, Swedish and Finnish volunteers. When Hilmar Wäckerle, the commander of SS-Regiment Westland, was killed in action near Lvov in late June 1941, Phleps took over command of that regiment. He distinguished himself in fighting at Kremenchuk and Dnipropetrovsk in the Ukraine, commanded his own Kampfgruppe, became a confidant of Generalmajor (Brigadier General) Hans-Valentin Hube, commander of the 16th Panzer Division, and was subsequently promoted to SS-Oberführer (senior colonel). In July 1941 he was awarded the 1939 clasp to his Iron Cross (1914) 2nd Class and then the Iron Cross (1939) 1st Class. ### 7th SS Volunteer Mountain Division Prinz Eugen On 30 December 1941, Generalfeldmarschall (Field Marshal) Wilhelm Keitel advised Himmler that Adolf Hitler had authorised the raising of a seventh Waffen-SS division from the Volksdeutsche (ethnic Germans) of Yugoslavia. In the meantime, Phleps reverted to his birth name from his mother's maiden name. Two weeks later, SS-Brigadeführer und Generalmajor der Waffen SS Phleps was selected to organise the new division. On 1 March 1942, the division was officially designated the SS-Freiwilligen-Division "Prinz Eugen". Phleps was promoted to SS-Gruppenführer und Generalleutnant der Waffen SS (major general) on 20 April 1942. After recruitment, formation and training in the Banat region in October 1942, the two regiments and supporting arms were deployed into the southwestern part of the German-occupied territory of Serbia as an anti-Partisan force. Headquartered in Kraljevo, with its two mountain infantry regiments centred on Užice and Raška, the division continued its training. Some artillery batteries, the anti-aircraft battalion, the motorcycle battalion and cavalry squadron continued to form in the Banat. During his time with the 7th SS Division, Phleps was referred to as "Papa Phleps" by his troops. In early October 1942, the division commenced Operation Kopaonik, targeting the Chetnik force of Major Dragutin Keserović in the Kopaonik Mountains. The operation ended with little success, since the Chetniks had forewarning of the operation and were able to avoid contact. After a quiet winter, in January 1943 Phleps deployed the division to the Independent State of Croatia (NDH) to participate in Case White. Between 13 February and 9 March 1943 he was responsible for the initial aspects of raising the 13th Waffen Mountain Division of the SS Handschar (1st Croatian) in the NDH in addition to his duties commanding the 7th SS Division. 
In his strongly apologetic history of the division, which he later commanded, Otto Kumm claims that the division captured Bihać and Bosanski Petrovac, killed over 2,000 Partisans and captured nearly 400 during Case White. After a short rest and refit in April, the division was committed to Case Black in May and June 1943, during which it advanced from the Mostar area into the Italian governorate of Montenegro killing, according to Kumm, 250 Partisans and capturing over 500. The historian Thomas Casagrande notes that all German units fighting partisans routinely counted the civilians they had murdered as partisans. Therefore, it can be assumed that the reported number of inflicted casualties included many civilians. The division played a decisive role during the fighting. Although Himmler had already planned to award Phleps the Knight's Cross of the Iron Cross for his role in organising the 7th SS Division, it was for the achievements of his division during Case Black that Phleps received the award. Phleps was also portrayed in the SS-magazine Das Schwarze Korps. He received the Knight's Cross in July 1943, and was also promoted to SS-Obergruppenführer und General der Waffen-SS (lieutenant general) and placed in command of the V SS Mountain Corps. In May 1943, Phleps, who had a reputation for forthright speech, became frustrated by the failure of his Italian allies to cooperate with German operations. During a meeting with his Italian counterpart in Podgorica, Montenegro, Phleps called the Italian corps commander General Ercole Roncaglia a "lazy macaroni". Phleps scolded his Wehrmacht interpreter, Leutnant Kurt Waldheim, for toning down his language, saying "Listen Waldheim, I know some Italian and you are not translating what I am telling this so-and-so". On another occasion, he threatened to shoot Italian sentries who were delaying his passage through a checkpoint. On 15 May 1943, Phleps handed over command of the division to SS-Brigadeführer und Generalmajor der Waffen SS Karl von Oberkamp. While under Phleps' command, the division committed many crimes against the civilian population of the NDH, especially during Case White and Case Black. These included "burning villages, massacre of inhabitants, torture and murder of captured partisans", and the division thereby developed a distinctive reputation for cruelty. These charges have been denied by Kumm, among others. Still, the divisional orders routinely called for the annihilation of the hostile civilian population, and documents from the Waffen-SS themselves show that these orders were regularly put into practice. For example, Himmler's police representative in the NDH, SS-Brigadeführer und Generalmajor der Polizei Konstantin Kammerhofer, reported on 15 July 1943 that units of the 7th SS Division had shot the Muslim population of Kosutica, about 40 men, women, and children gathered in a "church". The division claimed that "bandits" in the village had opened fire, but the police could not discover any traces of combat. Such incidents, which jeopardized the plan to raise a Muslim SS division, led to a dispute between Kammerhofer and Phleps' successor Oberkamp. Himmler ordered Phleps to intervene, and he reported on 7 September 1943 that he could not discover anything wrong with the shootings in Kosutica and that Kammerhofer and Oberkamp had resolved their dispute. 
The war crimes committed by the 7th SS Division became the subject of international controversy when Waldheim's service in the Balkans became public in the mid-1980s, during his successful bid for the Austrian presidency. ### V SS Mountain Corps The formations under the command of the V SS Mountain Corps varied during Phleps' command. In July 1944, it consisted of the 118th Jäger Division and 369th (Croatian) Infantry Division in addition to the 7th SS and 13th SS divisions. Throughout Phleps' command, the corps was under the overall control of the 2nd Panzer Army and conducted anti-Partisan operations throughout the NDH and Montenegro. These operations included Operations Kugelblitz (ball lightning) and Schneesturm (blizzard), which were part of a major offensive in eastern Bosnia in December 1943, but they were only a limited success. Phleps had met personally with Hitler to discuss the planning for Operation Kugelblitz. Due to the unreliable nature of the troops loyal to the NDH government, Phleps utilised Chetnik forces as auxiliaries, stating to a visiting officer that he could not disarm the Chetniks unless the NDH government provided him with the same strength in reliable troops. In January 1944, due to fears that the Western Allies would invade along the Dalmatian coastline and islands, V SS Mountain Corps forced the mass evacuation of male civilians between the ages of 17 and 50 from that area. Phleps was criticised by both NDH and German authorities for the harshness with which the evacuation was carried out. During the first six months of 1944, elements of the V SS Mountain Corps were involved in Operation Waldrausch (Forest Fever) in central Bosnia, Operation Maibaum (Maypole) in eastern Bosnia, and Operation Rösselsprung (Knight's Move), the attempt to capture or kill the Partisan leader Josip Broz Tito. On 20 June 1944, Phleps was awarded the German Cross in Gold. In September, he was appointed plenipotentiary general of German occupation troops in south Siebenbürgen and the Banat, organising the flight of the Volksdeutsche of north Siebenbürgen ahead of the advancing Soviet Red Army. ## Death and aftermath Following the 23 August 1944 King Michael's Coup, while en route to a meeting with Himmler in Berlin, Phleps and his entourage made a detour to reconnoitre the situation near Arad, Romania, after receiving reports of Soviet advances in that area. Accompanied only by his adjutant and his driver, and unaware of the presence of Red Army units in the vicinity, he entered Șimand, a village approximately 20 kilometres (12 mi) north of Arad, on the afternoon of 21 September 1944. Soviet forces were already in the village, and Phleps and his men were captured and brought in for interrogation. When the building in which they were held was attacked by German aircraft later that afternoon, the prisoners tried to escape and were shot by their guards. Bergel suspects that Phleps had been set up by Hungarian army officers who had found out that he knew of plans for Hungary to switch sides as Romania had done shortly before. Phleps' personal effects, including his identity card, tags and decorations, were found by a Hungarian patrol and handed over to German authorities on 29 September 1944. Phleps had been listed as missing in action since 22 September 1944 when he did not show up for his meeting with Himmler, who had issued a warrant for his arrest. 
Phleps was posthumously awarded the Oak Leaves to his Knight's Cross on 24 November 1944, which was presented to his son, SS-Obersturmführer (First Lieutenant) Dr.med. Reinhart Phleps, a battalion doctor serving in the 7th SS Division. Soon after his death, the 13th Gebirgsjäger Regiment of the 7th SS Division was given the cuff title Artur Phleps in his honour. Phleps was married; his wife's name was Grete and in addition to their son Reinhart, they had a daughter, Irmingard. One of Phleps' brothers became a doctor, and the other was a professor at the Danzig technical university, now Gdańsk University of Technology. ## Accusations of war crimes Although no longer in command of the division, Phleps was accused by the Yugoslav authorities of war crimes in association with the atrocities committed by 7th SS Division in the area of Nikšić in Montenegro during Case Black. At the Nuremberg trials on 6 August 1946, a document from the Yugoslav State Commission for Crimes of Occupiers and their Collaborators regarding the crimes of the 7th SS Division was quoted as follows: > At the end of May 1943 the division came to Montenegro to the area of Niksic in order to take part in the fifth enemy offensive in conjunction with the Italian troops. [...] The officers and men of the SS division Prinz Eugen committed crimes of an outrageous cruelty on this occasion. The victims were shot, slaughtered and tortured, or burnt to death in burning houses. [...] It has been established from the investigations entered upon that 121 persons, mostly women, and including 30 persons aged 60–92 years and 29 children of ages ranging from 6 months to 14 years, were executed on this occasion in the horrible manner narrated above. The villages [and then follows the list of the villages] were burnt down and razed to the ground. [...] For all of these most serious War Crimes those responsible besides the actual culprits—the members of the SS Division Prinz Eugen—are all superior and all subordinate commanders as the persons issuing and transmitting the orders for murder and devastation. Among others the following war criminals are known: SS Gruppenfuehrer and Lieutenant General of the Waffen-SS Phleps; Divisional Commander, Major General of the Waffen-SS Karl von Oberkamp; Commander of the 13th Regiment, later Divisional Commander, Major General Gerhard Schmidhuber... The post-war Nuremberg trials made the declaratory judgement that the Waffen-SS was a criminal organisation due to its major involvement in war crimes and crimes against humanity, including the killing of prisoners-of-war and atrocities committed in occupied countries. 
## Awards Phleps received the following awards during his service: - Austrian Military Merit Medal (Signum Laudis) - in Bronze with war decoration and swords on 13 October 1914 - in Silver with war decoration on 15 March 1916 - Austrian Military Merit Cross 3rd Class with war decoration and swords on 3 July 1915 - Decoration for Services to the Red Cross 2nd Class with war decoration on 23 October 1915 - Prussian Iron Cross (1914) 2nd Class on 27 January 1917 - Austrian Order of the Iron Crown 3rd Class with war decoration and swords on 24 April 1917 - Officer's Cross of the Order of Franz Joseph with war decoration and swords on 23 July 1918 - Order of the Star of Romania - Officer's Cross with swords on ribbon of military merit on 12 March 1920 - Commander's Cross on 28 February 1933 - Czechoslovak War Cross on 1 March 1928 - Order of the Yugoslav Crown 2nd Class in 1933 - Bulgarian Order of Military Merit 2nd Class on 26 April 1934 - Romanian Order of the Crown - Commander on 1 January 1927 - Grand Cross on 10 May 1939 - Clasp to the Iron Cross (1939) 2nd Class on 10 July 1941 - Iron Cross 1st Class on 26 July 1941 - Infantry Assault Badge in Bronze on 7 November 1943 - German Cross in Gold on 20 June 1944 as SS-Obergruppenführer und General der Waffen-SS in the V SS Mountain Corps - Knight's Cross of the Iron Cross with Oak Leaves - Knight's Cross on 4 July 1943 as SS-Brigadeführer und Generalmajor der Waffen SS and commander of SS-Division "Prinz Eugen" - 670th Oak Leaves on 24 November 1944 (posthumously) as SS-Obergruppenführer und General der Waffen-SS, commanding general of the V SS Mountain Corps and Higher SS and Police Leader as well as commander-in-chief in Siebenbürgen.
34,453,297
Missing My Baby
1,142,829,442
1992 song performed by Selena
[ "1990s ballads", "1992 songs", "Contemporary R&B ballads", "Selena songs", "Songs written by A. B. Quintanilla" ]
"Missing My Baby" is a song released by American singer Selena on her third studio album Entre a Mi Mundo (1992). It was composed by A.B. Quintanilla—her brother and principal record producer, whose intention was to showcase Selena's diverse musical abilities. Selena included it on the album to help her cross over into the English-speaking market. Critics praised her emotive enunciation in the song. After Selena was murdered in 1995, a remix version by R&B group Full Force appeared on her fifth studio album Dreaming of You, which was originally intended to be her full-length English-language debut album. A posthumous music video made for VH1 was released to promote the triple box-set Anthology (1998). "Missing My Baby" is a mid-tempo R&B ballad influenced by urban and soul music. The lyrics describe the love felt by the narrator, who reminisces of rhapsodic events she has shared with her lover. In some parts of the song, the narrator experiences loneliness and anguish because of the absence of her boyfriend. Although never intended to be released as a single, the track peaked at number 22 on the US Rhythmic Top 40 chart in 1995 after Selena's death. ## Background and development "Missing My Baby" was written by Selena's brother and the song's principal record producer A.B. Quintanilla III, who wrote it in a week, and three weeks later, in late 1991, it was recorded at Sun Valley, Los Angeles. It was created for Selena's 1992 album Entre a Mi Mundo, to showcase her diverse musical abilities and to add to the album's variety of musical styles, which include Mexican pop and traditional Mexican songs, whereas "Missing My Baby" is in the style of contemporary R&B. After the release of Selena's full-length Spanish albums Selena (1989) and Ven Conmigo (1990), which included Tejano and other Mexican pop styles, she decided that her next recording would feature an English-language song. She believed that such a song would convince EMI Records' chairman Charles Koppelman that she was ready to release a crossover album. EMI had wanted her to acquire a larger fan base before launching her crossover career. In spite of this, Selena included the song on Entre a Mi Mundo. In 1995, during the recording sessions for Dreaming of You, which was intended to be Selena's full-length English-language debut album, EMI Latin wanted R&B group Full Force to perform a remixed version of the song for the album after the group saw a video of Selena's live performance and expressed interest in working with her. EMI then flew Quintanilla III and Selena out to meet with the group at their Brooklyn recording studio. Full Force agreed to add backing vocals, which they recorded the majority of in two days to replace Selena's backing vocals. Selena also re-recorded her lead vocals on the final verse and choruses for a new key change on the remix version, which EMI approved for the album. After Selena was murdered by Yolanda Saldívar on March 31, 1995, Full Force agreed to complete their production on the remix version, including singing the remaining backing vocal parts that had originally been intended for Selena to record. The group invited the Quintanilla family to hear the finished result in the studio, which left them in tears. ## Composition "Missing My Baby" is a mid-tempo R&B ballad with influences of urban and soul music. It is in the key of D major, at 144 beats per minute in common time. The recording incorporates melisma, with sung poetry during the downtempo part of the song. 
The melody is accompanied by backing vocals, and instrumentation is provided by an electric piano, drums, a keyboard, a synthesizer and strings. Contemporary music critics praised Selena's emotive enunciation, which emphasized the song's title and central theme. R&B group Full Force were the backing vocalists for the remix version of "Missing My Baby". J.R. Reynolds, formerly of Billboard, called "Missing My Baby" a "dreamy ballad" with an "R&B-styled melody under Selena's pop vocals". Ramiro Burr of the Austin American-Statesman described it as a soul ballad. Jerry Johnston of the Deseret News thought that Selena displayed a "Leslie Gore [sic] baby-voice" in "Missing My Baby" and that she "displays a wonderful suppleness in her voice". The Virginian-Pilot said that the song was built on hooks that recall Diana Ross's "Missing You", which is a tribute to Marvin Gaye, and the Beach Boys' "Good to My Baby". The song begins with a drum solo before the other instruments enter to form the musical foundation. Selena sings to her absent lover about how much she misses him, saying that he is "always on [her] mind" and that she feels lonely when he is not with her. Three times she sings, "I often think of the happy times we spent together/And I just can't wait to tell you that I love you". In the chorus, she sings of wanting to hold him tight and feel his heartbeat. ## Critical reception and legacy "Missing My Baby" received positive reviews from critics. Vibe magazine reported that Full Force was awarded gold and platinum discs for "Missing My Baby" and "Techno Cumbia", and described "Missing My Baby" as giving a "hint of her aspirations". After the remix version appeared on the 1995 album Dreaming of You, the Hi XD said that it was the best English-language song on the album. Chris Riemenschneider and John T. Davis of the Austin American-Statesman wrote that "Missing My Baby" "can sound as fluffy as the Big M's 'Crazy for You'". Cary Clack of the San Antonio Express-News wrote that "Missing My Baby" was played on non-Tejano radio stations and that he thought it might become a posthumous hit, while commenting that the recording "displays [Selena's] wonderful vocal and emotional range". However, Mario Tarradell of The Dallas Morning News believed that "Missing My Baby" and other tracks were added to Entre a Mi Mundo "for good measure". "Missing My Baby" was one of the first Selena songs to be played on radio stations after she was murdered by Yolanda Saldívar, her friend and former manager of her Selena Etc. boutiques. A music video of the song, incorporating footage from Selena's personal home videos, was released for VH1 in 1998 to promote the triple box-set Anthology. Billboard reported that the video was the 47th most played music video for that channel in the week ending 5 April 1998. ## Chart performance ## Personnel Credits from the album's liner notes: - Selena – lead vocals, backing vocals - Full Force – remix and additional production, backing vocals - Ricky Vela – keyboards - Suzette Quintanilla – drums - A.B. Quintanilla – writing, production
54,057,829
Colin Robert Chase
1,153,701,231
American academic (1935–1984)
[ "1935 births", "1984 deaths", "Academic staff of the University of Toronto", "American academics of English literature", "Anglo-Saxon studies scholars", "Deaths from cancer", "Harvard University alumni", "Johns Hopkins University alumni", "Place of death missing", "Saint Louis University alumni", "University of Toronto alumni", "Writers from Denver" ]
Colin Robert Chase (February 5, 1935 – October 13, 1984) was an American academic. An associate professor of English at the University of Toronto, he was known for his contributions to the studies of Old English and Anglo-Latin literature. His best-known work, The Dating of Beowulf, challenged the accepted orthodoxy of the dating of the Anglo-Saxon poem Beowulf—then thought to be from the latter half of the eighth century—and left behind what was described in A Beowulf Handbook as "a cautious and necessary incertitude". Born in Denver, Chase was one of three sons of a newspaper executive and a Pulitzer Prize-winning playwright, Mary Coyle Chase. Chase's two brothers became actors; he considered such a career, but ultimately studied English literature, classics, and philosophy. He received his Bachelor of Arts from Harvard University, Master of Arts from Saint Louis and Johns Hopkins Universities, and Ph.D. from the University of Toronto in 1971, the same year the university named him an assistant professor. In addition to The Dating of Beowulf, Chase penned Two Alcuin Letter-Books—a scholarly collection of twenty-four letters by the eighth-century scholar Alcuin. He also wrote some eight articles and chapters, contributed to the Dictionary of the Middle Ages, and for nearly a decade wrote the Beowulf section of "This Year's Work in Old English Studies" for the Old English Newsletter. Chase died of cancer in 1984, shortly before his anticipated promotion to full professor. ## Early life and education Colin Robert Chase was born in Denver, Colorado, on February 5, 1935. His father, Robert Lamont Chase, was a newspaper executive, and his mother, Mary Coyle Chase, a playwright who went on to win the Pulitzer Prize for Drama in 1945 for her play, Harvey. Colin Chase had two brothers, Michael Lamont Chase and Barry Jerome "Jerry" Chase. All three pursued an interest in acting. Michael Chase attended the Carnegie Institute of Technology School of Drama, and was a member of the cast of the Barter Theatre in Abingdon, Virginia. Jerry Chase acted in plays and movies, including one of his mother's plays at the age of 14, and wrote the play Cinderella Wore Combat Boots. Colin Chase, meanwhile, nearly pursued an acting career, and would later perform in campus stage productions. Chase grew up in Denver, where he attended Teller Elementary School. The success of his mother's play Harvey led to some bullying in fourth grade, prompting his mother to write a guest column about it in the Dunkirk Evening Observer. He obtained his Bachelor of Arts from Harvard University in 1956, and studied classics and philosophy for five years at a Jesuit seminary. In 1962 he received a Master of Arts from Saint Louis University, and in 1964 he received a second master's degree from Johns Hopkins University; he matriculated at the University of Toronto the same year, became a part-time instructor there in 1967, and completed his Ph.D. in 1971. His dissertation was entitled Panel Structure in Old English Poetry. ## Career Chase became an assistant professor at the University of Toronto in 1971, the same year he completed his Ph.D. Four years later he was promoted to associate professor. At the university he taught a wide variety of classes and had many doctoral students. He was a faculty member of St. Michael's College and the Centre for Medieval Studies; from 1977 until 1984, he chaired the Centre's Medieval Latin Committee. 
Much of Chase's work was on Old English and Anglo-Latin literature, and he focused his research on the pre-conquest literature of England. He was particularly known for his 1981 edited collection The Dating of Beowulf, and from 1976 served as the chief reviewer of the Beowulf section of "The Year's Work in Old English Studies" in the Old English Newsletter. Chase's other major publication was a 1975 scholarly edition of Two Alcuin Letter-Books, which collected twenty-four letters written by the eighth-century scholar Alcuin. Collected for Wulfstan, Archbishop of York, two centuries after Alcuin's death, the letters were preserved in a manuscript from the Cotton collection at the British Library, and many were apparently intended as didactic messages rather than personal correspondence; others were "model letters" including 'thank you' notes and 'get well' cards, likely to help students learn how to compose letters in Latin. Chase also wrote eight articles, and contributed to three videos made by the Toronto Media Centre, the most popular of which was The Sutton Hoo ship-burial, about the Anglo-Saxon ship-burial unearthed at Sutton Hoo in Suffolk. He additionally served as an administrative committee member at the early stages of the project to revise Jack Ogilvy's Books Known to the English and create a reference work mapping the sources that influenced the literary culture of Anglo-Saxon England. The Dating of Beowulf was credited with challenging the accepted orthodoxy over the date that the epic poem was composed. The Old English poem, surviving in a single manuscript from the turn of the millennium, attracted considerable interest after its first modern publication in 1815, and spawned what was termed in A Beowulf Handbook a "bewildering debate about perhaps the most vexing problems in Beowulf scholarship: when was the poem composed, where, by whom, for whom?" Chase's introduction, "Opinions on the Date of Beowulf, 1815–1980"—which one reviewer termed "an essay commendable both for its balance and its economy"—traced a century and a half of academic discourse over the first of these questions, which, having started with an initial tentative dating of the poem to shortly after the fourth century, had by 1980 consistently settled on a date in the latter half of the eighth century. Each chapter used a different approach, such as historical, metrical, stylistic, and codicological, to try to date the poem. Chase's attempt at dating looked at the poem's balanced attitude towards heroic culture, reflecting both appreciation and admonition, to suggest that "Beowulf was written at a time when heroic culture could be treated fully and positively but without romanticizing, by an author neither afraid nor infatuated." Given the paucity of material with which to trace the evolution of historical perspectives, Chase turned to the better-known lives of the saints from the period. Seeing early lives which appeared "to avoid and even suppress significant exploitation" of heroic culture and values, and later lives that moved "towards a celebration of heroic values in a way that has been fully integrated with Anglo-Saxon culture", Chase suggested that "Beowulf is likely to have been written neither early, in the eighth century, nor late, in the tenth, but in the rapidly changing and chaotic ninth". Other chapters, meanwhile, by scholars such as Peter Clemoes and Kevin Kiernan, suggested a date for the poem as early as the eighth century, and as late as the eleventh. 
In the book's wake came what was described in A Beowulf Handbook as "a cautious and necessary incertitude". An anonymous reviewer of the book termed it "one of the most important inconclusions in the study of Old English", and declared that "henceforth every discussion of the poem and its period will begin with reference to this volume." Chase died in 1984, while his promotion to full professor was underway. At the time he was working on a study of the lives of the saints and had started a new series of editions of the lives of the pre-conquest saints. The scholar Paul E. Szarmach wrote that Chase "taught us much by his scholarship and by his personal example, and we are in great measure diminished". The Centre for Medieval Studies at the University of Toronto, matched by the Ontario Student Opportunity Trust Fund, awards the Colin Chase Memorial Bursary each year in Chase's memory. The scholarship goes to "a graduate student in the Centre for Medieval Studies, on the basis of academic excellence and financial need". ## Personal life and death Chase had a wife, Joyce, and five children: Deirdre, Robert, Tim, Mary, and Patrick. He was a deacon in the Roman Catholic Church, and participated in its training program. He died of cancer in 1984. His wife died in 2003, also of cancer. ## Publications ### Books ### Chapters ### Articles ### Reviews ### Other - This Year's Work in Old English Studies - Dictionary of the Middle Ages
12,321,977
Shale oil extraction
1,172,557,215
Process for extracting oil from oil shale
[ "Arab inventions", "Oil shale technology", "Petroleum production" ]
Shale oil extraction is an industrial process for unconventional oil production. This process converts kerogen in oil shale into shale oil by pyrolysis, hydrogenation, or thermal dissolution. The resultant shale oil is used as fuel oil or upgraded to meet refinery feedstock specifications by adding hydrogen and removing sulfur and nitrogen impurities. Shale oil extraction is usually performed above ground (ex situ processing) by mining the oil shale and then treating it in processing facilities. Other modern technologies perform the processing underground (on-site or in situ processing) by applying heat and extracting the oil via oil wells. The earliest description of the process dates to the 10th century. In 1684, Great Britain granted the first formal extraction process patent. Extraction industries and innovations became widespread during the 19th century. The industry shrank in the mid-20th century following the discovery of large reserves of conventional oil, but high petroleum prices at the beginning of the 21st century have led to renewed interest, accompanied by the development and testing of newer technologies. As of 2010, major long-standing extraction industries are operating in Estonia, Brazil, and China. Its economic viability usually requires a lack of locally available crude oil. National energy security issues have also played a role in its development. Critics of shale oil extraction pose questions about environmental management issues, such as waste disposal, extensive water use, waste water management, and air pollution. ## History In the 10th century, the Assyrian physician Masawaih al-Mardini (Mesue the Younger) wrote of his experiments in extracting oil from "some kind of bituminous shale". The first shale oil extraction patent was granted by the British Crown in 1684 to three people who had "found a way to extract and make great quantities of pitch, tarr, and oyle out of a sort of stone". Modern industrial extraction of shale oil originated in France with the implementation of a process invented by Alexander Selligue in 1838, improved upon a decade later in Scotland using a process invented by James Young. During the late 19th century, plants were built in Australia, Brazil, Canada, and the United States. The 1894 invention of the Pumpherston retort, which was much less reliant on coal heat than its predecessors, marked the separation of the oil shale industry from the coal industry. China (Manchuria), Estonia, New Zealand, South Africa, Spain, Sweden, and Switzerland began extracting shale oil in the early 20th century. However, crude oil discoveries in Texas during the 1920s and in the Middle East in the mid 20th century brought most oil shale industries to a halt. In 1944, the US recommenced shale oil extraction as part of its Synthetic Liquid Fuels Program. These industries continued until oil prices fell sharply in the 1980s. The last oil shale retort in the US, operated by Unocal Corporation, closed in 1991. The US program was restarted in 2003, followed by a commercial leasing program in 2005 permitting the extraction of oil shale and oil sands on federal lands in accordance with the Energy Policy Act of 2005. As of 2010, shale oil extraction is in operation in Estonia, Brazil, and China. In 2008, their industries produced about 930,000 tonnes (17,700 barrels per day) of shale oil. 
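The production figures above can be sanity-checked with a short conversion. The sketch below is illustrative only: the shale-oil density it assumes (about 0.91 tonnes per cubic metre, roughly 7 barrels per tonne) is a typical value and not a figure taken from the sources cited here.

```python
# Rough cross-check of the production figure quoted above: converting an
# annual tonnage of shale oil into an average output in barrels per day.
# The density used here is an assumed typical value for shale oil.

TONNES_PER_YEAR = 930_000           # combined 2008 output quoted above
ASSUMED_DENSITY_T_PER_M3 = 0.91     # assumption: typical shale oil density
BARRELS_PER_M3 = 6.2898             # standard oil-barrel volume conversion

def tonnes_per_year_to_bbl_per_day(tonnes: float, density_t_per_m3: float) -> float:
    """Convert an annual mass of oil into an average daily volume in barrels."""
    cubic_metres = tonnes / density_t_per_m3
    barrels = cubic_metres * BARRELS_PER_M3
    return barrels / 365.0

print(round(tonnes_per_year_to_bbl_per_day(TONNES_PER_YEAR, ASSUMED_DENSITY_T_PER_M3)))
# ≈ 17,600 barrels per day, consistent with the ~17,700 bbl/d cited above.
```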
Australia, the US, and Canada have tested shale oil extraction techniques via demonstration projects and are planning commercial implementation; Morocco and Jordan have announced their intent to do the same. Only four processes are in commercial use: Kiviter, Galoter, Fushun, and Petrosix. ## Processing principles Shale oil extraction process decomposes oil shale and converts its kerogen into shale oil—a petroleum-like synthetic crude oil. The process is conducted by pyrolysis, hydrogenation, or thermal dissolution. The efficiencies of extraction processes are often evaluated by comparing their yields to the results of a Fischer Assay performed on a sample of the shale. The oldest and the most common extraction method involves pyrolysis (also known as retorting or destructive distillation). In this process, oil shale is heated in the absence of oxygen until its kerogen decomposes into condensable shale oil vapors and non-condensable combustible oil shale gas. Oil vapors and oil shale gas are then collected and cooled, causing the shale oil to condense. In addition, oil shale processing produces spent oil shale, which is a solid residue. Spent shale consists of inorganic compounds (minerals) and char—a carbonaceous residue formed from kerogen. Burning the char off the spent shale produces oil shale ash. Spent shale and shale ash can be used as ingredients in cement or brick manufacture. The composition of the oil shale may lend added value to the extraction process through the recovery of by-products, including ammonia, sulfur, aromatic compounds, pitch, asphalt, and waxes. Heating the oil shale to pyrolysis temperature and completing the endothermic kerogen decomposition reactions require a source of energy. Some technologies burn other fossil fuels such as natural gas, oil, or coal to generate this heat and experimental methods have used electricity, radio waves, microwaves, or reactive fluids for this purpose. Two strategies are used to reduce, and even eliminate, external heat energy requirements: the oil shale gas and char by-products generated by pyrolysis may be burned as a source of energy, and the heat contained in hot spent oil shale and oil shale ash may be used to pre-heat the raw oil shale. For ex situ processing, oil shale is crushed into smaller pieces, increasing surface area for better extraction. The temperature at which decomposition of oil shale occurs depends on the time-scale of the process. In ex situ retorting processes, it begins at 300 °C (570 °F) and proceeds more rapidly and completely at higher temperatures. The amount of oil produced is the highest when the temperature ranges between 480 and 520 °C (900 and 970 °F). The ratio of oil shale gas to shale oil generally increases along with retorting temperatures. For a modern in situ process, which might take several months of heating, decomposition may be conducted at temperatures as low as 250 °C (480 °F). Temperatures below 600 °C (1,110 °F) are preferable, as this prevents the decomposition of limestone and dolomite in the rock and thereby limits carbon dioxide emissions and energy consumption. Hydrogenation and thermal dissolution (reactive fluid processes) extract the oil using hydrogen donors, solvents, or a combination of these. Thermal dissolution involves the application of solvents at elevated temperatures and pressures, increasing oil output by cracking the dissolved organic matter. Different methods produce shale oil with different properties. 
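The two rules of thumb above—reporting extraction efficiency as a share of the Fischer assay yield, and choosing a retorting temperature that maximises oil yield while avoiding carbonate decomposition—can be expressed compactly. The following is a minimal sketch; the function names and the example yields are illustrative, not part of any standard assay procedure.

```python
# Sketch of the evaluation conventions described above: (1) process yield is
# commonly reported as a percentage of the Fischer assay result, and (2) ex
# situ retorting temperatures are judged against the onset, peak-yield, and
# carbonate-decomposition thresholds quoted in the text.

PEAK_YIELD_RANGE_C = (480, 520)   # range quoted above for maximum oil yield
CARBONATE_LIMIT_C = 600           # above this, limestone and dolomite decompose

def fischer_assay_efficiency(actual_yield: float, fischer_assay_yield: float) -> float:
    """Return process yield as a percentage of the Fischer assay result."""
    return 100.0 * actual_yield / fischer_assay_yield

def retort_temperature_notes(temp_c: float) -> str:
    """Classify an ex situ retorting temperature against the figures above."""
    if temp_c < 300:
        return "below the ~300 °C onset of kerogen decomposition in ex situ retorts"
    if temp_c > CARBONATE_LIMIT_C:
        return "above 600 °C: carbonate decomposition adds CO2 and wastes energy"
    lo, hi = PEAK_YIELD_RANGE_C
    if lo <= temp_c <= hi:
        return "within the 480-520 °C window of maximum oil yield"
    return "decomposition proceeds, but outside the peak-yield window"

# Example: a hypothetical retort recovering 85 litres of oil per tonne of shale
# whose Fischer assay is 100 litres per tonne, operated at 500 °C.
print(fischer_assay_efficiency(85, 100))   # 85.0 (%)
print(retort_temperature_notes(500))
```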
## Classification of extraction technologies Industry analysts have created several classifications of the technologies used to extract shale oil from oil shale.

- By process principles: Based on the treatment of raw oil shale by heat and solvents, the methods are classified as pyrolysis, hydrogenation, or thermal dissolution.
- By location: A frequently used distinction considers whether processing is done above or below ground, and classifies the technologies broadly as ex situ (displaced) or in situ (in place). In ex situ processing, also known as above-ground retorting, the oil shale is mined either underground or at the surface and then transported to a processing facility. In contrast, in situ processing converts the kerogen while it is still in the form of an oil shale deposit, after which the resulting oil is extracted via oil wells, where it rises in the same way as conventional crude oil. Unlike ex situ processing, it does not involve mining or spent oil shale disposal aboveground, as spent oil shale stays underground.
- By heating method: The method of transferring heat from combustion products to the oil shale may be classified as direct or indirect. Methods that allow combustion products to contact the oil shale within the retort are classified as direct, while methods that burn materials external to the retort to heat another material that contacts the oil shale are described as indirect.
- By heat carrier: Based on the material used to deliver heat energy to the oil shale, processing technologies have been classified into gas heat carrier, solid heat carrier, wall conduction, reactive fluid, and volumetric heating methods. Heat carrier methods can be sub-classified as direct or indirect, so extraction technologies can be cross-classified by heating method, heat carrier and location (in situ or ex situ).
- By raw oil shale particle size: The various ex situ processing technologies may be differentiated by the size of the oil shale particles that are fed into the retorts. As a rule, gas heat carrier technologies process oil shale lumps varying in diameter from 10 to 100 millimeters (0.4 to 3.9 in), while solid heat carrier and wall conduction technologies process fines, which are particles less than 10 millimeters (0.4 in) in diameter.
- By retort orientation: "Ex-situ" technologies are sometimes classified as vertical or horizontal. Vertical retorts are usually shaft kilns where a bed of shale moves from top to bottom by gravity. Horizontal retorts are usually horizontal rotating drums or screws where shale moves from one end to the other. As a general rule, vertical retorts process lumps using a gas heat carrier, while horizontal retorts process fines using a solid heat carrier.
- By complexity of technology: In situ technologies are usually classified either as true in situ processes or modified in situ processes. True in situ processes do not involve mining or crushing the oil shale. Modified in situ processes involve drilling and fracturing the target oil shale deposit to create voids in the deposit. The voids enable a better flow of gases and fluids through the deposit, thereby increasing the volume and quality of the shale oil produced.

## Ex situ technologies ### Internal combustion Internal combustion technologies burn materials (typically char and oil shale gas) within a vertical shaft retort to supply heat for pyrolysis.
Typically raw oil shale particles between 12 millimetres (0.5 in) and 75 millimetres (3.0 in) in size are fed into the top of the retort and are heated by the rising hot gases, which pass through the descending oil shale, thereby causing decomposition of the kerogen at about 500 °C (932 °F) . Shale oil mist, evolved gases and cooled combustion gases are removed from the top of the retort then moved to separation equipment. Condensed shale oil is collected, while non-condensable gas is recycled and used to carry heat up the retort. In the lower part of the retort, air is injected for the combustion which heats the spent oil shale and gases to between 700 °C (1,292 °F) and 900 °C (1,650 °F). Cold recycled gas may enter the bottom of the retort to cool the shale ash. The Union A and Superior Direct processes depart from this pattern. In the Union A process, oil shale is fed through the bottom of the retort and a pump moves it upward. In the Superior Direct process, oil shale is processed in a horizontal, segmented, doughnut-shaped traveling-grate retort. Internal combustion technologies such as the Paraho Direct are thermally efficient, since combustion of char on the spent shale and heat recovered from the shale ash and evolved gases can provide all the heat requirements of the retort. These technologies can achieve 80–90% of Fischer assay yield. Two well-established shale oil industries use internal combustion technologies: Kiviter process facilities have been operated continuously in Estonia since the 1920s, and a number of Chinese companies operate Fushun process facilities. Common drawbacks of internal combustion technologies are that the combustible oil shale gas is diluted by combustion gases and particles smaller than 10 millimeters (0.4 in) can not be processed. Uneven distribution of gas across the retort can result in blockages when hot spots cause particles to fuse or disintegrate. ### Hot recycled solids Hot recycled solids technologies deliver heat to the oil shale by recycling hot solid particles—typically oil shale ash. These technologies usually employ rotating kiln or fluidized bed retorts, fed by fine oil shale particles generally having a diameter of less than 10 millimeters (0.4 in); some technologies use particles even smaller than 2.5 millimeters (0.10 in). The recycled particles are heated in a separate chamber or vessel to about 800 °C (1,470 °F) and then mixed with the raw oil shale to cause the shale to decompose at about 500 °C (932 °F). Oil vapour and shale oil gas are separated from the solids and cooled to condense and collect the oil. Heat recovered from the combustion gases and shale ash may be used to dry and preheat the raw oil shale before it is mixed with the hot recycle solids. In the Galoter and Enefit processes, the spent oil shale is burnt in a separate furnace and the resulting hot ash is separated from the combustion gas and mixed with oil shale particles in a rotating kiln. Combustion gases from the furnace are used to dry the oil shale in a dryer before mixing with hot ash. The TOSCO II process uses ceramic balls instead of shale ash as the hot recycled solids. The distinguishing feature of the Alberta Taciuk Process (ATP) is that the entire process occurs in a single rotating multi–chamber horizontal vessel. Because the hot recycle solids are heated in a separate furnace, the oil shale gas from these technologies is not diluted with combustion exhaust gas. 
Another advantage is that there is no limit on the smallest particles that the retort can process, thus allowing all the crushed feed to be used. One disadvantage is that more water is used to handle the resulting finer shale ash. ### Conduction through a wall These technologies transfer heat to the oil shale by conducting it through the retort wall. The shale feed usually consists of fine particles. Their advantage lies in the fact that retort vapors are not combined with combustion exhaust. The Combustion Resources process uses a hydrogen–fired rotating kiln, where hot gas is circulated through an outer annulus. The Oil-Tech staged electrically heated retort consists of individual inter-connected heating chambers, stacked atop each other. Its principal advantage lies in its modular design, which enhances its portability and adaptability. The Red Leaf Resources EcoShale In-Capsule Process combines surface mining with a lower-temperature heating method similar to in situ processes by operating within the confines of an earthen structure. A hot gas circulated through parallel pipes heats the oil shale rubble. An installation within the empty space created by mining would permit rapid reclamation of the topography. A general drawback of conduction-through-a-wall technologies is that the retorts are more costly when scaled up, due to the resulting large amount of heat-conducting wall made of high-temperature alloys. ### Externally generated hot gas In general, externally generated hot gas technologies are similar to internal combustion technologies in that they also process oil shale lumps in vertical shaft kilns. Significantly, though, the heat in these technologies is delivered by gases heated outside the retort vessel, and therefore the retort vapors are not diluted with combustion exhaust. The Petrosix and Paraho Indirect processes employ this technology. In addition to not accepting fine particles as feed, these technologies do not utilize the potential heat of combusting the char on the spent shale and thus must burn more valuable fuels. However, due to the lack of combustion of the spent shale, the oil shale does not exceed 500 °C (932 °F) and significant carbonate mineral decomposition and subsequent CO<sub>2</sub> generation can be avoided for some oil shales. Also, these technologies tend to be more stable and easier to control than internal combustion or hot solid recycle technologies. ### Reactive fluids Kerogen is tightly bound to the shale and resists dissolution by most solvents. Despite this constraint, extraction using especially reactive fluids has been tested, including those in a supercritical state. Reactive fluid technologies are suitable for processing oil shales with a low hydrogen content. In these technologies, hydrogen gas (H<sub>2</sub>) or hydrogen donors (chemicals that donate hydrogen during chemical reactions) react with coke precursors (chemical structures in the oil shale that are prone to form char during retorting but have not yet done so). Reactive fluid technologies include the IGT Hytort (high-pressure H<sub>2</sub>) process, donor solvent processes, and the Chattanooga fluidized bed reactor. In the IGT Hytort process, oil shale is processed in a high-pressure hydrogen environment. The Chattanooga process uses a fluidized bed reactor and an associated hydrogen-fired heater for oil shale thermal cracking and hydrogenation. Laboratory results indicate that these technologies can often obtain significantly higher oil yields than pyrolysis processes.
Drawbacks are the additional cost and complexity of hydrogen production and high-pressure retort vessels. ### Plasma gasification Several experimental tests have been conducted on oil-shale gasification using plasma technologies. In these technologies, oil shale is bombarded by radicals (ions). The radicals crack kerogen molecules forming synthetic gas and oil. Air, hydrogen, or nitrogen is used as the plasma gas, and processes may operate in an arc, plasma arc, or plasma electrolysis mode. The main benefit of these technologies is processing without using water. ## In situ technologies In situ technologies heat oil shale underground by injecting hot fluids into the rock formation, or by using linear or planar heating sources followed by thermal conduction and convection to distribute heat through the target area. Shale oil is then recovered through vertical wells drilled into the formation. These technologies are potentially able to extract more shale oil from a given area of land than conventional ex situ processing technologies, as the wells can reach greater depths than surface mines. They present an opportunity to recover shale oil from low-grade deposits that traditional mining techniques could not extract. John Fell experimented with in situ extraction at Newnes, in Australia, during 1921, with some success, but his ambitions were well ahead of the technologies available at the time. During World War II, a modified in situ extraction process was implemented without significant success in Germany. One of the earliest successful in situ processes was underground gasification by electrical energy (Ljungström method)—a process exploited between 1940 and 1966 for shale oil extraction at Kvarntorp in Sweden. Prior to the 1980s, many variations of the in situ process were explored in the United States. The first modified in situ oil shale experiment in the United States was conducted by Occidental Petroleum in 1972 at Logan Wash, Colorado. Newer technologies are being explored that use a variety of heat sources and heat delivery systems. ### Wall conduction Wall conduction in situ technologies use heating elements or heating pipes placed within the oil shale formation. The Shell in situ conversion process (Shell ICP) uses electrical heating elements for heating the oil shale layer to between 340 and 370 °C (650 and 700 °F) over a period of approximately four years. The processing area is isolated from surrounding groundwater by a freeze wall consisting of wells filled with a circulating super-chilled fluid. Disadvantages of this process are large electrical power consumption, extensive water use, and the risk of groundwater pollution. The process was tested from the early 1980s at the Mahogany test site in the Piceance Basin. In 2004, 270 cubic meters (1,700 bbl) of oil were extracted at a 9-by-12-meter (30 by 40 ft) testing area. In the CCR Process proposed by American Shale Oil, superheated steam or another heat transfer medium is circulated through a series of pipes placed below the oil shale layer to be extracted. The system combines horizontal wells, through which steam is passed, and vertical wells, which provide both vertical heat transfer through refluxing of converted shale oil and a means to collect the produced hydrocarbons. Heat is supplied by combustion of natural gas or propane in the initial phase and by oil shale gas at a later stage.
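The scale of the electrical demand mentioned for wall-conduction in situ processes can be illustrated with a rough sensible-heat estimate. This is a back-of-the-envelope sketch: the shale density, heat capacity, and initial formation temperature used below are assumed typical values, not data from the processes described above, and heat losses and reaction enthalpy are ignored.

```python
# Rough illustration of why electrically heated in situ processes consume so
# much power: the sensible heat needed just to raise one cubic metre of oil
# shale to conversion temperature. Density, specific heat, and ambient
# temperature are assumptions for illustration only.

ASSUMED_DENSITY_KG_M3 = 2200          # assumption: typical oil shale density
ASSUMED_SPECIFIC_HEAT_KJ_KG_K = 1.0   # assumption: approximate heat capacity
TARGET_TEMP_C = 350                   # mid-range of the 340-370 °C quoted above
AMBIENT_TEMP_C = 15                   # assumed initial formation temperature

def heating_energy_kwh_per_m3() -> float:
    """Sensible heat per cubic metre of shale, expressed in kWh."""
    delta_t = TARGET_TEMP_C - AMBIENT_TEMP_C
    energy_kj = ASSUMED_DENSITY_KG_M3 * ASSUMED_SPECIFIC_HEAT_KJ_KG_K * delta_t
    return energy_kj / 3600.0          # 1 kWh = 3600 kJ

print(round(heating_energy_kwh_per_m3()))   # ≈ 205 kWh per cubic metre of shale
```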
The Geothermic Fuels Cells Process (IEP GFC) proposed by Independent Energy Partners extracts shale oil by exploiting a high-temperature stack of fuel cells. The cells, placed in the oil shale formation, are fueled by natural gas during a warm-up period and afterward by oil shale gas generated by its own waste heat. ### Externally generated hot gas Externally generated hot gas in situ technologies use hot gases heated above-ground and then injected into the oil shale formation. The Chevron CRUSH process, which was researched by Chevron Corporation in partnership with Los Alamos National Laboratory, injects heated carbon dioxide into the formation via drilled wells to heat the formation through a series of horizontal fractures through which the gas is circulated. General Synfuels International has proposed the Omnishale process involving injection of super-heated air into the oil shale formation. Mountain West Energy's In Situ Vapor Extraction process uses similar principles of injection of high-temperature gas. ### ExxonMobil Electrofrac ExxonMobil's in situ technology (ExxonMobil Electrofrac) uses electrical heating with elements of both wall conduction and volumetric heating methods. It injects an electrically conductive material such as calcined petroleum coke into the hydraulic fractures created in the oil shale formation, which then forms a heating element. Heating wells are placed in a parallel row with a second horizontal well intersecting them at their toe. This allows opposing electrical charges to be applied at either end. ### Volumetric heating The Illinois Institute of Technology developed the concept of oil shale volumetric heating using radio waves (radio frequency processing) during the late 1970s. This technology was further developed by Lawrence Livermore National Laboratory. Oil shale is heated by vertical electrode arrays. Deeper volumes could be processed at slower heating rates by installations spaced at tens of meters. The concept presumes a radio frequency at which the skin depth is many tens of meters, thereby overcoming the thermal diffusion times needed for conductive heating. Its drawbacks include intensive electrical demand and the possibility that groundwater or char would absorb undue amounts of the energy. Radio frequency processing in conjunction with critical fluids is being developed by Raytheon together with CF Technologies and tested by Schlumberger. Microwave heating technologies are based on the same principles as radio wave heating, although it is believed that radio wave heating is an improvement over microwave heating because its energy can penetrate farther into the oil shale formation. The microwave heating process was tested by Global Resource Corporation. Electro-Petroleum proposes electrically enhanced oil recovery by the passage of direct current between cathodes in producing wells and anodes located either at the surface or at depth in other wells. The passage of the current through the oil shale formation results in resistive Joule heating. ## Shale oil The properties of raw shale oil vary depending on the composition of the parent oil shale and the extraction technology used. Like conventional oil, shale oil is a complex mixture of hydrocarbons, and it is characterized using bulk properties of the oil. Shale oil usually contains large quantities of olefinic and aromatic hydrocarbons. Shale oil can also contain significant quantities of heteroatoms.
A typical shale oil composition includes 0.5–1% of oxygen, 1.5–2% of nitrogen and 0.15–1% of sulfur, and some deposits contain more heteroatoms. Mineral particles and metals are often present as well. Generally, the oil is less fluid than crude oil, becoming pourable at temperatures between 24 and 27 °C (75 and 81 °F), while conventional crude oil is pourable at temperatures between −60 and 30 °C (−76 and 86 °F); this property affects shale oil's ability to be transported in existing pipelines. Shale oil contains polycyclic aromatic hydrocarbons, which are carcinogenic. Raw shale oil has been described as having a mild carcinogenic potential, comparable to that of some intermediate refinery products, while upgraded shale oil has a lower carcinogenic potential, as most of the polycyclic aromatics are believed to be broken down by hydrogenation. Although raw shale oil can be immediately burnt as a fuel oil, many of its applications require that it be upgraded. The differing properties of the raw oils call for correspondingly various pre-treatments before they can be sent to a conventional oil refinery. Particulates in the raw oil clog downstream processes; sulfur and nitrogen create air pollution. Sulfur and nitrogen, along with the arsenic and iron that may be present, also destroy the catalysts used in refining. Olefins form insoluble sediments and cause instability. The oxygen within the oil, present at higher levels than in crude oil, lends itself to the formation of destructive free radicals. Hydrodesulfurization and hydrodenitrogenation can address these problems and result in a product comparable to benchmark crude oil. Phenols can first be removed by water extraction. Upgrading shale oil into transport fuels requires adjusting hydrogen–carbon ratios by adding hydrogen (hydrocracking) or removing carbon (coking). Before World War II, most shale oil was upgraded for use as transport fuels. Afterwards, it was used as a raw material for chemical intermediates, pure chemicals and industrial resins, and as a railroad wood preservative. As of 2008, it is primarily used as a heating oil and marine fuel, and to a lesser extent in the production of various chemicals. Shale oil's concentration of high-boiling point compounds is suited for the production of middle distillates such as kerosene, jet fuel and diesel fuel. Additional cracking can create the lighter hydrocarbons used in gasoline. ## Economics The dominant question for shale oil production is under what conditions shale oil is economically viable. According to the United States Department of Energy, the capital costs of a 100,000 barrels per day (16,000 m<sup>3</sup>/d) ex-situ processing complex are \$3–10 billion. The various attempts to develop oil shale deposits have succeeded only when the shale-oil production cost in a given region is lower than the price of petroleum or its other substitutes. According to a survey conducted by the RAND Corporation, the cost of producing shale oil at a hypothetical surface retorting complex in the United States (comprising a mine, retorting plant, upgrading plant, supporting utilities, and spent oil shale reclamation) would be in a range of \$70–95 per barrel (\$440–600/m<sup>3</sup>), adjusted to 2005 values. Assuming a gradual increase in output after the start of commercial production, the analysis projects a gradual reduction in processing costs to \$30–40 per barrel (\$190–250/m<sup>3</sup>) after achieving the milestone of 1 billion barrels (160×10<sup>6</sup> m<sup>3</sup>).
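The break-even logic described in this section can be sketched in a few lines. This is a simplified illustration that ignores financing, taxes, and operating detail; the example inputs are the capital-cost and per-barrel cost ranges quoted above, and the market price used is purely hypothetical.

```python
# Illustrative sketch of the viability test described above: a shale oil
# project is economic only while the per-barrel production cost stays below
# the prevailing oil price. The example values are the ranges quoted in the
# text; the $60/bbl market price is a hypothetical input.

def capital_cost_per_daily_barrel(capital_cost_usd: float, capacity_bbl_per_day: float) -> float:
    """Capital intensity of a processing complex, in dollars per daily barrel of capacity."""
    return capital_cost_usd / capacity_bbl_per_day

def is_viable(production_cost_per_bbl: float, oil_price_per_bbl: float) -> bool:
    """True if producing a barrel costs less than selling it."""
    return production_cost_per_bbl < oil_price_per_bbl

# A 100,000 bbl/d ex situ complex at the low end of the quoted $3-10 billion range:
print(capital_cost_per_daily_barrel(3e9, 100_000))   # 30000.0 dollars per daily barrel

# Early production at RAND's ~$70-95/bbl cost versus a hypothetical $60/bbl market:
print(is_viable(70, 60))    # False
# Mature production at ~$30-40/bbl versus the same market:
print(is_viable(35, 60))    # True
```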
The United States Department of Energy estimates that ex-situ processing would be economic at sustained average world oil prices above \$54 per barrel and in-situ processing would be economic at prices above \$35 per barrel. These estimates assume a return rate of 15%. Royal Dutch Shell announced in 2006 that its Shell ICP technology would realize a profit when crude oil prices are higher than \$30 per barrel (\$190/m<sup>3</sup>), while some technologies at full-scale production assert profitability at oil prices even lower than \$20 per barrel (\$130/m<sup>3</sup>). To increase the efficiency of oil shale retorting, and thereby the viability of shale oil production, researchers have proposed and tested several co-pyrolysis processes, in which other materials such as biomass, peat, waste bitumen, or rubber and plastic wastes are retorted along with the oil shale. Some modified technologies propose combining a fluidized bed retort with a circulated fluidized bed furnace for burning the by-products of pyrolysis (char and oil shale gas) and thereby improving oil yield, increasing throughput, and decreasing retorting time. Other ways of improving the economics of shale oil extraction could be to increase the size of the operation to achieve economies of scale; use oil shale that is a by-product of coal mining, such as at Fushun, China; produce specialty chemicals, as done by Viru Keemia Grupp in Estonia; co-generate electricity from the waste heat; and process high-grade oil shale that yields more oil per unit of shale processed. A possible measure of the viability of oil shale as an energy source lies in the ratio of the energy in the extracted oil to the energy used in its mining and processing (Energy Returned on Energy Invested, or EROEI). A 1984 study estimated the EROEI of the various known oil shale deposits as varying between 0.7 and 13.3; some companies and newer technologies assert an EROEI between 3 and 10. According to the World Energy Outlook 2010, the EROEI of ex-situ processing is typically 4 to 5, while that of in-situ processing may be even as low as 2. To increase the EROEI, several combined technologies have been proposed. These include the use of process waste heat, e.g. gasification or combustion of the residual carbon (char), and the use of waste heat from other industrial processes, such as coal gasification and nuclear power generation. The water requirements of extraction processes are an additional economic consideration in regions where water is a scarce resource. ## Environmental considerations Mining oil shale involves a number of environmental impacts, more pronounced in surface mining than in underground mining. These include acid drainage induced by the sudden rapid exposure and subsequent oxidation of formerly buried materials, the introduction of metals including mercury into surface-water and groundwater, increased erosion, sulfur-gas emissions, and air pollution caused by the production of particulates during processing, transport, and support activities. In 2002, about 97% of air pollution, 86% of total waste and 23% of water pollution in Estonia came from the power industry, which uses oil shale as the main resource for its power production. Oil-shale extraction can damage the biological and recreational value of land and the ecosystem in the mining area. Combustion and thermal processing generate waste material. In addition, the atmospheric emissions from oil shale processing and combustion include carbon dioxide, a greenhouse gas.
Environmentalists oppose production and usage of oil shale, as it creates even more greenhouse gases than conventional fossil fuels. Experimental in situ conversion processes and carbon capture and storage technologies may reduce some of these concerns in the future, but at the same time they may cause other problems, including groundwater pollution. Among the water contaminants commonly associated with oil shale processing are oxygen and nitrogen heterocyclic hydrocarbons. Commonly detected examples include quinoline derivatives, pyridine, and various alkyl homologues of pyridine (picoline, lutidine). Water concerns are sensitive issues in arid regions, such as the western US and Israel's Negev Desert, where plans exist to expand oil-shale extraction despite a water shortage. Depending on technology, above-ground retorting uses between one and five barrels of water per barrel of produced shale-oil. A 2008 programmatic environmental impact statement issued by the US Bureau of Land Management stated that surface mining and retort operations produce 2 to 10 U.S. gallons (7.6 to 37.9 L; 1.7 to 8.3 imp gal) of waste water per 1 short ton (0.91 t) of processed oil shale. In situ processing, according to one estimate, uses about one-tenth as much water. Environmental activists, including members of Greenpeace, have organized strong protests against the oil shale industry. In one result, Queensland Energy Resources put the proposed Stuart Oil Shale Project in Australia on hold in 2004. ## See also - Oil shale in China - Oil shale in Estonia - Oil shale in Jordan - Oil shale geology - Oil shale reserves
17,354,002
2006 Football League Championship play-off final
1,170,271,720
2006 UK football match
[ "2005–06 Football League Championship", "2006 Football League play-offs", "EFL Championship play-off finals", "Leeds United F.C. matches", "May 2006 sports events in the United Kingdom", "Watford F.C. matches" ]
The 2006 Football League Championship play-off Final was an association football match which was played on 21 May 2006 at the Millennium Stadium, Cardiff, between Leeds United and Watford. The match was to determine the third and final team to gain promotion from the Football League Championship, the second tier of English football, to the FA Premiership. Reading and Sheffield United, the top two teams of the 2005–06 Football League Championship season, gained automatic promotion to the Premiership, while the clubs placed from third to sixth place in the table took part in play-off semi-finals. Third-placed Watford defeated sixth-placed Crystal Palace in the first semi-final, while fifth-placed Leeds United beat fourth-placed Preston North End. The winners of these semi-finals competed for the final place for the 2006–07 season in the Premiership. Winning the final was estimated to be worth up to £40 million to the successful team. The final was refereed by Mike Dean and was watched by a crowd of 64,736. It was the last play-off final to be held at the Millennium Stadium, as the new Wembley Stadium was completed in time for the 2007 final. Watford won the match 3–0, with opening goalscorer Jay DeMerit named man of the match. Leeds goalkeeper Neil Sullivan scored an own goal to make the score 2–0 to Watford after 60 minutes, and the final goal was a penalty kick scored by Darius Henderson. The following season, Leeds's manager Kevin Blackwell was sacked in September, with the club second from bottom, and was replaced by Dennis Wise. The club went into administration the following May and were deducted ten points; they finished the season bottom of the league and they were relegated to the third tier of English football for the first time in the club's history. Watford struggled in the Premiership and they were relegated back to the Championship after ending the season bottom of the league, ten points below safety. ## Route to the final Watford finished the regular 2005–06 season in third place in the Football League Championship, the second tier of the English football league system, two places and three points ahead of Leeds United. Both therefore missed out on the two automatic places for promotion to the Premiership and instead took part in the play-offs, along with Preston North End and Crystal Palace, to determine the third promoted team. Watford finished nine points behind Sheffield United (who were promoted in second place) and twenty-five behind league winners Reading. Leeds had won just one of their final ten league games. Their play-off semi-final opponents were Preston North End with the first leg taking place at Elland Road, Leeds, on 5 May 2006. The match ended 1–1: Preston took the lead with a goal from David Nugent in the 48th minute before Eddie Lewis equalised for Leeds with a free kick in the 74th minute. Billy Davies, the Preston manager, commented after the match: "it is tremendous to come here in front of their biggest crowd of the season and get what is a fantastic result. It is a case of job done." The second leg took place three days later at Deepdale, Preston's home ground. After a goalless first half, a header from Rob Hulse and a low strike from Frazer Richardson saw Leeds secure a 2–0 win on the day and a 3–1 aggregate victory. Leeds were reduced to nine men when both Stephen Crainey and Richard Cresswell were sent off in the second half, with six of their team-mates also shown a yellow card. 
Watford's opposition for their play-off semi-final was Crystal Palace, with the first leg held at Selhurst Park on 6 May 2006. After a goalless first half, a strike from Marlon King one minute into the second half opened the scoring for Watford. Ben Foster, on loan from Manchester United, made a fingertip save to deny a header from Crystal Palace's Tony Popovic, before a curling free kick from Ashley Young made it 2–0. Five minutes from full time, Matthew Spring scored to give Watford a 3–0 win. The second leg at Watford's Vicarage Road took place three days later. The home side's manager Aidy Boothroyd was sent to the stand after an altercation with Fitz Hall which resulted in a mass brawl on the pitch. The match finished 0–0, giving Watford a 3–0 aggregate victory and a place in the final. ## Match ### Background Leeds United were making their second play-off final appearance: they had lost 2–1 in a replay of the 1987 final to Charlton Athletic after the two legs ended in an aggregate draw. Watford had also previously participated in one play-off final, having beaten Bolton Wanderers 2–0 at the old Wembley Stadium in the 1999 final. During the regular season, the match between the two teams at Vicarage Road in October 2005 was a goalless draw, while Leeds won the return fixture at Elland Road 2–1 the following February. Watford's King was the Championship's leading scorer with 21 goals; his team-mate Darius Henderson was the second-highest scorer with 15. David Healy and Hulse were the top marksmen for Leeds, with 12 each, followed by Robbie Blake on 11. Leeds United had last played in the top tier of English football in the 2003–04 season, when they were relegated after finishing nineteenth in the league. Watford had played in the Championship since being relegated from the Premiership in the 1999–2000 season. Boothroyd was the first-team coach at Leeds United until he left in March 2005 to take the Watford manager's role. In doing so, at the age of 34, he became the youngest manager in the Football League. His playing career had been ended while he was at Peterborough United when he suffered a broken leg in a tackle by Shaun Derry, then playing for Notts County; Derry was in the starting line-up for Leeds for the play-off final. Boothroyd had been promoted from academy football by the Leeds manager Kevin Blackwell, who himself had experienced failure in the play-off final three years earlier. Then, he was assistant to Neil Warnock whose Sheffield United team lost 3–0 to Wolverhampton Wanderers. The referee for the match was Mike Dean who was representing the Cheshire Football Association. He had been selected to referee the 2006 FA Cup Final between Liverpool and West Ham United, but was later replaced as he lived in the Wirral. While the Football Association were adamant that they had "complete faith in Dean's refereeing ability, integrity and impartiality", they felt his connection to the Wirral "might lead to comment and debate which could place him under undue additional pressure". It was the last play-off final to be held at the Millennium Stadium, as the new Wembley Stadium was completed in time for the 2007 final. The pitch was in a poor condition following rugby union's Heineken Cup Final which had been hosted at the stadium the previous day. Winning the play-off final was estimated to be worth up to £40 million to the successful team. 
The chief executive of Watford, Mark Ashton, did not underestimate the impact of promotion: "In my opinion it surpasses the riches of the Champions League – it is the richest football game on the planet". Both Crainey and Cresswell were unavailable for Leeds as they were suspended following their dismissals in the second leg of the semi-final. Paul Butler returned from injury and was included in Leeds's starting eleven, along with Hulse and Healy; Blake was named amongst the substitutes. Watford's Clarke Carlisle was out with a hip injury, and Henderson was selected in place of Al Bangura. The match was broadcast in the UK on Sky Sports 1. Watford adopted a 4–4–2 formation, while Leeds played 4–5–1 with Hulse playing as the lone striker. Watford were considered narrow favourites to win the match by bookmakers. Both teams wore black armbands in memory of Queens Park Rangers youth player Kiyan Prince, who was stabbed to death outside his school four days before the match. ### Summary The match kicked off around 3 p.m. on 21 May 2006 in front of a Millennium Stadium crowd of 64,736 under a closed roof because of rainy conditions. Two minutes in, Henderson's header from a Watford corner was weak, and was deflected off Butler. On eight minutes, the Watford goalkeeper Foster failed to catch a long Leeds throw-in allowing a Derry shot, but Lloyd Doyley diverted the strike wide. In the 14th minute, Young's shot from 20 yards (18 m) from a King pass went wide of the Leeds goalpost. Eleven minutes later, Watford took the lead through Jay DeMerit. Losing his marker Hulse, DeMerit scored with a 5-yard (4.6 m) header from Young's corner. It was the American defender's third goal of his first season in English football after failing to be drafted into Major League Soccer. Leeds had a penalty claim just before half-time, when Foster appeared to foul Hulse, but it was rejected by Dean. Soon after, from a diagonal free kick, Sean Gregan's header at the far post went outside the Watford post. In stoppage time, a Leeds free kick from a central position 35 yards (32 m) out was struck high by Eddie Lewis, and the first half ended 1–0 to Watford. At half-time, Leeds made their first substitution of the match with Blake coming on for Richardson; Watford made no changes. Henderson's half-volley was saved by Neil Sullivan before an own goal from the Leeds goalkeeper made it 2–0 to Watford in the 57th minute: James Chambers, who received a long throw-in, turned and shot; the ball was deflected off Lewis, hit the Leeds post and went in off Sullivan. Eight minutes later, Healy's strike was kept out by Foster and in the 70th minute, Derry's header from a corner was cleared off the goal line by Watford's Chambers. From the resulting corner, a 20-yard (18 m) shot from Lewis was saved by Foster. Five minutes later, King's free kick went over the crossbar before an injury to one of the assistant referees meant the fourth official Chris Foy was required to replace him. Lewis then cleared a Malky Mackay header off the line before Watford made it 3–0 in the 84th minute. Spring made a run forward and passed to King who was fouled by Derry; the resulting penalty was converted by Henderson. No further goals were scored, and the match ended 3–0 with Watford returning to the top tier of English football for the first time since 2000. ### Details ## Post-match Boothroyd, who was to become the youngest manager in the Premiership, was circumspect: "This is just the end of the beginning ... 
We will start as relegation favourites next season, like this season." He was confident that his club could maintain their top-flight status the following season however, saying: "We won't go down ... I think the best way to sum this up is that I think we are now a model for other clubs that don't have a great deal of money. But with good organisation, preparation and a fantastic work ethic ... We will take that ethic with us into the Premiership and we won't go down." Boothroyd also paid his respects to his former club: "I have a great deal of sympathy for Leeds and Kevin Blackwell ... They're a massive club and I'm sure they will bounce back." The Watford chairman Graham Simpson opined: "That was the most tortuous 90 minutes I've ever endured. You cannot enjoy it, until afterwards anyway." Blackwell was downcast but equitable in defeat: "It's a terrible place to come and lose and feel as though you've achieved nothing ... We lacked a spark and were second to the ball all over the park. We deserved to lose." He added: "We're very disappointed – although not as disappointed as when we dropped out of the Premiership and lost all our players." Hulse was defiant: "Right now I am gutted ... We will take a break, refocus for next season and come back stronger." Watford's former manager Graham Taylor suggested the match was "by no means a classic" and urged the club to maintain "the spirit that has been fostered throughout this season". Rick Broadbent, writing for the Irish Independent proposed that Leeds "never gave it a go at Cardiff, fading with a whimper". DeMerit was named man of the match. The BBC described the match as "a frantic play-off final". Stuart James, writing in The Guardian, suggested that "Leeds were crushed" and that they had failed to deal with Watford's "high-tempo approach" nor with their threat from set pieces. Eurosport observed that Watford had switched from their "normally attractive footballing principles to use the long ball into the channels" as a direct result of the condition of the playing surface at the Millennium Stadium. Louise Taylor, writing in The Guardian, concurred: "the ball repeatedly flew high through the air, conveniently bypassing midfield before crashing towards the corners, as long throws were launched into the 'mixer' and three goals were scored from set pieces." Derry had not seen the footage of the game for a decade when, in 2016, he commented on his foul to concede the late penalty: "That was a desperate footballer making a desperate lunge on a desperate day ... There’s no other way to describe it" and described the loss as the "lowest point of my career". The following season, Blackwell was sacked by Leeds on 20 September 2006, with the club second from bottom. He was eventually replaced more than a month later by Dennis Wise and assistant manager Gus Poyet, who were incumbent at Swindon Town. The following May, Leeds went into administration via a company voluntary arrangement and were deducted ten points. This ensured the club finished the season bottom of the 2006–07 Football League Championship and they were relegated to the third tier of English football for the first time in the club's history. Watford's next season saw them struggle in the Premiership, and they were relegated back to the Championship on 21 April 2007. They ended the season bottom of the league, ten points below safety.
17,781,033
Forksville Covered Bridge
1,112,105,640
Bridge over Loyalsock Creek, Pennsylvania
[ "Bridges completed in 1850", "Bridges in Sullivan County, Pennsylvania", "Burr Truss bridges in the United States", "Covered bridges in Sullivan County, Pennsylvania", "Covered bridges on the National Register of Historic Places in Pennsylvania", "National Register of Historic Places in Sullivan County, Pennsylvania", "Road bridges on the National Register of Historic Places in Pennsylvania", "Tourist attractions in Sullivan County, Pennsylvania", "Wooden bridges in Pennsylvania" ]
The Forksville Covered Bridge is a Burr arch truss covered bridge over Loyalsock Creek in the borough of Forksville, Sullivan County, in the U.S. state of Pennsylvania. It was built in 1850 and is 152 feet 11 inches (46.61 m) in length. The bridge was placed on the National Register of Historic Places in 1980. The Forksville bridge is named for the borough it is in, which in turn is named for its location at the confluence or "forks" of the Little Loyalsock and Loyalsock Creeks. Pennsylvania had the first covered bridge in the United States and the most such bridges in both the 19th and 21st centuries. They were a transition between stone and metal bridges, with the roof and sides protecting the wooden structure from weather. The Forksville bridge is a Burr arch truss type, with a load-bearing arch sandwiching multiple vertical king posts, for strength and rigidity. The building of the Forksville bridge was supervised by the 18-year-old Sadler Rogers, who used his hand-carved model of the structure. It served as the site of a stream gauge from 1908 to 1913 and is still an official Pennsylvania state highway bridge. The United States Department of Transportation Federal Highway Administration uses it as the model of a covered bridge "classic gable roof", and it serves as the logo of a Pennsylvania insurance company. The bridge was restored in 1970 and 2004 and is still in use, with average daily traffic of 240 vehicles in 2014. Despite the restorations, as of 2009 the bridge structure's sufficiency rating on the National Bridge Inventory was only 17.7 percent and its condition was deemed "basically intolerable requiring high priority of corrective action". It is one of three remaining covered bridges in Sullivan County, and according to Susan M. Zacher's The Covered Bridges of Pennsylvania: A Guide, its location "over the rocky Loyalsock Creek" is "one of the most attractive settings in the state." ## Overview The covered bridge is in the borough of Forksville on Bridge Street, a spur of State Route 4012, just west of Pennsylvania Route 154. It is about 0.2 miles (300 m) south of Pennsylvania Route 87 and 2.0 miles (3 km) north of Worlds End State Park on PA 154. Forksville Covered Bridge is its official name on the National Register of Historic Places (NRHP). Sullivan County is located in north central Pennsylvania, about 123 miles (198 km) northwest of Philadelphia and 195 miles (314 km) east-northeast of Pittsburgh. The bridge is just upstream of the confluence of the Little Loyalsock and Loyalsock Creeks. This was known as the "forks of the Loyalsock" and gave Forks Township its name when the township was incorporated in 1833, while still part of Lycoming County. Sullivan County was formed from part of Lycoming County on March 14, 1847, and the bridge was built in 1850. The name of the bridge comes from the community of Forksville, which is on land first settled in 1794, was laid out as a village in 1854, and was incorporated as a borough from part of Forks Township on December 22, 1880. ## History ### Background The first covered bridge in the United States was built in 1800 over the Schuylkill River in Philadelphia, Pennsylvania. According to Zacher, the first covered bridges of the Burr arch truss design were also built in the state. Pennsylvania is estimated to have once had at least 1,500 covered bridges and is believed to have had the most in the country between 1830 and 1875. 
In 2001, Pennsylvania had more surviving historic covered bridges than any other state, with 221 remaining in 40 of its 67 counties. Covered bridges were a transition between stone and metal bridges, the latter made of cast-iron or steel. In 19th-century Pennsylvania, lumber was an abundant resource for bridge construction, but did not last long when exposed to the elements. The roof and enclosed sides of covered bridges protected the structural elements, allowing some of these bridges to survive for well over a century. A Burr arch truss consists of a load-bearing arch sandwiching multiple king posts, resulting in stronger and more rigid structure than one made of either element alone. ### Construction and description Although there were 30 covered bridges in Sullivan County in 1890, only five were left by 1954, and as of 2011 only three remain: Forksville, Hillsgrove, and Sonestown. All three are Burr arch truss covered bridges and were built in 1850. The Forksville Covered Bridge was built for Sullivan County by Sadler Rogers (or Rodgers), a native of Forksville who was only 18 at the time. He hand-carved a model of the bridge before work began and used it to supervise construction. Rogers built the Forksville and Hillsgrove bridges across Loyalsock Creek, with the latter about 5 miles (8.0 km) downstream of the former. Although most sources do not list the builder of the Sonestown bridge, a 1997 newspaper article on the remaining Sullivan County covered bridges reported that Rodgers had designed it too. The Forksville Covered Bridge was added to the NRHP on July 24, 1980, in a Multiple Property Submission of seven Covered Bridges of Bradford, Sullivan and Lycoming Counties. The 2009 National Bridge Inventory (NBI) lists the covered bridge as 152 feet 11 inches (46.6 m) long, with a roadway 12 feet 2 inches (3.7 m) wide, and a maximum load of 3.0 short tons (2.7 metric tons). According to the NRHP, the bridge's "road surface width" is 15 feet (4.6 m), which is only sufficient for a single lane of traffic. As of 2011, each portal has a small sign reading "1850 Sadler Rogers" at the top, above a sign with the posted clearance height of 8.0 feet (2.4 m), and a "No Trucks Allowed" sign hanging below these. The covered bridge rests on the original stone abutments, which have since been reinforced with concrete. The bridge deck, which is now supported by steel beams, is made of "very narrow crosswise planks". Wheel guards on the deck separate the roadway from the pedestrian walkways on either side and protect the sides, which are covered with vertical planks almost to the eaves. The bridge has long, narrow windows with wooden shutters: the south side has four windows, and the north side has three. An opening between the eaves and the siding runs the length of the bridge on both sides. The bridge is supported by a Burr arch truss of 16 panels, with wooden beams. The gable roof is sheet metal and is used as the model illustration of a "classic gable roof" for a covered bridge by the U.S. Department of Transportation Federal Highway Administration's Turner-Fairbank Highway Research Center. ### Restoration and use In the 19th century the Forksville Covered Bridge survived major floods on March 1, 1865, and June 1, 1889, that destroyed other bridges in the West Branch Susquehanna River valley. Between about 1870 and 1890, logging in the Loyalsock Creek watershed produced lumber rafts that floated beneath the bridge. 
These rafts, each containing 5,000–30,000 board feet (12–70 m<sup>3</sup>) of lumber, were carried down the Loyalsock to its mouth at Montoursville, and some continued on the West Branch Susquehanna River beyond. The rafts ended when the eastern hemlock were all clearcut. From 1908 to 1913, there was a stream gauge on the bridge. Twice a day, the creek height was read on a chain 21.88 feet (6.67 m) long on the bridge's upstream side, and discharge measurements were taken on the downstream side. At the time it served as a "single span, wooden, covered highway bridge". The bridge survived another major flood on November 16, 1926, when a dam broke upstream but was "badly damaged" by an ice jam on January 23, 1959, in a flood that left blocks of ice weighing up to 500 pounds (230 kg) in the streets of Forksville. The Forksville Covered Bridge was restored in 1970 with what the NRHP nomination form describes as "all kinds of odd repairs". The restoration work was completed by T. Corbin Lewis of Hillsgrove Township, a retired electrical contractor, whose low bid of \$48,000 was accepted over a Baltimore, Maryland, firm's \$185,000 bid. The restoration was supervised by the Pennsylvania Department of Transportation (PennDOT), which owns and maintains the bridge. The repair involved minor work on the "steel floor beams and stringers", which had been added years before. An entirely new wooden deck was installed, with wheel guards (wooden curbs) to channel vehicle traffic to the center and to protect the pedestrian walkways on the sides. Windows were cut in the bridge's sides for the first time, and steel girders were "added to support the bridge's understructure." Attitudes towards covered bridges in Sullivan County changed considerably in the last half of the 20th century. Two of the five bridges remaining in 1954 were razed by 1970, when PennDOT considered tearing down the Forksville bridge too. It was renovated rather than razed because of its historic nature and appeal to tourists. The Forksville Covered Bridge was added to the NRHP in 1980, and the Pennsylvania Historical and Museum Commission now forbids the destruction of any covered bridge on the NRHP in the state and has to approve any renovation work. The NBI says the bridge was "reconstructed" in 2004 but does not give further details. The entire bridge has been reinforced with steel girders, including vertical beams. In 2006 the red bridge was repainted, which took about three weeks. In 2015, the bridge was briefly closed for \$162,000 in waterproofing and concrete repairs to its abutments. The bridge's condition was described as "good" in the 1980 NRHP form, Zacher's 1994 book, and the Evans' 2001 book. However, the 2009 Federal Highway Administration National Bridge Inventory found the sufficiency rating of the bridge structure to be 17.7 percent. It found that the bridge's foundations were "determined to be stable for calculated scour conditions" but that the railing "does not meet currently acceptable standards". Its overall condition was deemed "basically intolerable requiring high priority of corrective action"; the 2006 NBI estimated the cost to improve the bridge at \$463,000. The bridge was decorated with lights for Christmas in 1992. In 2010, Forksville had 145 residents. The Forksville Covered Bridge is heavily used, as it is the most direct and shortest route from PA 154, at the eastern end, to Forksville and its general store, which are at the western end. 
The posted speed limit is 15 miles per hour (24 km/h), and its average daily traffic was 240 vehicles in 2014. The bridge is used as the logo of the Farmers & Mechanics Mutual Insurance Company, which was founded in Sullivan County in 1877. In addition to its utility, the bridge is appreciated for its history and beauty. In 1970 a long-time Forksville resident spoke of the bridge's connection to the past: "When you stand quiet on the bridge and the woods are still, you can almost hear the horses clomping over the wooden deck as they did in years gone by; you can almost see the youngsters who climbed the rafters of the bridge to 'skinny-dip' in the creek below". Zacher's 1994 The Covered Bridges of Pennsylvania: A Guide describes the bridge's location "over the rocky Loyalsock Creek" as "one of the most attractive settings in the state." ## Bridge data The following table is a comparison of published measurements of length, width and load recorded by four different sources using different methods, as well as the name cited for the bridge and its builder. The NBI measures bridge length between the "backwalls of abutments" or pavement grooves and the roadway width as "the most restrictive minimum distance between curbs or rails". The NRHP form was prepared by the Pennsylvania Historical and Museum Commission (PHMC), which surveyed county engineers, historical and covered bridge societies, and others for all the covered bridges in the commonwealth. The Evans visited every covered bridge in Pennsylvania in 2001 and measured each bridge's length (portal to portal) and width (at the portal) for their book. The data in Zacher's book was based on a 1991 survey of all covered bridges in Pennsylvania by the PHMC and PennDOT, aided by local government and private agencies. The article uses primarily the NBI and NRHP data, as they are national programs. ## See also - List of bridges on the National Register of Historic Places in Pennsylvania ## Note
39,725,920
The Chase (American game show)
1,172,600,485
American television quiz show
[ "2010s American game shows", "2013 American television series debuts", "2015 American television series endings", "2020s American game shows", "2021 American television series debuts", "American Broadcasting Company original programming", "American television series based on British television series", "American television series revived after cancellation", "English-language television shows", "Game Show Network original programming", "Quiz shows", "Television series by ITV Studios" ]
The Chase is an American television quiz show adapted from the British program of the same name. It premiered on August 6, 2013, on the Game Show Network (GSN). It was hosted by Brooke Burns and featured Mark Labbett as the "chaser" (referred to on air exclusively by his nickname "The Beast"). A revival of the show premiered on January 7, 2021, on ABC. It is hosted by Sara Haines and initially featured as the chasers Jeopardy! champions James Holzhauer (who was a contestant on the GSN version), Ken Jennings, and Brad Rutter. Labbett returned as a chaser in June 2021, before stepping down in 2022 along with Jennings. In their place are Buzzy Cohen, Brandon Blackwell, and Victoria Groce. The U.S. version of the show follows the same general format as the U.K. version, but with teams of three contestants instead of four. The game is a quiz competition in which contestants attempt to win money by challenging a trivia expert known as the chaser. Each contestant participates in a "Cash Builder" round, in which they attempt to answer as many questions as possible in 60 seconds to earn as much money as possible to contribute to a prize fund for the team. The contestant must then answer enough questions to stay ahead of the chaser in an individual head-to-head "chase" scored on a game board; otherwise, they lose their winnings and are out. The contestants who successfully complete their individual chases without being caught advance to the Final Chase, in which they answer questions as a team playing for an equal share of the prize fund accumulated throughout the episode. ## Gameplay ### Cash Builder and individual chases Three new contestants participate in each episode. Each contestant attempts to win money for their team by answering as many questions correctly as possible during a one-minute "Cash Builder" round, earning money per correct answer (\$5,000 on the GSN version, \$25,000 in the first season of the ABC version, and \$10,000 in the second and third seasons). (During GSN celebrity episodes, each contestant is credited with the value of one correct answer at the outset.) After the Cash Builder, the contestant participates in a head-to-head "chase" against the chaser. Both sides answer a series of questions, with the contestant attempting to move the money down a game board and into the team bank without being caught. The contestant and chaser stand at opposite ends of the board, which has seven spaces, and the contestant chooses a starting position. They may begin three steps ahead of the chaser, which requires five correct answers to reach the bank, and play for the money earned in the Cash Builder. Alternatively, they may accept one of two offers from the chaser: either start one step closer to the chaser and play for a higher amount or start one step farther away and play for a lower amount. The lower offer can be zero or even negative, depending on the result of the Cash Builder. On occasion, a contestant is presented with a "Super Offer" to play for even higher stakes with a head start of only one step. Once the contestant chooses a starting position, the host asks a series of questions with three answer options, and the contestant and chaser secretly lock in their answers on keypads. After either side locks in a choice, the other must do the same within five seconds or be locked out for that turn. A correct answer by either side moves them one space down the board and toward the bank, while a miss or lockout leaves them where they are.
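The board mechanics described here lend themselves to a quick illustration. The sketch below is a minimal Python simulation of a single head-to-head chase, using the seven-space board, the three starting positions and the banking and catching outcomes described in this section; the per-question accuracy rates `p_contestant` and `p_chaser` are hypothetical placeholders rather than figures from the show, so the printed percentages only indicate how the starting positions trade risk against reward.

```python
import random

BANK = 8          # one step past the seventh board space puts the money in the bank
CHASER_START = 0  # the chaser begins at the head of the board

def individual_chase(contestant_start=3, p_contestant=0.6, p_chaser=0.85):
    """Simulate one head-to-head chase.

    contestant_start: 3 by default, 2 for the higher offer, 4 for the lower offer.
    p_contestant and p_chaser are assumed per-question accuracy rates.
    """
    contestant, chaser = contestant_start, CHASER_START
    while True:
        # Both sides answer the same question; a correct answer moves that side
        # one space toward the bank, while a miss leaves it where it is.
        if random.random() < p_contestant:
            contestant += 1
        if random.random() < p_chaser:
            chaser += 1
        if contestant >= BANK:
            return "banked"   # money is added to the team bank
        if chaser >= contestant:
            return "caught"   # winnings forfeited, contestant eliminated

# Rough comparison of how often each starting position survives the chase.
for start, label in ((2, "higher offer"), (3, "standard"), (4, "lower offer")):
    banked = sum(individual_chase(start) == "banked" for _ in range(10_000))
    print(f"{label} (start {start}): banked {banked / 10_000:.1%} of the time")
```

The lockout rule and the limited question supply of a televised chase are ignored here; the point is only to show how a one-step change in starting position shifts the odds under fixed accuracy rates.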
If the contestant successfully moves the money into the bank, they advance to the Final Chase and their money is added to the team bank; if the chaser catches up, the contestant is eliminated and their money is forfeited. If all three contestants fail to win their individual chases, the team selects one contestant to play the Final Chase alone for a dollar amount divided evenly among the team (\$15,000 on the GSN version, an amount offered by the chaser in the first season of the ABC version, and \$60,000 in the third season). During GSN celebrity episodes, contestants who are caught leave with \$5,000 for their respective charities. ### The Final Chase The team randomly chooses one of two question sets for itself, with the other set put aside for the chaser, and has two minutes to give as many correct answers as possible. Contestants must ring in before they can either answer or pass a question, and must respond as soon as they are called on. Any answer given by a contestant who has not rung in is considered incorrect. Ringing in is not required if only one contestant is playing the Final Chase. Every correct answer moves the team one step ahead on the game board, and the team is given a head start of one step per member participating in the round. Team members may not discuss or confer on any questions during this portion. If all three team members lose their respective individual chases, they choose one member to play the Final Chase alone (on behalf of the whole team, with a one-step head start). On the first season of the ABC version, this player did not receive a head start. After the contestants have completed their Final Chase, the chaser then has two minutes to catch the team by answering questions from the unused set in the same manner. If the chaser passes or misses a question, the clock briefly stops and the team is given a chance to discuss it and offer an answer. A correct response pushes the chaser back one step, or moves the team ahead by one if the chaser is still at the starting line. If the chaser fails to catch the team before time runs out, the participating members receive equal shares of the bank; otherwise, they leave with nothing. During celebrity episodes, if the chaser catches the team before time runs out, all three members receive \$5,000 each. ## Chasers - Mark Labbett (2013–15; 2021–22). Labbett appeared on British quiz shows University Challenge, Fifteen to One, The Syndicate and Who Wants to Be a Millionaire? He was the runner-up on The People's Quiz, the runner-up on Brain of Britain, and part of a winning team on Only Connect. Labbett was one of the original chasers on the UK version of the show, appearing in every season since its inception, as well as being one of six on the Australian version of the show. He was the sole chaser on the show when it initially aired in 2013 on Game Show Network, and was joined by Holzhauer, Jennings and Rutter when it was rebooted on ABC. Labbett confirmed in February 2022 that his contract had not been renewed for the show's third season. He is nicknamed "The Beast" and "The Transatlantic Giant". - James Holzhauer (2021–). Holzhauer is a 32-episode champion on Jeopardy! and the winner of the 2019 Jeopardy! Tournament of Champions. He holds the record for highest winnings on a single Jeopardy! episode, and is also a former contestant on GSN's version of The Chase, winning \$58,333 as part of a three-person team. Holzhauer is the third highest-earning American game show contestant of all time. He is nicknamed "The High Roller".
- Ken Jennings (2021–22). Jennings is the record holder for the longest winning streak on Jeopardy!, with 74 wins. He is the winner of the Jeopardy! The Greatest of All Time tournament. He also appeared on Are You Smarter than a 5th Grader?, Grand Slam, Who Wants to Be a Millionaire, and 1 vs. 100. Many consider Jennings the greatest quiz show contestant of all time, and he is the highest-earning quiz show contestant in U.S. history. He is nicknamed "The Professor". - Brad Rutter (2021–). Rutter is the second-highest earning quiz show contestant in the U.S. and the highest earner in Jeopardy! history, winning over \$5 million. Rutter never lost an episode during regular Jeopardy! game play. His only loss was during the Greatest of All Time tournament, in which he finished third. Rutter also appeared on Million Dollar Mindgame, on which he and five other contestants collectively won \$600,000. He is nicknamed "The Buzzsaw". - Victoria Groce (2022–). Groce is best known for ending the 19-day winning streak of Jeopardy! player David Madden. In other quizzing endeavors, she has won multiple academic competitions and placed within the top ten at the annual World Quiz Championships in multiple years. She is nicknamed "The Queen". - Brandon Blackwell (2022–). Blackwell participated on Jeopardy!, Who Wants to Be a Millionaire, The Million Second Quiz, and the British quiz show University Challenge. He is nicknamed "The Lightning Bolt". - Buzzy Cohen (2022–). Dubbed "Mr. Personality" by Alex Trebek, Cohen is an AAU National Champion, the winner of the 2017 Jeopardy! Tournament of Champions, and a captain for the Jeopardy! All Star Games. He is nicknamed "The Stunner". ## Production ### GSN (2013–2015) The Chase originated in the United Kingdom, premiering on ITV in 2009. As the series became increasingly popular in the UK, Fox ordered two pilot episodes in April 2012, to be taped in London, for possible addition to the network's US programming lineup. Bradley Walsh, presenter of the British version, was featured as the show's host, while UK chaser Mark "The Beast" Labbett and Jeopardy! champion Brad Rutter were the chasers. After Fox passed up the opportunity to add the series to its lineup, Game Show Network (GSN), in conjunction with ITV Studios America, picked up the series with an eight-episode order on April 9, 2013, and announced Brooke Burns as the show's host and Labbett as the chaser on May 29, 2013. Dan Patrick had originally been considered as the host. The first season premiered on August 6, 2013. Even though the show had not yet premiered at the time, the network ordered a second season of eight episodes on July 1, 2013, which premiered on November 5, 2013. Citing the series' status as a "ratings phenom," GSN eventually announced plans to renew it for a third season, which premiered in the summer of 2014. During the third season, the series also premiered its first celebrity edition with celebrity contestants playing for charity. GSN proceeded to renew the series for a fourth season before the end of season three; this new season began airing January 27, 2015. After the seventh episode of the season, the series went on another hiatus; new episodes from the fourth season resumed airing July 16, 2015. The final episode of the fourth season aired on December 11, 2015, concluding the show's original run after four seasons and 51 episodes. Episodes from the first two seasons are available on Netflix.
### ABC (2021–present) On July 20, 2020, it was reported that ABC was casting for a US revival of The Chase. Three days later, Deadline Hollywood reported that the network was in talks to cast Rutter and fellow Jeopardy! champions Ken Jennings and James Holzhauer to serve as the chasers. Holzhauer, Jennings and Rutter had recently competed on Jeopardy! The Greatest of All Time, a primetime Jeopardy! tournament aired on ABC in January 2020, while Holzhauer was also a previous contestant on the GSN version, which led to him appearing on Jeopardy! in 2019. On November 2, 2020, it was reported that ABC had ordered The Chase to series for a nine-episode run, with Sara Haines of ABC's daytime talk show The View as host and Jennings, Holzhauer and Rutter each rotating as the Chaser. On the show, Jennings is nicknamed "The Professor," Holzhauer "The High Roller," and Rutter "The Buzzsaw." The revival premiered on January 7, 2021. The season premiere was dedicated to Alex Trebek, who died on November 8, 2020, at the age of 80, and had hosted Jeopardy! when Jennings, Holzhauer and Rutter were contestants. On April 7, 2021, the revival was renewed for a second season, which premiered on June 6, 2021. In May 2021, it was reported that Labbett would be rejoining the American version of the series as the ABC version's fourth chaser. The second half of the second season premiered on January 5, 2022. On March 15, 2022, the revival was confirmed for a third season, featuring Victoria Groce, Brandon Blackwell, and Buzzy Cohen as new chasers, alongside returning chasers Rutter and Holzhauer. It premiered on May 3, 2022. The second half of the third season premiered on January 5, 2023, and resumed on June 29, 2023. ## Reception ### Critical reception The Chase was generally well received by critics. Michael Tyminski of Manhattan Digest reviewed the series positively, calling it "a breath of fresh air" and praising Burns and Labbett in their respective roles. Tyminski added that while each question's level of difficulty is not always on par with that of other quiz shows such as Jeopardy!, the show avoids a "painfully slow pace." Similarly, John Teti of The A.V. Club called the show a "pretty good adaptation" of its UK counterpart. While he preferred the British version of the show, saying that it had "a more varied cast and stronger production values," Teti felt that the American version "still holds its own." The Chase was also ranked ninth on the list of best new television shows of 2013 compiled by Douglas Pucci of TV Media Insights. The Chase was one of two GSN originals (the other being The American Bible Challenge) to be honored at the 41st Daytime Emmy Awards in 2014 with an Emmy nomination for Outstanding Game Show; Jeopardy! was the eventual winner. Two years later, Burns received an Emmy nomination at the 43rd Daytime Emmy Awards for Outstanding Game Show Host, losing to Craig Ferguson of Celebrity Name Game. Writing for Decider, Joel Keller stated that the ABC version "could be slightly faster-paced, but the excitement of people going head-to-head with three of the best quiz show contestants in American television history is something game show aficionados can really sink their teeth into." Linda Maleh of TV Insider was critical of some elements of the revival, but still noted, "A chance to face off with some of the most well known trivia buffs is a good premise for a game show, it just needs to cut the fat."
The series debuted to 511,000 total viewers during its premiere while maintaining 90% of its audience with 461,000 total viewers during the second episode airing that night. On January 28, 2014, The Chase set a new series high for total viewers and adults 18–49, with 827,000 and 234,000 viewers, respectively. Although the season three premiere fell in the ratings from its series high, earning 494,000 viewers with only 73,000 in the 18–49 demographic, the premiere of the fourth season saw a sizeable rise over the previous season's premiere, earning 749,000 total viewers. With a strong lead-in from Celebrity Wheel of Fortune, the 2021 ABC version premiered to a 0.9/5 rating/share and 6.2 million viewers. The second season of the 2021 version premiered to 4.07 million viewers. The third season of the 2021 version premiered to 2.29 million viewers. ## Mobile version On December 18, 2013, Barnstorm Games released a mobile version of the game for iOS and Android. The only differences between the app and the show are that four choices are presented for questions in the Cash Builder and Final Chase rounds and that no Final Chase is played if all players are caught in their individual chases. The app features Labbett (referred to by his "Beast" nickname) as a simulated chaser and can be played by up to four people.
50,935,426
Boeing CH-47 Chinook in Australian service
1,143,798,037
Australian military heavy-lift helicopters
[ "Aircraft in Royal Australian Air Force service", "Australian Army aviation" ]
The Australian Defence Force has operated Boeing CH-47 Chinook heavy-lift helicopters for most of the period since 1974. Thirty-four of the type have entered Australian service, comprising twelve CH-47C variants, eight CH-47Ds and fourteen CH-47Fs. The helicopters have been operated by both the Royal Australian Air Force (RAAF) and Australian Army. An initial order of eight Chinooks for the RAAF was placed in 1962, but was soon cancelled in favour of more urgent priorities. The Australian military still required helicopters of this type, and twelve CH-47C Chinooks were ordered in 1970. The CH-47s entered service with the RAAF in December 1974. The eleven surviving Chinooks were retired in 1989 as a cost-saving measure, but it was found that the Australian Defence Force's other helicopters could not replace their capabilities. As a result, four of the CH-47Cs were upgraded to CH-47D standard, and returned to service in 1995 with the Australian Army. The Army acquired two more CH-47Ds in 2000 and another pair in 2012. The CH-47Ds were replaced with seven new CH-47F aircraft during 2015, and another three were delivered in 2016. A further four CH-47Fs were ordered in 2021, with two being delivered that year and two others arriving in 2022. The Chinooks have mainly been used to support the Australian Army, though they have performed a wide range of other tasks. Three Chinooks took part in the Iraq War during 2003, when they transported supplies and Australian special forces. A detachment of two Chinooks was also deployed to Afghanistan during the northern spring and summer months each year from 2006 to 2007 and from 2008 to 2013, seeing extensive combat. Two of the CH-47s deployed to Afghanistan were destroyed in crashes. The helicopters have also frequently been assigned to assist recovery efforts following natural disasters and undertook a range of civilian construction tasks while being operated by the RAAF. ## Acquisition During the early 1960s the Australian Army and Royal Australian Air Force (RAAF) considered new types of tactical transport aircraft to replace the RAAF's obsolete Douglas Dakotas. The Army wanted a simple and rugged aircraft that could be purchased immediately, and pressed for the acquisition of de Havilland Canada DHC-4 Caribous. The RAAF regarded the Caribou as inadequate for the intended role and preferred a more sophisticated aircraft, leading to delays in the selection process. This disagreement ended in September 1962. As part of the expansion of the military in response to Indonesia's policy of "confrontation" with its neighbours, the RAAF was directed by the Australian Government to conduct an urgent evaluation of short takeoff and landing aircraft and heavy-lift helicopters that could be purchased to improve the Army's tactical mobility. An Air Staff Requirement was established in October that year for a project to acquire eight heavy-lift helicopters and introduce them into service by 1971. A team of seven RAAF officers headed by Group Captain Charles Read, the director of operational requirements, was immediately dispatched to the United States and assessed the Sikorsky S-61, Boeing Vertol 107-II and CH-47 Chinook helicopters. The team judged the Chinook to be clearly the most suitable of these types, and recommended that several be acquired; this was in line with the Army's preference.
The Government subsequently accepted a recommendation made by the RAAF to acquire a package of twelve Caribou fixed-wing aircraft and eight Chinooks, and placed an order for these aircraft within weeks of the evaluation. The Chinook order was cancelled by the Government when it was learned that it would take several years for the helicopters to be delivered, and the RAAF's orders of Caribous and Bell UH-1 Iroquois tactical transport helicopters were expanded instead. The Australian military continued to consider options to acquire heavy-lift helicopters throughout the 1960s, and a formal program to achieve this goal was initiated by the RAAF in 1969. The Federal Government's Cabinet approved the acquisition of twelve such helicopters in August that year. At this time the helicopters were intended to be deployed to South Vietnam as part of the Australian contribution to the Vietnam War. A team of nine Air Force and Army officers travelled to the United States in October 1969 to evaluate the Sikorsky CH-53 Sea Stallion and Chinook. The team, which was led by Group Captain Peter Raw, recommended that CH-53s be ordered as the type had superior flying characteristics. Senior RAAF officers and the Army were not pleased with this outcome, and the Air Board rejected Raw's report. Read, who was now an air vice-marshal and deputy chief of the air staff, was directed to review the choice of helicopters, and again recommended that Chinooks be acquired. He justified this choice on the grounds that the Chinook could carry more cargo than the CH-53 and was better suited for operations in the mountains of the Australian-administered Territory of Papua and New Guinea. The Government believed that both types met the RAAF's requirements, but a project to acquire Chinooks would be lower risk than purchasing CH-53s. As a result, an order for twelve CH-47C Chinooks was announced on 19 August 1970. It was planned to rotate the helicopters in and out of service, six being available at any time. The order was suspended later in 1970 when a series of engine problems affected the United States Army's CH-47Cs, but was reinstated in March 1972 after these issues were resolved. The total cost of the purchase was \$A37 million. The order made Australia the CH-47's first export customer. The contract for the Chinooks included an offset agreement with Boeing through which the firm gave Australian companies opportunities to manufacture components of both the RAAF's helicopters and those destined for other customers. This was the first of several such agreements that were included in Australian military aircraft procurement contracts during the 1970s and 1980s, the goal being to assist the local defence industry to access international markets. This agreement had some benefits, as several of the participating Australian companies upgraded their factories to manufacture complex elements of the CH-47. The offset contracts for the Chinook concluded in the early 1980s, but the improved equipment and manufacturing processes were employed in the project to build McDonnell Douglas F/A-18 Hornet fighter aircraft in Australia between 1985 and 1990. In line with the RAAF's procurement and support philosophy and the aim of ensuring that the force was self-sufficient, a very large quantity of spare parts for the CH-47Cs was also ordered; in 1993 it was reported that this was the second-largest stockpile of Chinook spare parts after that held by Boeing, and was worth more than \$A120 million. 
It was decided to station the Chinooks at RAAF Base Amberley, Queensland, as it was located at the midpoint between the Army's main field formations that were based on the outskirts of Sydney in New South Wales and the north Queensland city of Townsville. Construction began on support facilities for the helicopters at Amberley shortly after the order for them was confirmed in 1972. ## Royal Australian Air Force service No. 12 Squadron was re-raised at Amberley on 3 September 1973 to operate the Chinooks. This unit had flown bombers between 1939 and 1948 before being renumbered No. 1 Squadron. The twelve CH-47s were officially accepted by the RAAF in the United States on 9 October 1973. They were subsequently shipped to Australia on board an aircraft carrier, and were unloaded at Brisbane on 28 March 1974. No. 12 Squadron began training flights on 8 July 1974, and the unit was declared operational in December that year. The squadron typically had between four and six CH-47Cs operational at any time throughout the type's service, the fleet being rotated through long-term storage at RAAF Base Amberley as planned. In November 1980, eight Chinooks were simultaneously operational for the first time, and a formation flight was conducted to mark the occasion. The CH-47Cs had a crew of four, comprising two pilots, a loadmaster and one other crew member, and could transport up to 33 passengers or 11,129 kilograms (24,535 lb) of cargo. The helicopters were assigned serial numbers A15-001 to A15-012. The Chinooks' main role was to support the Army. The helicopters were used to transport troops, artillery guns, ammunition, fuel and other supplies. They also provided part of the aeromedical evacuation capability available to the Army. While the Chinooks generally operated in Northern Australia, they made frequent deployments to other parts of Australia, and No. 12 Squadron conducted an annual high-altitude flying training exercise in Papua New Guinea. As part of the security measures introduced after the Sydney Hilton Hotel bombing on 13 February 1978, Chinooks were used to transport Australian Prime Minister Malcolm Fraser and several other national leaders from Sydney to Bowral for a Commonwealth Heads of Government Regional Meeting. In August 1980, a CH-47 was flown from Amberley to Malaysia, and used to recover a Royal Malaysian Air Force S-61 helicopter that had crashed in a remote location. This involved a return trip of 14,000 kilometres (8,700 mi), which was believed to have been the longest distance a helicopter had flown up to that time and remains the longest flight to have been conducted by a RAAF helicopter. During their RAAF service, the Chinooks also undertook a range of non-military tasks. The helicopters frequently formed part of the Australian Defence Force's response to natural disasters, including by delivering food for people and livestock cut off by floods. They were also used for civilian construction tasks such as emplacing lighthouses and carrying air conditioning equipment to the tops of tall buildings. On two occasions Chinooks supported Queensland Police Service drug eradication efforts in remote parts of the state by transporting fuel for RAAF Iroquois helicopters and carrying seized narcotics. In August 1981, two CH-47s lifted containers from the cargo ship Waigani Express to enable the vessel to be refloated after it ran aground in the Torres Strait.
A similar operation was undertaken to free the Anro Asia when it ran aground near Caloundra, Queensland, in November the same year. Another unusual task was conducted in December 1981 when a Chinook transported two bulldozers onto a grounded iron ore carrier near Port Hedland, Western Australia, so that they could be used to reposition the ship's load. In May 1989 a Chinook transported an 8,000-kilogram (18,000 lb) section of a memorial to the pioneering aviator Lawrence Hargrave onto Mount Keira near Wollongong. The RAAF's Chinook fleet suffered two serious accidents. On 26 June 1975, A15-011 crashed when one of its engine turbines disintegrated; none of its crew were injured. The helicopter was initially assessed as a write-off, but No. 3 Aircraft Depot was later assigned responsibility for repairing it. The maintenance unit lacked experience with major helicopter repairs, and A15-011 did not reenter service until 21 May 1981. On 4 February 1985, A15-001 struck power lines and crashed into Perseverance Dam near Toowoomba, Queensland, while undertaking a navigation exercise. The helicopter's pilot, an exchange officer from the Royal Air Force, was killed and the other three aircrew suffered minor injuries. The helicopter was written off and used as a fire training aid at Amberley. A court of inquiry found that A15-001's crew had been unaware of the presence of power lines in the area as they were not marked on the maps used to plan the flight, and were difficult to see from a moving helicopter. The inquiry also judged that the mission had been inadequately planned, and recommended that No. 12 Squadron update the master map used for preparing operations in the Amberley region to ensure that it included all flying hazards. In November 1986 the Chiefs of Staff Committee and Minister for Defence Kim Beazley decided to transfer all of the RAAF's Iroquois and Sikorsky S-70 Black Hawk battlefield helicopters to the Army. The Army did not want the Chinooks due to their high operating costs, and they remained with the RAAF at this time. The reduction of the RAAF's helicopter fleet increased the cost of operating the Chinooks due to the loss of economies of scale, and made it more difficult to find aircrew for No. 12 Squadron. The RAAF subsequently proposed transferring the Chinooks, but the Army remained unwilling to accept them. The problems the Army was experiencing keeping the Iroquois and Black Hawks operational may have influenced this position, the service being reluctant to take on an even more complex type. The RAAF and Army jointly decided to withdraw the Chinooks from service in May 1989. This decision was made to reduce costs, the Army believing that the Black Hawks would provide sufficient airlift capability. No. 12 Squadron ceased flying on 30 June 1989, and was disbanded on 25 August that year. The CH-47Cs were placed in storage at Amberley. ## Australian Army service ### CH-47D Chinook While it was intended to sell the Chinooks after they were withdrawn from service, experience soon demonstrated that the Black Hawks were unable to fully replace them. In particular, it was found that heavy-lift helicopters were needed to transport fuel supplies for the Black Hawks during exercises and operations. As a result, plans to sell the Chinooks were put on hold in late 1989, and the Army and RAAF began investigating options to reactivate them.
The 1991 Force Structure Review recommended that between four and six Chinooks—preferably upgraded to CH-47D standard—be reactivated to support the Black Hawks. A deal to upgrade several of the Chinooks was reached in June 1993. Under this arrangement, seven of the surviving CH-47Cs were sold to the US Army for \$A40 million, the funds being used to partly cover the cost of upgrading the remaining four to CH-47D standard. The project's total cost was \$A62 million, of which \$A42 million was required to upgrade the four helicopters and the remainder for spare parts, administration and new facilities for the Chinooks at Townsville. It was also decided at this time to transfer the Chinooks to the Australian Army, as the RAAF no longer had significant expertise in operating the type and such a change would concentrate all the ADF's battlefield helicopters with the same service. The CH-47D variant of the Chinook was based on the C variant's airframe, and had improved engines and rotors, as well as upgraded avionics. These modifications resulted in the type having superior performance as well as lower operating costs. All eleven CH-47Cs were shipped to the United States in September 1993, and the upgraded helicopters returned to Australia in 1995. The four CH-47Ds upgraded were the former A15-002, 003, 004, and 006, now renumbered A15-102, 103, 104, and 106 respectively. They were assigned to C Squadron of the 5th Aviation Regiment, which was based at Townsville, and also comprised two squadrons equipped with Black Hawks as well as six Iroquois helicopters used as gunships. The Regiment's experiences during the 1990s demonstrated that four Chinooks were not sufficient to meet the ADF's needs, leading to an order for two newly built CH-47Ds in 1998. These helicopters were delivered in 2001, and designated A15-201 and A15-202. Following their transfer to the Army, the Chinooks were used in similar roles to those they had undertaken in RAAF service. The first operational deployment of the Army Chinooks began in October 1997, when two of the helicopters and three Black Hawks that were in Papua New Guinea as part of a training exercise were tasked with delivering food supplies to the highlands of the country following a severe drought. The Chinooks were also used to transport fuel supplies for the other ADF aircraft and helicopters involved in this effort. At this time, the deployment of two Chinooks was the largest possible given the need to reserve other CH-47s for training tasks and rotate the fleet through maintenance periods. The Chinooks returned to Australia in March 1998. None of the CH-47s were available to support the Australian-led INTERFET peacekeeping deployment to East Timor in 1999 as the fleet had been grounded due to systematic problems with their transmissions. United States Marine Corps CH-53s and Mil Mi-8 and Mil Mi-26 helicopters chartered from Bulgarian and Russian companies were used instead. In 2003 a detachment of three CH-47Ds was deployed to the Middle East as part of the Australian contribution to the invasion of Iraq. The detachment formed part of the Special Operations Task Group, and operated from Jordan to transport supplies and personnel for Australian special forces units. Two histories published in 2004 stated that the helicopters entered Western Iraq throughout the initial stage of the conflict. 
A 2005 history also stated that one of the tasks undertaken by the detachment was flying commandos from the 4th Battalion, Royal Australian Regiment to Al Asad Airbase within Iraq after the facility was captured by Special Air Service Regiment units. However, an uncompleted internal Army history of the deployment of Australian forces to the Iraq War—released in 2017 following a freedom of information request—stated that as the Chinooks were not equipped with missile countermeasure systems and their pilots had not been trained to insert special forces behind enemy lines, they had been prohibited from entering Iraq and remained in Jordan throughout the conflict. This history stated that "it is not possible to explain the rationale" for the deployment of the CH-47s given their unsuitability for operations within Iraq, and judged that the main achievement of the detachment had been to free up British and American helicopters for other tasks. During 2005 the Australian Government decided to deploy Chinooks to Afghanistan as part of the Australian forces in the country. The need to prepare for this task contributed to a decision in October that year to dispatch Black Hawks rather than Chinooks to Pakistan as part of Australia's contribution to the international relief efforts which followed the 2005 Kashmir earthquake, despite the Chinooks being better suited for operations at the high altitudes affected by the disaster. In November 2005 the Government authorised a program of urgent upgrades to the CH-47Ds to improve their combat readiness ahead of being deployed to Afghanistan. The upgrades included fitting the helicopters with extra armour as well as new electronic warfare and communications systems. The helicopters' machine guns were also replaced with M134D miniguns. A longer-term plan to upgrade the helicopters, designated Phase 5 of project AIR 9000, was also in place at this time. This was to involve two sub-phases: under Phase 5A new engines were purchased in December 2004, and were scheduled to be fitted in late 2006. It was also planned to put the helicopters through a mid-life update as part of Phase 5B, enabling them to remain in service until around 2025. Following the RAAF's acquisition of Boeing C-17 Globemaster III large transport aircraft in 2007, the Chinooks were transported by air on occasion. However, it took one and a half days to prepare the CH-47Ds for air transport. A detachment of two Chinooks operated in Afghanistan during 2006 to 2007 and 2008 to 2013. The detachment was designated the Aviation Support Element during 2006 and 2007, and renamed the Rotary Wing Group in 2008. The initial detachment arrived at Kandahar International Airport in March 2006, and was tasked with supporting the Australian Special Forces Task Group in the country. The upgrades the helicopters had received proved successful, and allowed them to operate in combat alongside other Coalition CH-47s. After the Special Forces Task Group was withdrawn in September 2006 the helicopters remained in the country and were used to support Coalition forces, with a particular emphasis on the Australian units located in Urozgan Province. The detachment was withdrawn to Australia in February 2007, and did not deploy again until February 2008. During this period all six helicopters received further upgrades, which included new engines and blue force tracker equipment. During subsequent years the detachment was withdrawn to Australia over the Afghan winters, and redeployed each northern spring. 
As the Chinooks' tasking was controlled by the International Security Assistance Force, the ADF chartered a Russian Mil Mi-26 between 2010 and 2013 to provide the Australian forces in Afghanistan with a dedicated heavy-lift helicopter. By the end of the final rotation on 14 September 2013, the helicopters had flown more than 6,000 hours in combat and transported almost 40,000 personnel. Preparing for and sustaining the Rotary Wing Group rotations absorbed most of C Squadron's resources throughout this period, and Chinooks were rarely available for other Army training or operational tasks. Two Australian CH-47Ds were destroyed in Afghanistan. On 30 May 2011, A15-102 crashed in Zabul Province, resulting in the death of an Army unmanned aerial vehicle pilot who was travelling on board as a passenger. As it was impractical to recover the helicopter, it was destroyed by Coalition forces. The official inquiry into the crash found that it was caused by a known issue in which Chinooks suffered uncommanded pitch oscillations while flying through dense air, and that the aircrew had not been adequately trained to prevent such incidents. A15-103 was written off following a hard landing in Kandahar Province on 22 June 2012; one of the crew members suffered minor injuries. Both of the Chinooks at Kandahar International Airport in April 2013 also suffered significant damage when the airport was struck by a severe hail storm. Two ex-US Army CH-47Ds were purchased in December 2011 to replace A15-102, and arrived in Australia in January 2012; these helicopters were designated A15-151 and A15-152. ### CH-47F Chinook A decision by the US Army in the mid-2000s to replace all its CH-47Ds with new-build CH-47Fs by 2017 endangered the viability of the Australian Chinooks. This was because the Australian Army's arrangements for the logistical support of its small number of CH-47Ds were heavily leveraged off those for the US Army's large fleet. In response, the Australian Army also established a project to acquire CH-47F Chinooks in the mid-2000s. The Australian Government provided initial approval for a CH-47F purchase in September 2007. As part of this decision, the Government chose to procure the helicopters through the US Government's Foreign Military Sales program to minimise potential risks to the schedule and cost of the project. Final approval to acquire CH-47Fs was granted by the Australian Government in February 2010, with seven of the helicopters being ordered. A contract was signed on 19 March that year. The decision to increase the fleet size from six to seven was made to improve the robustness of the Army's helicopter capability, including by reducing the impact of the loss of any of the helicopters. The total cost of the CH-47F project, including the construction of new facilities and the acquisition of two flight simulators, was \$A631 million. The CH-47F has generally similar performance to the CH-47D, and was designed to be easier to maintain and deploy. Its fuselage comprises a small number of machined components, rather than the many fabricated sections of sheet metal used in the D variant, which reduces vibration and structural cracking. The F variant also includes more advanced avionics as well as design features that enable the helicopters to be more quickly prepared for transport within a cargo aircraft.
The initial seven Australian CH-47Fs are fitted with rotor brakes and other equipment to better enable them to operate from the Royal Australian Navy's Canberra class landing helicopter dock vessels, but are otherwise identical to those operated by the US Army. Australia's first two CH-47Fs were delivered in early April 2015, eight months later than originally expected, and entered service with the 5th Aviation Regiment on 5 May that year. At this time it was planned for C Squadron to be fully operational with the new Chinooks by January 2017. The seventh CH-47F was delivered three weeks ahead of schedule in September 2015. These helicopters were designated A15-301 to A15-307. An urgent order was placed in March 2016 for a further three CH-47Fs for \$US150 million, including spare parts, related equipment and some support costs. The ADF had previously intended to expand the CH-47F fleet at a later date, but the order was placed at short notice to use funds made available by an under-spend on other Defence capabilities. All three helicopters were delivered in June 2016, two and a half months earlier than planned. The Chinooks were designated A15-308 to A15-310. These helicopters are not fitted with rotor brakes as they were taken directly from the production line of helicopters for the US Army. As of 2017, it was planned to fit these helicopters with rotor brakes by 2020. C Squadron's aircrew undertook training to prepare them to operate the new type using the two flight simulators, and the CH-47F fleet achieved initial operating capability in April 2016. During 2016, the CH-47s were approved to operate from the Canberra class vessels after trials proved successful. The first seven CH-47s reached full operating capability status in July 2017. C Squadron's operations were constrained at this time by personnel shortages and a backlog of maintenance tasks which at one point led to four of the helicopters simultaneously being out of service for deep maintenance. These constraints are expected to delay full operating capability status for the entire CH-47F fleet to 2020. The CH-47Ds were retired as they became due for deep maintenance checks, the last of the type leaving service in September 2016. Due to the many common components between the D and F variants, the helicopters were stripped for spare parts before being preserved in Australia. A15-202 was handed over to the Australian War Memorial in April 2016, A15-104 will be displayed at the Australian Army Flying Museum and the former Air Force helicopter A15-106 was transferred to the RAAF Museum. The other three surviving CH-47Ds were retained by the Army for non-flying training, A15-151 and A15-152 for general and special forces training respectively, and A15-201 as a maintenance systems training airframe. The 2016 Defence White Paper and its supporting documentation stated that the CH-47Fs will receive modifications to better enable them to perform aeromedical evacuation tasks by the 2025–26 financial year, and that it is intended to regularly upgrade the helicopters over time so that they can continue to be supported through the US military's logistics system. This will involve keeping pace with key changes introduced to the American CH-47F fleet, though there will be options to modify the helicopters to meet Australian requirements. As of 2017, the ADF intended to retain the CH-47Fs until 2040. The US Army has indicated that it will operate the type until the 2060s, which may lead to Australia doing the same.
The first overseas deployment of Australian CH-47Fs commenced in early March 2018. On 8 March, the Australian Government announced that three of the helicopters would be dispatched to Papua New Guinea to assist the relief efforts for victims of the 2018 Papua New Guinea earthquake. The Chinooks commenced operations in the country on 11 March, and the deployment concluded in April that year. A detachment of several Chinooks was deployed to RAAF Base East Sale during January 2020 as part of the ADF's response to the 2019–20 Australian bushfire season. During this deployment the helicopters transported evacuees and a wide range of supplies and equipment. The CH-47F fleet flew for over 400 hours during the month, the highest number of flying hours achieved by Australian Chinooks since the type entered service. In April 2021 the United States Department of State approved a potential sale of four CH-47Fs from US Army holdings to Australia. The Australian Government's interest in buying additional Chinooks had not been previously announced, and the Australian Defence Business Review has reported that it was partly motivated by "the low availability of the Army’s MRH-90 Taipan helicopter fleet". The order was confirmed on 8 July 2021, at a price of \$595 million. Two of the helicopters were delivered to Australia early that month on board a United States Air Force Lockheed C-5 Galaxy transport plane after the US Army agreed to transfer two of their aircraft. The other two were delivered in June 2022.
26,430,438
Hope (Watts)
1,170,699,090
1886 painting by George Frederic Watts
[ "1886 paintings", "19th-century allegorical paintings", "Allegorical paintings by English artists", "Collection of the Tate galleries", "Hope", "Musical instruments in art", "Paintings by George Frederic Watts", "Symbolist paintings" ]
Hope is a Symbolist oil painting by the English painter George Frederic Watts, who completed the first two versions in 1886. Radically different from previous treatments of the subject, it shows a lone blindfolded female figure sitting on a globe, playing a lyre that has only a single string remaining. The background is almost blank, its only visible feature a single star. Watts intentionally used symbolism not traditionally associated with hope to make the painting's meaning ambiguous. While his use of colour in Hope was greatly admired, at the time of its exhibition many critics disliked the painting. Hope proved popular with the Aesthetic Movement, who considered beauty the primary purpose of art and were unconcerned by the ambiguity of its message. Reproductions in platinotype, and later cheap carbon prints, soon began to be sold. Although Watts received many offers to buy the painting, he had agreed to donate his most important works to the nation and felt it would be inappropriate not to include Hope. Consequently, later in 1886 Watts and his assistant Cecil Schott painted a second version. On its completion Watts sold the original and donated the copy to the South Kensington Museum (now the Victoria and Albert Museum); thus, this second version is better known than the original. He painted at least two further versions for private sale. As cheap reproductions of Hope, and from 1908 high-quality prints, began to circulate in large quantities, it became a widely popular image. President Theodore Roosevelt displayed a copy at his Sagamore Hill home in New York; reproductions circulated worldwide; and a 1922 film depicted Watts's creation of the painting and an imagined story behind it. By this time Hope was coming to seem outdated and sentimental, and Watts was rapidly falling out of fashion. In 1938 the Tate Gallery ceased to keep their collection of Watts's works on permanent display. Despite the decline in Watts's popularity, Hope remained influential. Martin Luther King Jr. based a 1959 sermon, later named Shattered Dreams, on the theme of the painting, as did Jeremiah Wright in Chicago in 1990. Among the congregation for the latter was the young Barack Obama, who was deeply moved. Obama took "The Audacity of Hope" as the theme of his 2004 Democratic National Convention keynote address, and as the title of his 2006 book; he based his successful 2008 presidential campaign around the theme of "Hope". ## Background George Frederic Watts was born in London in 1817, the son of a musical instrument manufacturer. His two brothers died in 1823 and his mother in 1826, giving Watts an obsession with death throughout his life. Meanwhile, his father's strict evangelical Christianity led to both a deep knowledge of the Bible and a strong dislike of organised religion. Watts was apprenticed as a sculptor at the age of 10, and six years later was proficient enough as an artist to earn a living as a portrait painter and cricket illustrator. Aged 18 he gained admission to the Royal Academy schools, although he disliked their methods and his attendance was intermittent. In 1837 Watts was commissioned by Greek shipping magnate Alexander Constantine Ionides to copy a portrait of his father by Samuel Lane; Ionides preferred Watts's version to the original and immediately commissioned two more paintings from him, allowing Watts to devote himself full-time to painting. In 1843 he travelled to Italy where he remained for four years.
On his return to London he suffered from depression and painted a number of notably gloomy works. His skills were widely celebrated, and in 1856 he decided to devote himself to portrait painting. His portraits were very highly regarded. In 1867 he was elected a Royal Academician, at the time the highest honour available to an artist, although he rapidly became disillusioned with the culture of the Royal Academy. From 1870 onwards he became widely renowned as a painter of allegorical and mythical subjects; by this time, he was one of the most highly regarded artists in the world. In 1881 he added a glass-roofed gallery to his home at Little Holland House, which was open to the public at weekends, further increasing his fame. In 1884 a selection of 50 of his works was shown at New York's Metropolitan Museum of Art. ## Subject Hope is traditionally considered by Christians to be a theological virtue (a virtue associated with the grace of God, rather than with work or self-improvement). Since antiquity, artistic representations of the personification have depicted her as a young woman, typically holding a flower or an anchor. During Watts's lifetime, European culture had begun to question the concept of hope. A new school of philosophy at the time, based on the thinking of Friedrich Nietzsche, saw hope as a negative attribute that encouraged humanity to expend its energies on futile efforts. The Long Depression of the 1870s wrecked both the economy and confidence of Britain, and Watts felt that the encroaching mechanisation of daily life, and the importance of material prosperity to Britain's increasingly dominant middle class, were making modern life increasingly soulless. In late 1885 Watts's adopted daughter Blanche Clogstoun had just lost her infant daughter Isabel to illness, and Watts wrote to a friend that "I see nothing but uncertainty, contention, conflict, beliefs unsettled and nothing established in place of them." Watts set out to reimagine the depiction of Hope in a society in which economic decline and environmental deterioration were increasingly leading people to question the notion of progress and the existence of God. Other artists of the period had already begun to experiment with alternative methods of depicting Hope in art. Some, such as the up-and-coming young painter Evelyn De Morgan, drew on the imagery of Psalm 137 and its description of exiled musicians refusing to play for their captors. Meanwhile, Edward Burne-Jones, a friend of Watts who specialised in painting mythological and allegorical topics, in 1871 completed the cartoon for a planned stained glass window depicting Hope for St Margaret's Church in Hopton-on-Sea. Burne-Jones's design showed Hope upright and defiant in a prison cell, holding a flowering rod. Watts generally worked on his allegorical paintings on and off over an extended period, but it appears that Hope was completed relatively quickly. He left no notes regarding his creation of the work, but his close friend Emilie Barrington noted that "a beautiful friend of mine", almost certainly Dorothy Dene, modelled for Hope in 1885. (Dorothy Dene, née Ada Alice Pullen, was better known as a model for Frederic Leighton but is known to have also modelled for Watts in this period. Although the facial features of Hope are obscured in Watts's painting, her distinctive jawline and hair are both recognisable.) By the end of 1885 Watts had settled on the design of the painting.
## Composition > Hope sitting on a globe, with bandaged eyes playing on a lyre which has all the strings broken but one out of which poor little tinkle she is trying to get all the music possible, listening with all her might to the little sound—do you like the idea? Hope shows its central character alone, with no other human figures visible and without her traditional fellow virtues, Love (also known as Charity) and Faith. She is dressed in classical costume, based on the Elgin Marbles; Nicholas Tromans of Kingston University speculated that her Greek style of clothing was intentionally chosen to evoke the ambivalent nature of hope in Greek mythology over the certainties of Christian tradition. Her pose is based on that of Michelangelo's Night, in an intentionally strained position. She sits on a small, imperfect orange globe with wisps of cloud around its circumference, against an almost blank mottled blue background. The figure is illuminated faintly from behind, as if by starlight, and also directly from the front as if the observer is the source of light. Watts's use of light and tone avoids the clear definition of shapes, creating a shimmering and dissolving effect more typically associated with pastel work than with oil painting. The design bears close similarities to Burne-Jones's Luna (painted in watercolour 1870 and in oils c. 1872–1875), which also shows a female figure in classical drapery on a globe surrounded by clouds. As with many of Watts's works the style of the painting was rooted in the European Symbolist movement, but also drew heavily on the Venetian school of painting. Other works which have been suggested as possible influences on Hope include Burne-Jones's The Wheel of Fortune (c. 1870), Albert Moore's Beads (1875), Dante Gabriel Rossetti's A Sea–Spell (1877), and The Throne of Saturn by Elihu Vedder (1884). Hope is closely related to Idle Child of Fancy, completed by Watts in 1885, which also shows a personification of one of the traditional virtues (in this case Love) sitting on a cloud-shrouded globe. In traditional depictions of the virtues, Love was shown blindfolded while Hope was not; in Hope and Idle Child Watts reversed this imagery, depicting Love looking straight ahead and Hope as blind. It is believed to be the first time a European artist depicted Hope as blind. The figure of Hope holds a broken lyre, based on an ancient Athenian wood and tortoiseshell lyre then on display in the British Museum. Although broken musical instruments were a frequently occurring motif in European art, they had never previously been associated with Hope. Hope's lyre has only a single string remaining, on which she attempts to play. She strains to listen to the sound of the single unbroken string, symbolising both persistence and fragility, and the closeness of hope and despair. Watts had recently shown interest in the idea of a continuity between the visual arts and music, and had previously made use of musical instruments as a way to invigorate the subjects of his portraits. Above the central figure shines a single small star at the very top of the picture, serving as a symbol of further hope beyond that of the central figure herself. The distance of the star from the central figure, and the fact that it is outside her field of vision even were she not blindfolded, suggests an ambiguity. 
It provides an uplifting message to the viewer that things are not as bad for the central character as she believes, and introduces a further element of pathos in that she is unaware of hope existing elsewhere. ## Reception > Hope's dress is of a dark aërial hue, and her figure is revealed to us by a wan light from the front and the paler light of stars in the sky beyond. This exquisite illumination fuses, so to say, the colours, substance and even the forms and contours of the whole, and suggests a vague, dreamlike magic, the charm of which assorts with the subject, and, as in all great art, imparts grace to the expression of the theme. > Deary! a young woman tying herself into a knot and trying to perform the chair-trick. She is balanced on a pantomime Dutch cheese, which is floating in stage muslin of uncertain age and colour. The girl would be none the worse for a warm bath. Although the Royal Academy Summer Exhibition was traditionally the most prestigious venue for English artists to display their new material, Watts chose to exhibit Hope at the smaller Grosvenor Gallery. In 1882 the Grosvenor Gallery had staged a retrospective exhibition of Watts's work and he felt an attachment to the venue. Also, at this time the Grosvenor Gallery was generally more receptive than the Royal Academy to experimentation. Hope was given the prime spot in the exhibition, in the centre of the gallery's longest wall. Watts's use of colour was an immediate success with critics; even those who otherwise disliked the piece were impressed by his skilful use of colour, tone and harmony. The painting's subject and Watts's technique, however, immediately drew criticism from the press. The Times described it as "one of the most interesting of [Watts's] recent pictures" but observed that "in point of colour Mr. Watts has seldom given us anything more lovely and delicate ... and there is great beauty in the drawing, though it must be owned that the angles are too many and too marked". The Portfolio praised Watts's Repentance of Cain but thought Hope "a poetic but somewhat inferior composition". Theodore Child of The Fortnightly Review dismissed Hope as "a ghastly and apocalyptic allegory", while the highly regarded critic Claude Phillips considered it "an exquisite concept, insufficiently realised by a failed execution". Despite its initial rejection by critics, Hope proved immediately popular with many in the then-influential Aesthetic Movement, who considered beauty the primary purpose of art. Watts, who saw art as a medium for moral messages, strongly disliked the doctrine of "art for art's sake", but the followers of Aestheticism greatly admired Watts's use of colour and symbolism in Hope. Soon after its exhibition poems based on the image began to be published, and platinotype reproductions—at the time the photographic process best able to capture subtle variations in tone—became popular. The first platinotype reproductions of Hope were produced by Henry Herschel Hay Cameron, son of Watts's close friend Julia Margaret Cameron. ### Religious interpretations Because Hope was impossible to read using the traditional interpretation of symbolism in painting, and because Watts intentionally left its meaning ambiguous, the bleaker interpretations were almost immediately challenged by Christian thinkers following its exhibition. Scottish theologian P. T. 
Forsyth felt that Hope was a companion to Watts's 1885 Mammon in depicting false gods and the perils awaiting those who attempted to follow them in the absence of faith. Forsyth wrote that the image conveyed the absence of faith, that a loss of faith placed too great a burden on hope alone, and that the message of the painting was that in the godless world created by technology, Hope has intentionally blinded herself and listens only to the music she can make on her own. Forsyth's interpretation, that the central figure is not herself a personification of hope but a representation of humanity too horrified at the world it has created to look at it, instead deliberately blinding itself and living in hope, became popular with other theologians. Watts's supporters claimed that the image of Hope had near-miraculous redemptive powers. In his 1908 work Sermons in Art by the Great Masters, Stoke Newington Presbyterian minister James Burns wrote of a woman who had been walking to the Thames with the intention of suicide, but had passed the image of Hope in a shop window and been so inspired by the sight of it that rather than attempting suicide she instead emigrated to Australia. In 1918 Watts's biographer Henry William Shrewsbury wrote of "a poor girl, character-broken and heart-broken, wandering about the streets of London with a growing feeling that nothing remained but to destroy herself" seeing a photograph of Hope and using the last of her money to buy it; "looking at it every day, the message sank into her soul, and she fought her way back to a life of purity and honour". When music hall star Marie Lloyd died in 1922 after a life beset with alcohol, illness and depression, it was noted that among her possessions was a print of Hope; one reporter observed that amid her other belongings it looked "like a good deed in a naughty world". Watts himself was ambivalent when questioned about the religious significance of the image, saying that "I made Hope blind so expecting nothing", although after his death his widow Mary Seton Watts wrote that the message of the painting was that "Faith must be the companion of Hope. Faith is the substance, the assurance of things hoped for, because it is the evidence of things not seen." Malcolm Warner, curator of the Yale Center for British Art, interpreted the work differently, writing in 1996 that "the quiet sound of the lyre's single string is all that is left of the full music of religious faith; those who still listen are blindfolded in the sense that, even if real reasons for Hope exist, they cannot see them; Hope remains a virtue, but in the age of scientific materialism a weak and ambiguous one". In 1900, shortly before his death, Watts again painted the character in Faith, Hope and Charity (now in the Hugh Lane Gallery, Dublin). This shows her smiling and with her lyre restrung, working with Love to persuade a blood-stained Faith to sheathe her sword; Tromans writes that "the message would appear to be that if Faith is going to resume her importance for humanity ... it will have to be in a role deferential to the more constant Love and Hope." ## Second version By the time Hope was exhibited, Watts had already committed himself to donating his most significant works to the nation, and although he received multiple offers for the painting, he thought it inappropriate not to include Hope in the donation, since it was already considered one of his most important pictures. 
In mid-1886 Watts and his assistant Cecil Schott painted a duplicate of the piece, with the intention that this duplicate be donated to the nation, allowing him to sell the original. Although the composition of this second painting is identical, it is radically different in feel. The central figure is smaller in relation to the globe, and the colours darker and less sumptuous, giving it an intentionally gloomier feel than the original. In late 1886 this second version was one of nine paintings donated to the South Kensington Museum (now the Victoria and Albert Museum) in the first instalment of Watts's gift to the nation. Meanwhile, the original was briefly displayed in Nottingham before being sold to the steam tractor entrepreneur Joseph Ruston in 1887. Its whereabouts were long unknown until 1986, when it was auctioned at Sotheby's for £869,000, 100 years after its first exhibition. On their arrival at the South Kensington Museum, the nine works donated by Watts were hung on the staircase leading to the library, but Hope proved a popular loan to other institutions as a symbol of British art. At the Royal Jubilee Exhibition of 1887 in Manchester, an entire wall was dedicated to the works of Watts. Hope, only recently completed but already the most famous of Watts's works, was placed at the centre of this display. It was then exhibited at the 1888 Melbourne Centennial Exhibition and the 1889 Exposition Universelle in Paris, before being moved to Munich for display at the Glaspalast. In 1897 it was one of the 17 Watts works transferred to the newly created National Gallery of British Art (the Tate Gallery, now Tate Britain); at the time, Watts was so highly regarded that an entire room of the new museum was dedicated to his works. The Tate Gallery considered Hope one of the highlights of its collection and did not continue the South Kensington Museum's practice of lending the piece to overseas exhibitions. ### Other painted versions Needing funds to pay for his new house and studio in Compton, Surrey (the Watts Gallery), Watts produced further copies of Hope for private sale. A small 66 by 50.8 cm (26.0 by 20.0 in) version was sold to a private collector in Manchester at some point between 1886 and 1890, and was exhibited at the Free Picture Exhibition in Canning Town (an annual event organised by Samuel Barnett and Henrietta Barnett in an effort to bring beauty into the lives of the poor) in 1897. It is now in the Iziko South African National Gallery, Cape Town. Another version, in which Watts included a rainbow surrounding the central figure to reduce the bleakness of the image, was bought by Richard Budgett, a widower whose wife had been a great admirer of Watts, and remained in the possession of the family until 1997. Watts gave his initial oil sketch to Frederic Leighton; it has been in the collection of the Walker Art Gallery, Liverpool since 1923. Watts is thought to have painted at least one further version, but its location is unknown. ## Legacy Although Victorian painting styles went out of fashion soon after Watts's death, Hope has remained extremely influential. Mark Bills, curator of the Watts Gallery, described Hope as "the most famous and influential" of all Watts's paintings and "a jewel of the late nineteenth-century Symbolist movement". In 1889 socialist agitator John Burns visited Samuel and Henrietta Barnett in Whitechapel, and saw a photograph of Hope among their possessions. 
After Henrietta explained its significance to him, the coalition of workers' groups which was to become the Labour Party made efforts to recruit Watts. Although determined to stay outside of politics, Watts wrote in support of striking busmen in 1891, and in 1895 donated a chalk reproduction of Hope to the Missions to Seamen in Poplar in support of London dock workers. (This is believed to be the red chalk version of Hope in the Watts Gallery.) The passivity of Watts's depiction of Hope drew criticism from some within the socialist movement, who saw her as embodying an unwillingness to commit to action. The prominent art critic Charles Lewis Hind also loathed this passivity, writing in 1902 that "It is not a work that the robust admire, but the solitary and the sad find comfort in it. It reflects the pretty, pitiable, forlorn hope of those who are cursed with a low vitality, and poor physical health". Henry Cameron's platinotype reproductions of the first version of Hope had circulated since the painting's exhibition, but were slow to produce and expensive to buy. From the early 1890s photographer Frederick Hollyer produced large numbers of cheap platinotype reproductions of the second version, particularly after Hollyer formalised his business relationship with Watts in 1896. Hollyer sold the reproductions both via printsellers around the country and directly via catalogue, and the print proved extremely popular. ### Artistic influence In 1895 Frederic Leighton based his painting Flaming June, which also depicted Dorothy Dene, on the composition of Watts's Hope. Flaming June kept the central figure's pose, but showed her as relaxed and sleeping. Dene had worked closely with Leighton since the 1880s, and was left the then huge sum of £5,000 in Leighton's will when he died the following year. By this time, Hope was becoming an icon of English popular culture, propelled by the wide distribution of reproductions; in 1898, a year after the opening of the Tate Gallery, its director noted that Hope was one of the two most popular works in its collection among students. As the 20th century began, the increasingly influential Modernist movement drew its inspiration from Paul Cézanne and had little regard for 19th-century British painting. Watts attracted particular dislike from English critics, and Hope came to be seen as a passing fad, emblematic of the excessive sentimentality and poor taste of the late 19th and early 20th centuries. By 1904 author E. Nesbit used Hope as a symbol of poor taste in her short story The Flying Lodger, describing it as "a blind girl sitting on an orange", a description which would later be popularised by Agatha Christie in her 1942 novel Five Little Pigs (also known as Murder in Retrospect). Although Watts's work was seen as outdated and sentimental by the English Modernist movement, his experimentation with Symbolism and Expressionism drew respect from the European Modernists, notably the young Pablo Picasso, who echoed Hope's intentionally distorted features and broad sweeps of blue in The Old Guitarist (1903–1904). Despite Watts's fading reputation at home, by the time of his death in 1904 Hope had become a globally recognised image. Reproductions circulated in cultures as diverse as Japan, Australia and Poland, and Theodore Roosevelt, President of the United States, displayed a reproduction in his Summer White House at Sagamore Hill. 
By 1916, Hope was well known enough in the United States that the stage directions for Angelina Weld Grimké's Rachel could explicitly call for the addition of a copy of Hope to the set to suggest improvements to the home over the passage of time. Some were beginning to see it as embodying sentimentality and bad taste, but Hope remained popular with the English public. In 1905 The Strand Magazine noted that it was the most popular picture in the Tate Gallery, and remarked that "there are few print-sellers who fail to exhibit it in their windows." After Watts's death the Autotype Company purchased from Mary Seton Watts the rights to make carbon print copies of Hope, making reproductions of the image affordable for poorer households, and in 1908 engraver Emery Walker began to sell full-colour photogravure prints of Hope, the first publicly available high-quality colour reproductions of the image. In 1922 the American film Hope, directed by Legaren à Hiller and starring Mary Astor and Ralph Faulkner, was based on the imagined origins of the painting. In it Joan, a fisherman's wife, is treated poorly by the rest of her village in her husband's absence, and has only the hope of his return to cling to. His ship returns but bursts into flames, before he is washed up safe and well on shore. The story is interspersed with scenes of Watts explaining the story to a model, and with stills of the painting. By the time the film was released, the fad for prints of Hope was long over, to the extent that references to it had become verbal shorthand for authors and artists wanting to indicate that a scene was set in the 1900s–1910s. Watts's reputation continued to fade as artistic tastes changed, and in 1938 the Tate Gallery removed its collection of Watts's works from permanent display. ### Later influence Despite the steep decline in Watts's popularity, Hope continued to hold a place in popular culture, and there remained those who considered it a major work. When the Tate Gallery held an exhibition of its Watts holdings in 1954, trade unionist and left-wing M.P. Percy Collick urged "Labour stalwarts" to attend the exhibition, reportedly recounting in private that he had recently met a Viennese Jewish woman who during "the terrors of the Nazi War" had drawn "renewed faith and hope" from her photographic copy. Meanwhile, Shattered Dreams, an influential 1959 sermon by Martin Luther King Jr., took Hope as a symbol of frustrated ambition and the knowledge that few people live to see their wishes fulfilled, arguing that "shattered dreams are a hallmark of our mortal life" and warning against retreating into apathetic cynicism, a fatalistic belief in God's will or escapist fantasy in response to failure. Myths continued to grow about supposed beliefs in the redemptive powers of Hope, and in the 1970s a rumour began to spread that after Israel defeated Egypt in the Six-Day War, the Egyptian government had issued copies of it to its troops. There is no evidence this took place, and the story is likely to stem from the fact that in early 1974, shortly after the Yom Kippur War between Israel and Egypt, the image of Hope appeared on Jordanian postage stamps. Likewise, it is regularly claimed that Nelson Mandela kept a print of Hope in his cell on Robben Island, a claim for which there is no evidence. In 1990 Barack Obama, at the time a student at Harvard Law School, attended a sermon preached by Jeremiah Wright at the Trinity United Church of Christ. 
Taking the Books of Samuel as a starting point, Wright explained that he had studied Watts's Hope in the 1950s, and had rediscovered the painting when Dr Frederick G. Sampson delivered a lecture on it in the late 1980s (Sampson described it as "a study in contradictions"), before discussing the image's significance in the modern world. > The painting depicts a harpist, a woman who at first glance appears to be sitting atop a great mountain. Until you take a closer look and see that the woman is bruised and bloodied, dressed in tattered rags, the harp reduced to a single frayed string. Your eye is then drawn down to the scene below, down to the valley below, where everywhere are the ravages of famine, the drumbeat of war, a world groaning under strife and deprivation. It is this world, a world where cruise ships throw away more food in a day than most residents of Port-au-Prince see in a year, where white folks' greed runs a world in need, apartheid in one hemisphere, apathy in another hemisphere ... That's the world! On which hope sits! [...] And yet consider once again the painting before us. Hope! Like Hannah, that harpist is looking upwards, a few faint tones floating upwards towards the heavens. She dares to hope ... she has the audacity ... to make music ... and praise God ... on the one string ... she has left! Wright's sermon left a great impression on Obama, who recounted it in detail in his memoir Dreams from My Father. Soon after Dreams from My Father was published, he went into politics, entering the Illinois Senate. He was chosen to deliver the keynote address at the 2004 Democratic National Convention. In his 2006 memoir The Audacity of Hope, Obama recollects that on being chosen to deliver this speech, he pondered the topics on which he had previously campaigned and the major issues then affecting the nation, before thinking about the variety of people he had met while campaigning, all endeavouring in different ways to improve their own lives and to serve their country. > It wasn't just the struggles of these men and women that had moved me. Rather, it was their determination, their self-reliance, a relentless optimism in the face of hardship. It brought to mind a phrase that my pastor, Rev. Jeremiah A. Wright Jr., had once used in a sermon. The audacity of hope ... It was that audacity, I thought, that joined us as one people. It was that pervasive spirit of hope that tied my own family's story to the larger American story, and my own story to those of the voters I sought to represent. Obama's speech, on the theme of "The Audacity of Hope", was extremely well received. Obama was elected to the U.S. Senate later that year, and two years later published a second volume of memoirs, also titled The Audacity of Hope. Obama continued to campaign on the theme of "hope", and in his 2008 presidential campaign his staff requested that artist Shepard Fairey amend the wording of an independently produced poster he had created, combining an image of Obama and the word "progress", to instead read "hope". The resulting poster came to be viewed as the iconic image of Obama's ultimately successful election campaign. In light of Obama's well-known interest in Watts's painting, and amid concerns over Obama's perceived dislike of the British, in the last days of Gordon Brown's government the historian and Labour Party activist Tristram Hunt proposed that Hope be transferred to the White House. 
According to an unverified report in the Daily Mail, the offer was made but rejected by Obama, who wished to distance himself from Jeremiah Wright following controversial remarks Wright had made. Hope remains Watts's best-known work, and formed the theme of the opening ceremony of the 1998 Winter Paralympics in Nagano. In recognition of its continued significance, the fundraising appeal for a major redevelopment of the Watts Gallery, completed in 2011, was named the Hope Appeal.
6,775,666
Hurricane John (2006)
1,171,664,386
Category 4 Pacific hurricane
[ "2006 Pacific hurricane season", "2006 in Mexico", "2006 natural disasters in the United States", "Category 4 Pacific hurricanes", "Hurricanes and tropical depressions of the Gulf of California", "Hurricanes in Arizona", "Hurricanes in California", "Hurricanes in New Mexico", "Hurricanes in Texas", "Pacific hurricanes in Mexico", "Tropical cyclones in 2006" ]
Hurricane John was a Category 4 hurricane that caused heavy flooding and extensive damage across most of the Pacific coast of Mexico in late August through early September 2006. John was the eleventh named storm, seventh hurricane, and fifth major hurricane of the 2006 Pacific hurricane season. Hurricane John developed on August 28 from a tropical wave to the south of Mexico. Favorable conditions allowed the storm to intensify quickly, and it attained peak winds of 130 mph (210 km/h) on August 30. Eyewall replacement cycles and land interaction with western Mexico weakened the hurricane, and John made landfall on southeastern Baja California Sur with winds of 110 mph (180 km/h) on September 1. It slowly weakened as it moved northwestward through the Baja California peninsula, and dissipated on September 4. Moisture from the remnants of the storm entered the southwest United States. The hurricane threatened large portions of the western coastline of Mexico, resulting in the evacuation of tens of thousands of people. In coastal portions of western Mexico, strong winds downed trees, while heavy rain resulted in mudslides. Hurricane John caused moderate damage on the Baja California peninsula, including the destruction of more than 200 houses and thousands of flimsy shacks. The hurricane killed five people in Mexico, and damage totaled \$663 million (2006 MXN, \$60.8 million 2006 USD). In the southwest United States, moisture from the remnants of John produced heavy rainfall. The rainfall aided drought conditions in portions of northern Texas, although it was detrimental in locations that had received above-normal rainfall throughout the year. ## Meteorological history The tropical wave that would become John moved off the coast of Africa on August 17. It entered the eastern Pacific Ocean on August 24, and quickly showed signs of organization. That night, Dvorak classifications were initiated on the system while it was just west of Costa Rica, and it moved west-northwestward at 10–15 mph (16–24 km/h). Conditions appeared favorable for further development, and convection increased late on August 26 over the area of low pressure. Early on August 27, the system became much better organized about 250 miles (400 km) south-southwest of Guatemala, although convection remained minimal. Early on August 28, banding increased within its organizing convection, and the system developed into Tropical Depression Eleven-E. Due to low amounts of vertical shear, very warm waters, and abundant moisture, steady intensification was forecast, and the depression strengthened to Tropical Storm John later on August 28. Deep convection continued to develop over the storm, while an eye feature developed within the expanding central dense overcast. The storm continued to intensify, and John attained hurricane status on August 29 while 190 miles (310 km) south-southeast of Acapulco. Banding features continued to increase as the hurricane moved west-northwestward around the southwest periphery of a mid- to upper-level ridge over northern Mexico. The hurricane underwent rapid intensification, and John attained major hurricane status 12 hours after becoming a hurricane. Shortly thereafter, the eye became obscured, and the intensity remained at 115 mph (185 km/h) due to an eyewall replacement cycle. 
Another eye formed, and based on reconnaissance data, the hurricane attained Category 4 status on the Saffir-Simpson Hurricane Scale on August 30 about 160 miles (260 km) west of Acapulco, or 95 miles (153 km) south of Lázaro Cárdenas, Michoacán. Hours later, the hurricane underwent another eyewall replacement cycle, and subsequently weakened to Category 3 status as it paralleled the Mexican coastline a short distance offshore. Due to land interaction and its eyewall replacement cycle, Hurricane John weakened to a 105 mph (169 km/h) hurricane by late on August 31, but restrengthened to a major hurricane shortly after as its eye became better defined. After completing another eyewall replacement cycle, the hurricane again weakened to Category 2 status, and on September 1, it made landfall on Cabo del Este on the southern tip of Baja California Sur, with winds of 110 mph (180 km/h). John passed near La Paz as a weakening Category 1 hurricane on September 2, and weakened to a tropical storm shortly thereafter over land. John continued to weaken, and late on September 3, the system deteriorated to a tropical depression while still over land. By September 4, most of the convection had decoupled from the circulation and drifted towards mainland Mexico, and a clear circulation had not been discernible for 24 hours. Based on the system's disorganization, the National Hurricane Center issued its last advisory. ## Preparations The Mexican army and emergency services were stationed near the coast, while classes at public schools in and around Acapulco were canceled. Officials in Acapulco advised residents in low-lying areas to be on alert, and also urged fishermen to return to harbor. Authorities in the twin resort cities of Ixtapa and Zihuatanejo closed the port to small ocean craft. Government officials in the state of Jalisco ordered the mandatory evacuation of 8,000 citizens from low-lying areas to 900 temporary shelters. Temporary shelters were also set up near Acapulco. The state of Michoacán was on a yellow alert, the middle of a five-level alert system. Carnival Cruise Lines diverted the path of one cruise ship traveling along the Pacific waters off Mexico. On August 31, the Baja California Sur state government ordered the evacuation of more than 10,000 residents. Those who refused to follow the evacuation order were to be forcibly evacuated by the army. Shelters were set up to allow local residents and tourists to ride out the storm. Just weeks after a major flood in the area, officials evacuated hundreds of citizens from Las Presas, an area in northern Mexico near a dam. All public schools in the area were closed as well. On September 4, the United States National Weather Service issued flood watches and warnings for portions of Texas and the southern two-thirds of New Mexico. ## Impact ### Mexico The powerful winds of Hurricane John produced heavy surf and downed trees near Acapulco. The hurricane produced a 10-foot (3.0 m) storm surge in Acapulco that flooded coastal roads. In addition, John caused heavy rainfall along the western coast of Mexico, peaking at 12.5 inches (320 mm) in Los Planes, Jalisco. The rainfall resulted in mudslides in the Costa Chica region of Guerrero, leaving around 70 communities isolated. In La Paz, capital of Baja California Sur, the hurricane downed 40 power poles. Authorities cut off the power supply to the city to prevent electrocutions from downed wires. Strong winds downed trees and destroyed many advertising signs. 
Heavy rainfall totaling more than 20 inches (510 mm) in isolated areas resulted in ankle-deep flooding, closing many roads as well as the airport in La Paz. In La Paz, the homes of 300 families were damaged, and another 200 families were left homeless after their houses were destroyed. The combination of winds and rain destroyed thousands of flimsy houses across the region. The rainfall also destroyed large areas of crops and killed many livestock. The rainfall caused the Iguagil dam in Comondú to overflow, isolating 15 towns under 4 feet (1.2 m) of floodwater. In the coastal city of Mulegé, flash flooding caused widespread damage throughout the town and the death of a United States citizen. More than 250 homes were damaged or destroyed in the town, leaving many people homeless. Severe flooding blocked portions of Federal Highway 1, and damaged an aqueduct in the region. In all, Hurricane John destroyed hundreds of houses and blew off the roofs of 160 houses on the Baja California peninsula. Five people were killed, and damage in Mexico amounted to \$663 million (2006 MXN, \$60.8 million 2006 USD). In Ciudad Juárez, Chihuahua, across the U.S. border from El Paso, Texas, rainfall from the storm's remnants flooded 20 neighborhoods, downed power lines, and resulted in several traffic accidents. Rainfall from John, combined with continual precipitation during the two weeks before the storm, left thousands of people homeless. ### United States Moisture from the remnants of John combined with an approaching cold front to produce moderate amounts of rainfall across the southwest United States, including a total of 8 inches (200 mm) in Whitharral and more than 3 inches (76 mm) in El Paso, Texas. The rainfall flooded many roads in southwestern Texas, including a 1⁄2-mile (800 m) portion of Interstate 10 in El Paso. A slick runway at El Paso International Airport delayed a Continental Airlines jet when its tires became stuck in mud. Rainfall from John in El Paso, combined with an unusually wet year, resulted in twice the normal annual rainfall and made 2006 the ninth-wettest year on record by September. Damage from the precipitation totaled about \$100,000 (2006 USD) in the El Paso area. In northern Texas, the rainfall alleviated a severe drought, and caused the Double Mountain Fork Brazos River to swell and Lake Alan Henry to overflow. The Texas Department of Transportation closed numerous roads due to flooding from the precipitation, including a portion of U.S. Route 385 near Levelland. Several other roads were washed out. Moisture derived from John also produced rainfall across southern New Mexico, peaking at 5.25 inches (133 mm) at Ruidoso. The rainfall caused rivers to overflow, forcing people along the Rio Ruidoso to evacuate. The rainfall also caused isolated road flooding. Rainfall in New Mexico canceled an annual wine festival in Las Cruces and caused muddy conditions at the All American Futurity at the Ruidoso Downs, the biggest day of horse racing in New Mexico. Flooding was severe in Mesquite, Hatch, and Rincon, where many homes experienced 4 feet (1.2 m) of flooding and mud. Some homeowners lost all they owned. Tropical moisture from the storm also produced rainfall in Arizona and Southern California. In California, the rainfall produced eight separate mudslides, trapping 19 vehicles, but caused no injuries. ## Aftermath Branches of the Mexican Red Cross in Guerrero, Oaxaca and Michoacán were put on alert. 
The organization's national emergency response team was on stand-by to assist the most affected areas. Navy helicopters delivered food and water to remote areas of the Baja California peninsula. The Mexican Red Cross dispatched 2,000 food parcels to the southern tip of Baja California Sur. In the city of Mulegé, the gas supply needed to run generators was low, drinking water was gone, and the airstrip was covered with mud. Many homeless residents initially stayed with friends or in government-run shelters. Throughout the Baja California peninsula, thousands remained without water or electricity two days after the storm, although a pilot from Phoenix prepared to fly to the disaster area with 100 gallons (380 litres) of water. Other pilots were expected to make similar flights. The office of Baja California Sur Tourism stated that minimal damage occurred to the tourism infrastructure, with only minor delays to airports, roads, and maritime facilities. Episcopal Relief and Development delivered food, clothing, medicine, and transportation to about 100 families, and gave mattresses to about 80. Many residents of Tucson, including more than 50 students, delivered supplies such as clothing and other donations to flood victims in New Mexico. ## See also - List of Category 4 Pacific hurricanes - List of Arizona hurricanes - List of Baja California hurricanes - Timeline of the 2006 Pacific hurricane season - Other tropical cyclones of the same name
47,433,016
J. R. Kealoha
1,139,538,531
Native Hawaiian Union Army soldier (d. 1877)
[ "1877 deaths", "American military personnel of Native Hawaiian descent", "Burials at Oahu Cemetery", "Hawaiian Kingdom people", "Native Hawaiian people", "People of Pennsylvania in the American Civil War", "People of the Hawaiian Kingdom in the American Civil War", "Union Army soldiers", "Year of birth missing" ]
J. R. Kealoha (died March 5, 1877) was a Native Hawaiian and a citizen of the Kingdom of Hawaiʻi, who became a Union Army soldier during the American Civil War. Considered one of the "Hawaiʻi sons of the Civil War", he was among a group of more than one hundred documented Native Hawaiian and Hawaiʻi-born combatants who fought in the American Civil War while the Kingdom of Hawaiʻi was an independent nation. Kealoha enlisted in the 41st United States Colored Infantry, a United States Colored Troops regiment formed in Pennsylvania. Participating in the siege of Petersburg, he and another Hawaiian soldier met the Hawaiʻi-born Colonel Samuel Chapman Armstrong, who recorded their encounter in a letter home. With the 41st USCT, Kealoha was present at the surrender of Confederate General Robert E. Lee and the Army of Northern Virginia at Appomattox Court House on April 9, 1865. After the war, Kealoha returned to Hawaiʻi. He died on March 5, 1877, and was buried in an unmarked grave in Honolulu's Oʻahu Cemetery. The legacy and contributions of Kealoha and other Hawaiian participants in the American Civil War were largely forgotten except in the private circles of descendants and historians, but in later years there was a revival of interest in the Hawaiian community. In 2010, these "Hawaiʻi sons of the Civil War" were commemorated with a bronze plaque erected along the memorial pathway at the National Memorial Cemetery of the Pacific in Honolulu. In 2014, through another local effort, a grave marker was dedicated over J. R. Kealoha's burial site, which had remained unmarked for 137 years. ## Life After the outbreak of the American Civil War, the Kingdom of Hawaiʻi under King Kamehameha IV declared its neutrality on August 26, 1861. Despite the declaration of neutrality, many Native Hawaiians and Hawaiʻi-born Americans (mainly descendants of American missionaries) abroad and in the islands volunteered and enlisted in the military regiments of various states in the Union and the Confederacy. Participation by Native Hawaiians in American wars was not unheard of. Individual Native Hawaiians had served in the United States Navy and Army since the War of 1812, and even more served during the American Civil War. Many Hawaiians sympathized with the Union because of Hawaiʻi's ties to New England through its missionaries and the whaling industries, and the ideological opposition of many to the institution of slavery. Nothing is known about the life of J. R. Kealoha before the war. He enlisted in 1864 as a private and was assigned to the 41st Regiment United States Colored Troops (USCT), a colored regiment formed in Camp William Penn, Pennsylvania, between September 30 and December 7, 1864, under the command of Colonel Llewellyn F. Haskell. Most Native Hawaiians who participated in the war were assigned to the colored regiments because of their dark skin color and the segregationist policy in the military at the time. Kealoha is one of the few Hawaiian soldiers of the Civil War whose Hawaiian name is known; many combatants served under anglicized pseudonyms because they were easier for English-speaking Americans to pronounce than Hawaiian language names. They often were registered as kanakas, the nineteenth-century term for Hawaiians and Pacific Islanders, with the "Sandwich Islands" (Hawaiʻi) noted as their place of origin. From October 1864 to April 1865, Kealoha fought in the Richmond–Petersburg campaign, better known as the siege of Petersburg. 
During the campaign, Kealoha and another Hawaiian named Kaiwi, of the 28th Regiment United States Colored Troops, came across Samuel Chapman Armstrong, a son of an American missionary posted in Maui. Armstrong wrote of the encounter in a letter home that was later published in the Hawaiian missionary newspaper The Friend in 1865: > Yesterday, as my orderly was holding my horse, I asked him where he was from. He said he was from Hawaii! He proved to be a full-blood Kanaka, by the name of Kealoha, who came from the Islands last year. There is also another, by the name of Kaiwi, who lived near Judge Smith's, who left the Islands last July. I enjoyed seeing them very much and we had a good jabber in kanaka. Kealoha is a private in the 41st Regiment US Colored Troops, and Kaiwi is a Private in the 28th U.S.C.T., in the pioneer corps. Both are good men and seemed glad to have seen me. Kealoha survived months of trench warfare during the Richmond–Petersburg campaign and fought with the 41st USCT at the Battle of Appomattox Court House. He was present at the surrender of Confederate General Robert E. Lee and the Army of Northern Virginia at Appomattox Court House on April 9, 1865. The 41st USCT regiment was mustered out of service on November 10, 1865, at Brownsville, Texas, and was discharged December 14, 1865, at Philadelphia. Kealoha's enlistment is not present in any existing records or histories of the 41st USCT regiment. Historians Justin Vance and Anita Manning speculate that "it is possible that his service is noted under a different name" or that his name was never recorded because only the muster-out rolls from the regiment were returned to the Adjutant General's office after the unit disbanded. After the war, Kealoha returned to Hawaiʻi. He died on March 5, 1877, and was buried with eighteen other Native Hawaiians in an unmarked grave in Section 1, Lot 56 of the Oʻahu Cemetery, Honolulu. During the Hawaii Territorial period, Kealoha's Civil War service was recorded by the United Veterans Service Council (UVSC), a precursor of the United States Department of Veterans Affairs (VA), which included his name in its records as a "Deceased Veteran" and listed the location of his burial. ## Memorials For 137 years, Kealoha's burial site remained unmarked until a Hawaiian group affiliated with the organization Hawaiʻi Civil War Round Table, consisting of Anita Manning, Nanette Napoleon, Eric Mueller, and Justin Vance, started an effort to give him a grave marker. Historian Anita Manning, a member of this group, had discovered the record containing Kealoha's name at the Hawaii State Archives in 2011. It listed his service in the war and the location of his burial place, but when Manning went to the site of his grave, she was disappointed by the absence of a headstone. Explaining the significance of Kealoha's service, Manning stated: > Kealoha represents many Hawaiian men and men from Hawaii who served in the Civil War who knew what they were getting into, who took a risk, and we all are the beneficiaries of that work and risk that they took ... We owe it to them to recognize that service. The group petitioned the United States Department of Veterans Affairs for a marker for Kealoha, but the Department denied the request because there was no next of kin to approve it. A 2009 policy change enacted in 2012 required that only next of kin could request VA memorials. 
After the denial of the request, Honor Life Memorials, a local monument maker, donated a granite marker for Kealoha. The marker was formally dedicated and unveiled on October 25, 2014. Dressed in period costumes, members of the Hawaiʻi Civil War Round Table and others took part in the dedication ceremony at Oʻahu Cemetery. The ceremony was marked by military honors and a gun salute by a unit of Civil War re-enactors. Hawaiian minister Kahu Silva presided over the dedication ceremony, and in accordance with traditional Hawaiian customs, the consecrated marker was adorned with a sacred maile lei and a koa branch, representing "Kealoha's noble qualities of bravery, courage, and valor." The marker is inscribed with his name, regiment, death date, and the Hawaiian and English text: "He Koa Hanohano, a brave and honorable soldier". Other Hawaiian veterans of the Civil War are honored in Honolulu's National Memorial Cemetery of the Pacific with a bronze memorial plaque that was erected in 2010 in recognition of the "Hawaiʻi sons of the Civil War", the more than one hundred documented Hawaiians who served with the Union and the Confederacy. As of 2014, researchers have identified 119 documented Hawaiian and Hawaiʻi-born combatants from historical records. The exact number remains uncertain because of the lack of records. Of the 48 identified Native Hawaiian combatants, including James Wood Bush and Henry Hoʻolulu Pitman, Kealoha is the only one buried in Hawaii whose gravesite is known. According to Hawaiian news reporter Chelsea Davis, Kealoha has come to "[represent] all the men of Hawaii who took up arms in America's Civil War but who have been forgotten." ## See also - Hawaii and the American Civil War
19,541,336
Knut (polar bear)
1,154,318,178
Polar bear born in captivity at the Berlin Zoological Garden
[ "2006 animal births", "2011 animal deaths", "Articles containing video clips", "Berlin Zoological Garden", "Celebrity animals", "Deaths by drowning", "Filmed deaths of animals", "Individual animals in Germany", "Individual polar bears", "Male mammals" ]
Knut (; 5 December 2006 – 19 March 2011) was an orphaned polar bear born in captivity at the Berlin Zoological Garden. Rejected by his mother at birth, he was raised by zookeepers. He was the first polar bear cub to survive past infancy at the Berlin Zoo in more than 30 years. At one time the subject of international controversy, he became a tourist attraction and commercial success. After the German tabloid newspaper Bild ran a quote from an animal rights activist that decried keeping the cub in captivity, fans worldwide rallied in support of his being hand-raised by humans. Children protested outside the zoo, and e-mails and letters expressing sympathy for the cub's life were sent from around the world. Knut became the center of a mass media phenomenon dubbed "Knutmania" that spanned the globe and spawned toys, media specials, DVDs, and books. Because of this, the cub was largely responsible for a significant increase in revenue, estimated at €5 million, at the Berlin Zoo in 2007. Attendance figures for the year increased by an estimated 30 percent, making it the most profitable year in its 163-year history. On 19 March 2011, Knut unexpectedly died at the age of four. His death was caused by drowning after he collapsed into his enclosure's pool while suffering from anti-NMDA receptor encephalitis. ## Infancy Knut was born at the Berlin Zoo to 20-year-old Tosca, a former circus performer from East Germany who was born in Canada, and her 13-year-old mate Lars, who was originally from the Tierpark Hellabrunn in Munich. After an uncomplicated gestation, Knut and his unnamed brother were born on 5 December 2006. Tosca rejected her cubs for unknown reasons, abandoning them on a rock in the polar bear enclosure. Zookeepers rescued the cubs by scooping them out of the enclosure with an extended fishing net, but Knut's brother died of an infection four days later. Knut was the first polar bear to have been born and survive in the Berlin Zoo in over 30 years. Only the size of a guinea pig, the cub spent the first 44 days of his life in an incubator before zookeeper Thomas Dörflein began raising him. Knut's need for round-the-clock care required that Dörflein not only sleep on a mattress next to Knut's sleeping crate at night, but also play with, bathe, and feed the cub daily. Knut's diet began with a bottle of baby formula mixed with cod liver oil every two hours, before graduating at the age of four months to a milk porridge mixed with cat food and vitamins. Dörflein also accompanied Knut on his twice-daily one-hour shows for the public and therefore appeared in many videos and photographs alongside the cub. As a result, Dörflein became a minor celebrity in Germany and was awarded Berlin's Medal of Merit in honour of his continuous care for the cub. Dörflein died of a heart attack on 22 September 2008. He was 44 years old. ## Controversy and media coverage In early March 2007, German tabloid Bild-Zeitung carried a quote by animal rights activist Frank Albrecht who said that Knut should have been killed rather than be raised by humans. He declared that the zoo was violating animal protection legislation by keeping him alive. Wolfram Graf-Rudolf, the director of the Aachen Zoo, agreed with Albrecht and stated that the zookeepers "should have had the courage to let the bear die" after it was rejected, arguing that the bear will "die a little" every time it is separated from its caretaker. 
A group of children protested at the zoo, holding up placards reading "Knut Must Live" and "We Love Knut", and others sent numerous emails and letters asking for the cub's life to be spared. Threatening letters were also sent to Albrecht. The Berlin Zoo rallied in support of the baby polar bear, vowing not to harm him and rejecting the suggestion that it would be kinder to euthanise him. Albrecht stated his original aim was to draw attention to the law, not to have Knut put down. In December 2006 he had taken legal action against Leipzig Zoo to prevent it from killing a sloth bear cub rejected by its mother. His case was dismissed on the grounds that humans raising the animal would have been against the law of nature. In response to the criticism against him, Albrecht said that he was merely drawing parallels between the two cubs. The publicity from this coverage raised Knut's profile from national to international. ## Debut and first year On 23 March 2007, Knut was presented to the public for the first time. Around 400 journalists visited the Berlin Zoo on what was dubbed "Knut Day" to report on the cub's first public appearance to a worldwide audience. Because Knut became the focus of worldwide media at a very young age, many stories and false alarms regarding the cub's health and well-being were circulated during his first year. For example, on 16 April 2007, Knut was removed from display due to teething pains resulting from the growth of his right upper canine tooth, but initial reports stated that he was suffering from an unknown illness and had subsequently been put on antibiotics. There was also considerable alarm over a death threat sent shortly before 15:00 local time on Wednesday 18 April 2007. The zoo had received an anonymous letter by fax which said "Knut ist tot! Donnerstag Mittag." ("Knut is dead! Thursday noon.") In response, the police increased their security measures around the bear. The time frame for the threat passed without incident. Despite Der Spiegel reporting on 30 April 2007 that Knut was "steadily getting less cute" as he grew older, Knut continued to draw record crowds to the zoo that summer. After Knut reached seven months of age and 50 kg (110 lb) in July 2007, his scheduled twice-daily public appearances were canceled due to the zoo's concern for the safety of his keeper. Zoo spokeswoman Regine Damm also said it was time for the bear to "associate with other bears and not with other people." After living in the same enclosure as Ernst, a Malaysian black bear cub born a month before Knut, and Ernst's mother, Knut was moved to his own private living space. While visitor numbers dwindled from extreme highs in March and April, Knut remained a major attraction at the zoo for the rest of 2007. Some 400,000 guests were recorded in August 2007, an all-time high. News of Knut and his life at the zoo was still being reported internationally in late 2007. Knut's restricted diet, intended to curtail the natural weight gain that wild polar bears need to survive harsh winters, made headlines outside Germany. His daily meals were reduced from four to three, and treats such as croissants, favored by the young polar bear, were restricted. When he hurt his foot slipping on a wet rock in his enclosure a month later, in September, there was an outpouring of concern and support from fans worldwide. 
By November 2007, weighing over 90 kg (198 lb), Knut was deemed too dangerous for close handling, and his interaction with human handlers was further reduced. The celebration of the cub's first birthday, which was attended by hundreds of children, was broadcast live on German television. The Berlin mint also produced 250,000 special commemorative silver coins to mark his birthday. Knut's role at the Berlin Zoo was intended to include becoming an "attractive stud" for other zoos in order to help preserve his species. When Flocke was born at the Nuremberg Zoo in December 2007 under similar circumstances, Bild dubbed her Mrs. Knut, suggesting that the two German-born polar bears might become mates when they matured. ## 2008–2010 A year after his public debut, Knut was reported as weighing more than 130 kg (286 lb). A plate of six-inch (15 cm) glass, strong enough to resist a mortar blast, was erected between him and zoo visitors. At the end of March 2008, Markus Röbke, one of the keepers who helped rear Knut, said that the bear should leave the zoo as soon as possible in order to help him acclimate to life alone. Röbke also said that Knut plainly missed his former father figure, Thomas Dörflein, and had become so used to attention that he cried when no one was near his enclosure. "Knut needs an audience," Röbke stated. "That has to change." In April, animal welfare campaigners criticized the zoo for allowing Knut to kill and eat ten carp from the moat surrounding his enclosure, saying that it was a breach of German animal protection regulations. The zoo's bear expert, Heiner Klös, however, said that Knut's behavior was "all part of being a polar bear." In July 2008, it was announced that the Neumünster Zoo in northern Germany, which owned Knut's father, was suing the Berlin Zoo for the profits from Knut's success. Although the Berlin Zoo conceded Neumünster's ownership of Knut due to a previous agreement, it contended that the other zoo had no right to its proceeds. Neumünster had previously tried to negotiate with the Berlin Zoo, but later sought a court ruling in its favor. Peter Drüwa, the zoo director at Neumünster, stated that they "do not want to remove Knut from his environment, but we have a right to our request for money." Shortly before Knut's second birthday, reports began circulating that the bear would have to be relocated to another zoo because he was becoming too large for his enclosure. The zoo later released statements saying that it wished to keep Knut, and the mayor of Berlin, Klaus Wowereit, also declared he wanted the still-adolescent cub to stay in the capital. Disputes between the two zoos continued into 2009. On 19 May, the Berlin Zoo offered to buy Knut from Neumünster and thereby negate Neumünster's financial claim on the two-year-old polar bear. Although Neumünster Zoo set a price of €700,000, the Berlin Zoo stated that it would not pay "a cent more" than €350,000 (\$488,145). On 8 July, the Berlin Zoo agreed to pay €430,000 (\$599,721) to keep Knut in Berlin. Giovanna, a female polar bear roughly the same age as Knut, was relocated to Berlin from Munich's Tierpark Hellabrunn in September 2009. She was presented to the public on 23 September, and was due to briefly share Knut's enclosure while her regular home in Munich underwent repairs. Her arrival sparked international interest, as many sources mused that the two bears (although sexually immature) would soon be "dating". 
However, in March 2010, the German chapter of People for the Ethical Treatment of Animals called for Knut to be castrated in order to avoid inbreeding; he and Giovanna shared a grandfather, and, according to PETA spokesman Frank Albrecht (the same animal rights activist who had spoken out about Knut's hand-raising three years earlier), their offspring would threaten the genetic diversity of the German polar bear population. The Berlin Zoo declined to comment on the matter, only noting that Giovanna's stay in Berlin was still temporary. In August 2010, Giovanna was moved back to Munich after repairs on her enclosure were completed. Until his death, Knut shared an enclosure with three female polar bears: Nancy, Katjuscha and his mother Tosca. The older bears were reportedly aggressive towards the young male bear, causing news reports in late 2010 to question whether Knut was being bullied. One of the zookeepers disagreed, stating publicly that "For the time being, Knut is not yet an adult male and doesn't yet know how to get respect like his father did. But day by day, he is imposing himself and with time, this type of problem will go away." ## Death On 19 March 2011, at the age of four, Knut collapsed and died in his enclosure. Witnesses reported that after the bear's rear left leg began shaking, he became agitated before convulsing several times and falling backwards into the pool. Approximately 600 to 700 zoo visitors witnessed Knut's death. A statement released on 22 March in relation to the necropsy reported that there were "significant changes in the brain, which may be regarded as a reason for the sudden death". Animal welfare organizations in Germany initially accused the Berlin Zoo of negligence, claiming that Knut had died of stress caused by being forced to share his enclosure with three female polar bears. The zoo denied such claims. Bear curator Heiner Klös stated that they "did everything to look after Knut—it's normal for polar bears to live with other polar bears in a zoo, and the idea was that Knut should learn social behavior and other skills from the older females ... He played with the other bears, he was relaxed and strong." On 1 April, pathology experts announced that Knut's immediate cause of death was drowning. The bear's apparent seizure was due to his suffering from encephalitis, a swelling of the brain likely triggered by an infection. It was unknown what infection caused the swelling, but pathologists believed it was a virus. Although Knut showed no symptoms of being ill, pathologists believed that "this suspected infection must already have been there for a long time ... at least several weeks, possibly months." Knut's sudden death caused an international outpouring of grief. Hundreds of fans visited the zoo afterwards, leaving flowers and mementos near the enclosure. The mayor of Berlin, Klaus Wowereit, stated "We all held him so dearly. He was the star of the Berlin zoos." In January 2014, Knut's full autopsy results were published by the Leibniz Institute for Zoo and Wildlife Research in the Journal of Comparative Pathology. It was the most in-depth post-mortem ever carried out on an animal. The autopsy revealed that the damage to the bear's brain was so severe that even if he had not fallen into the water and drowned, he would have died anyway. Experts hypothesized that he had been suffering from a virus that caused the encephalitis. In August 2015, however, it was discovered that Knut had died of anti-NMDA receptor encephalitis, an autoimmune disease rather than an infection. 
## Memorialization The zoo made plans to erect a monument in Knut's honour, financed by donations from fans. Thomas Ziolko, the chairman of the Friends of the Berlin Zoo, was quoted as saying "Knut will live on in the hearts of many visitors, but it's important to create a memorial for coming generations to preserve the memory of this unique animal personality." On 24 October 2012, the Berlin Zoo unveiled a bronze sculpture by Ukrainian artist Josef Tabachnyk. "Knut – The Dreamer" shows the bear "stretching out dreamily on a rock". Knut's remains were exhibited in Berlin's Museum of Natural History, although this decision caused some controversy among fans. A full-sized sculpture covered in Knut's pelt was presented to the public on 16 February 2013. It went on display in the entrance hall of the museum, where it could be viewed free of charge until 5 May, and was intended for later use in an exhibition on climate change and environmental protection. Museum spokeswoman Gesine Steiner stated that "It's important to make clear we haven't had Knut stuffed. It is an artistically valuable sculpture with the original fur." From 13 June until 1 September 2013, Knut was on display in the Naturalis Biodiversity Center, the Dutch national museum of natural history in Leiden, Netherlands. Knut returned to Berlin's Museum of Natural History on 28 July 2014 as an exhibit for a special exhibition on "Highlights of Taxidermy". The museum has won world championship prizes for taxidermy, and Knut's remains were expected to remain a highlight of its exhibitions for years to come. ## Effects of popularity ### Commercial success The Berlin Zoo registered "Knut" as a trademark in late March 2007. As a result, its shares more than doubled in value on the Berlin Stock Exchange; previously worth around €2,000, they closed at €4,820 just a week later. The zoo reported that its attendance figures for 2007 increased by an estimated 30 percent, making 2007 the most profitable year in its 163-year history. Knut earned the Berlin Zoo nearly €5 million that year, mainly thanks to the increase in visitors and to merchandise sales. Various companies profited from the attention surrounding Knut by developing themed products such as ringtones and cuddly toys. Plush toy company Steiff produced several Knut-based plush toys in three sizes and models: sitting, standing, and lying down. The first 2,400 toys produced, which were sold exclusively at the Berlin Zoo, sold out in only four days. The money raised from the Steiff deal was intended to be used to renovate the polar bear enclosure at the zoo. Candy company Haribo released a raspberry-flavored gummy bear sweet called Cuddly Knut beginning in April 2007. The company pledged to donate ten cents to the zoo for every tub of Knut sweets sold. The gummy bears sold so well that the Bonn-based company had to expand production to a second factory to deal with demand. Knut was the subject of several popular songs in Germany, the most successful of which were the singles "Knut is Cute" and "Knut, der kleine Eisbär" (English: "Knut, the little polar bear") by nine-year-old Kitty from Köpenick. In Britain, musical comedian Mitch Benn has performed four songs about Knut for the BBC Radio 4 satirical series The Now Show: "The Baby Bear Must DIE!", "Knut Isn't Cute Anymore", "Goodbye Knut" and "Panda in Berlin". A blog with updates about the polar bear was maintained by a journalist at the regional public broadcaster Rundfunk Berlin-Brandenburg; it was available in German, English, and Spanish. 
RBB was also responsible for a weekly television program dedicated to the polar bear cub that was broadcast in Germany. Knut has also been the subject of several DVDs, including one entitled "Knut – Stories from a Polar Bear's Nursery". On 29 March 2007 he appeared on the cover of the German Vanity Fair magazine, which included a several page spread about the cub's life. On 1 May 2007, it was announced that New York-based Turtle Pond Publications and the Berlin Zoo had signed a deal for the worldwide publishing rights to Knut with the hopes of raising awareness of global warming issues. Written by Craig Hatkoff and his daughters Juliana and Isabella, the 44-page book entitled Knut, der kleine Eisbärenjunge (Little Polar Bear Knut) includes Knut's life story as well as previously unpublished photographs. Although several books about Knut had already been published in Germany, this book was the first to be authorized by the Berlin Zoo. The book was published in Germany by Ravensburger on 26 July 2007 and US publishing company Scholastic released the English version, entitled Knut: How one little polar bear captivated the world, in the United States in November of the same year. Rights to the book have also been sold to publishers in Japan, England, Mexico, China, and Italy. On 31 December 2007, the zoo's director confirmed the zoo had received a proposal for a film deal from Hollywood film producer Ash R. Shah, whose films include Supernova and Shark Bait, to make an animated film about the bear's life. Shah reportedly approached the Berlin Zoo with a purported €3.5 million film deal. Knut made his big screen debut in the German film Knut und seine Freunde (Knut and His Friends), which premiered in Berlin on 2 March 2008. Directed by Michael Johnson, the film depicts how Knut was rescued after his mother abandoned him and also features a polar bear family from the Arctic and two brown bear cubs from Belarus. ### Environmental causes Dr. Gerald Uhlich, of the Berlin zoo's board of trustees, stated that because of his vast popularity, Knut had become a means of communication and that he had the ability to "draw attention to the environment in a nice way. Not in a threatening, scolding way." As a result, the German Environment Minister Sigmar Gabriel officially adopted Knut as the mascot for a conference on endangered species to be held in Bonn in 2008. The minister met with Knut soon after his zoo debut, commenting that although Knut was in safe hands, "worldwide polar bears are in danger and if Knut can help the cause, then that is a good thing." Photographer Annie Leibovitz took pictures of Knut that were used for an environmental campaign, including Vanity Fair magazine's May 2007 Green Issue in which he was superimposed into a photograph with American actor Leonardo DiCaprio. The polar bear has also been depicted on the logo for the German Environment Minister's campaign to help stop global warming and a 2008 special issue stamp. Officially released on 9 April, the stamp shows the roughly one-year-old Knut with the slogan "Natur weltweit bewahren" ("Preserve nature worldwide"). ## See also - Binky (polar bear) - List of individual bears
5,712,514
Banksia sessilis
1,171,990,018
Species of plant of Western Australia
[ "Banksia ser. Dryandra", "Endemic flora of Southwest Australia", "Eudicots of Western Australia", "Ornamental trees", "Plants described in 1809", "Trees of Australia", "Trees of Mediterranean climate" ]
Banksia sessilis, commonly known as parrot bush, is a species of shrub or tree in the plant genus Banksia of the family Proteaceae. It had been known as Dryandra sessilis until 2007, when the genus Dryandra was sunk into Banksia. The Noongar peoples know the plant as budjan or butyak. Widespread throughout southwest Western Australia, it is found on sandy soils over laterite or limestone, often as an understorey plant in open forest, woodland or shrubland. Encountered as a shrub or small tree up to 6 m (20 ft) in height, it has prickly dark green leaves and dome-shaped cream-yellow flowerheads. Flowering from winter through to late spring, it provides a key source of food—both the nectar and the insects it attracts—for honeyeaters in the cooler months, and species diversity is reduced in areas where there is little or no parrot bush occurring. Several species of honeyeater, some species of native bee, and the European honey bee seek out and consume the nectar, while the long-billed black cockatoo and Australian ringneck eat the seed. The life cycle of Banksia sessilis is adapted to regular bushfires. Killed by fire and regenerating by seed afterwards, each shrub generally produces many flowerheads and a massive amount of seed. It can recolonise disturbed areas, and may grow in thickets. Banksia sessilis has a somewhat complicated taxonomic history. It was collected from King George Sound in 1801 and described by Robert Brown in 1810 as Dryandra floribunda, a name by which it was known for many years. However, Joseph Knight had published the name Josephia sessilis in 1809, which had priority due to its earlier date, and the specific name was formalised in 1924. Four varieties are recognised. It is a prickly plant with little apparent horticultural potential; none of the varieties are commonly seen in cultivation. A profuse producer of nectar, B. sessilis is valuable to the beekeeping industry. ## Description Banksia sessilis grows as an upright shrub or small tree up to 6 m (20 ft) high, without a lignotuber. In most varieties, new stems are covered in soft, fine hairs that are lost with maturity; but new stems of B. sessilis var. flabellifolia are usually hairless. Leaves are blue-green or dark green. Their shape differs by variety: in var. cygnorum and var. flabellifolia they are wedge-shaped, with teeth only near the apex; in var. cordata they are wedge-shaped, but with teeth along the entire margin; and in var. sessilis they are somewhat broader at the base, sometimes almost oblong in shape. Leaf size ranges from 2 to 6 cm (1 to 2.5 in) in length, and 0.8–4 cm (0.31–1.57 in) in width. They may be sessile (that is, growing directly from the stem without a petiole) or on a petiole up to 0.5 cm (0.20 in) long. The inflorescences are cream or yellow, and occur in domed heads 4 to 5 cm (1 1⁄2 to 2 in) wide, situated at the end of a stem. Each head contains from 55 to 125 individual flowers, surrounded at the base by a whorl of short involucral bracts. As with most other Proteaceae, individual flowers consist of a tubular perianth made up of four united tepals, and one long wiry style. The style end is initially trapped inside the upper perianth parts, but breaks free at anthesis. In B. sessilis the perianth is straight, 20 to 32 mm (0.79 to 1.26 in) long, and pale yellow. The style is slightly shorter, also straight, and cream-coloured. Thus in B. sessilis, unlike many other Banksia species, the release of the style at anthesis does not result in a showy flower colour change. 
One field study found that anthesis took place over four days, with the outer flowers opening first and moving inwards. Flowering mostly takes place from July to November; var. sessilis can start as early as May. After flowering, the flower parts wither and fall away, and up to four follicles develop in the receptacle (the base of the flower head). Young follicles are covered in a fine fur, but this is lost as they mature. Mature follicles are ovoid in shape, and measure 1–1.5 cm (0.39–0.59 in) in length. Most follicles open as soon as they are ripe, revealing their contents: a woody seed separator and up to two winged seeds. ## Discovery and naming Specimens of B. sessilis were first collected by Scottish surgeon Archibald Menzies during the visit of the Vancouver Expedition to King George Sound in September and October 1791. No firm location or collection date can be ascribed to Menzies' specimens, as their labels simply read "New Holland, King Georges Sound, Mr. Arch. Menzies", and Menzies' journal indicates that he collected over a wide area, visiting a different location every day from 29 September to 8 October. In addition to B. sessilis, Menzies collected plant material of B. pellaeifolia, and seeds of at least four more Banksia species. This was therefore an important early collection for the genus, only seven species of which had previously been collected. Menzies' seed specimens were sent to England from Sydney in 1793, but his plant material remained with him for the duration of the voyage, during which some material was lost. On his return to England in 1795, the surviving specimens were deposited into the herbarium of Sir Joseph Banks, where they lay undescribed for many years. The next collection was made in December 1801, when King George Sound was visited by HMS Investigator under the command of Matthew Flinders. On board were botanist Robert Brown, botanical artist Ferdinand Bauer, and gardener Peter Good. All three men gathered material for Brown's specimen collection, including specimens of B. sessilis, but neither Brown's nor Good's diary can be used to assign a precise location or date for their discovery of the species. Good also made a separate seed collection, which included B. sessilis, and the species was drawn by Bauer. Like nearly all of his field drawings of Proteaceae, Bauer's original field sketch of B. sessilis was destroyed in a Hofburg fire in 1945. A painting based on the drawing survives, however, at the Natural History Museum in London. On returning to England in 1805, Brown began preparing an account of his Australian plant specimens. In September 1808, with Brown's account still far from finished, Swedish botanist Jonas Dryander asked him to prepare a separate paper on the Proteaceae so he could use the genera erected by Brown in a new edition of Hortus Kewensis. Brown immediately began a study of the Proteaceae, and in January 1809 he read to the Linnean Society of London a monograph on the family entitled On the Proteaceae of Jussieu. Among the eighteen new genera presented was one that Brown named Josephia in honour of Banks. Brown's paper was approved for printing in May 1809, but did not appear in print until March the following year. In the meantime, Joseph Knight published On the cultivation of the plants belonging to the natural order of Proteeae, which appeared to draw heavily on Brown's unpublished material, without permission, and in most cases without attribution. 
It contained the first publication of Brown's Josephia, for which two species were listed. The first, Josephia sessilis, was based on one of Menzies' specimens: "This species, discovered by Mr. A. Menzies on the West coast of New Holland, is not unlike some varieties of Ilex aquifolium, and now in his Majesty's collection at Kew." The etymology of the specific epithet was not explicitly stated, but it is universally accepted that it comes from the Latin sessilis (sessile, stalkless), in reference to the sessile leaves of this species. Blame for the alleged plagiarism largely fell on Richard Salisbury, who had been present at Brown's readings and is thought to have provided much of the material for Knight's book. Salisbury was ostracized by the botanical community, which undertook to ignore his work as much as possible. By the time Brown's monograph appeared in print, Brown had exchanged the generic name Josephia for Dryandra, giving the name Dryandra floribunda to Knight's Josephia sessilis. As there were then no firm rules pertaining to priority of publication, Brown's name was accepted, and remained the current name for over a century. Another significant early collection was the apparent discovery of the species at the Swan River in 1827. In that year, the colonial botanist of New South Wales Charles Fraser visited the area as part of an exploring expedition under James Stirling. Among the plants Fraser found growing on the south side of the river entrance was "a beautiful species of Dryandra", which was probably this species. Over the course of the 19th century, the principle of priority in naming gradually came to be accepted by botanists, as did the need for a mechanism by which names in current usage could be conserved against archaic or obscure prior names. By the 1920s, Dryandra R.Br. was effectively conserved against Josephia Knight; a mechanism for formal conservation was put in place in 1933. Brown's specific name, however, was not conserved, and Karel Domin overturned Dryandra floribunda R.Br. by transferring Knight's name into Dryandra as Dryandra sessilis (Knight) Domin in 1924. This name was current until 2007, when all Dryandra species were transferred into Banksia by Austin Mast and Kevin Thiele. The full citation for the current name is thus Banksia sessilis (Knight) A.R.Mast & K.R.Thiele. ### Common names The first common names for this species were literal translations of the scientific names. When published as Josephia sessilis in 1809, it was given the common name sessile Josephia. Brown did not offer a common name when he published Dryandra floribunda in 1810, but later that year the Hortus Kewensis translated it as many flowered dryandra. This name was also used when the plant was featured in Curtis's Botanical Magazine in 1813. In Australia, the names prickly banksia and shaving-brush flower were offered up by Emily Pelloe in 1921, the latter because "when in bud the flower very much resembles a shaving-brush". Shaving-brush flower was still in use as late as the 1950s. The name holly-leaved dryandra was used when the plant was featured as part of a series of articles in the Western Mail of 1933–34, and this was taken up by William Blackall in 1954, and was still in use as late as 1970. Meanwhile, Gardner used the name parrot bush in 1959, a name derived from the observation that the blooms attract parrots, by which the species was already "well-known to bee-keepers". This name was widely adopted, and since 1970 has been in almost exclusive usage. 
The only indigenous names reported for the plant are Budjan and But-yak. These were published by Ian Abbott in his 1983 Aboriginal Names for Plant Species in South-western Australia, with Abbott suggesting that the latter name should be preferred, but with the orthography "Pudjak". However, Abbott sources these names to George Fletcher Moore's 1842 A Descriptive Vocabulary of the Language of the Aborigines, which in fact attributes these names to the species Dryandra fraseri (now Banksia fraseri). It is unclear whether Abbott has corrected Moore's error, or introduced an error of his own. ## Taxonomy ### Infrageneric placement Brown's 1810 monograph did not include an infrageneric classification of Dryandra, and neither did his Prodromus, published later that year. In 1830, however, he introduced the first taxonomic arrangement of Dryandra, placing D. floribunda in section Dryandra verae along with most other species, because its follicles contain a single seed separator. Dryandra verae was renamed Eudryandra by Carl Meissner in 1845. Eleven years later Meissner published a new arrangement, retaining D. floribunda in D. sect. Eudryandra, and further placing it in the unranked subgroup § Ilicinae, because of the similarity of its leaves to those of Ilex (holly). In 1870, George Bentham published a revised arrangement in his Flora Australiensis. Bentham retained section Eudryandra, but abandoned almost all of Meissner's unranked groups, including § Ilicinae. D. floribunda was instead placed in D. ser. Floribundae along with four other species with small, mostly terminal flowers, left exposed by their having unusually short floral leaves. Bentham's arrangement stood for over a hundred years, eventually replaced in 1996 by the arrangement of Alex George. Section Eudryandra was promoted to subgenus rank, but replaced by the autonym D. subg. Dryandra. D. sessilis, as this species was now called, was retained in D. ser. Floribundae, but alone, as the series was redefined as containing only those taxa that apparently lack floral bracts altogether. The placement of D. sessilis in George's arrangement, with 1999 and 2005 amendments, may be summarised as follows: Dryandra (now Banksia ser. Dryandra) : D. subg. Dryandra : : D. ser. Floribundae : : : D. sessilis (now B. sessilis) : : : : D. sessilis var. sessilis (now B. sessilis var. sessilis) : : : : D. sessilis var. flabellifolia (now B. sessilis var. flabellifolia) : : : : D. sessilis var. cordata (now B. sessilis var. cordata) : : : : D. sessilis var. cygnorum (now B. sessilis var. cygnorum) : : D. ser. Armatae : : D. ser. Marginatae : : D. ser. Folliculosae : : D. ser. Acrodontae : : D. ser. Capitellatae : : D. ser. Ilicinae : : D. ser. Dryandra : : D. ser. Foliosae : : D. ser. Decurrentes : : D. ser. Tenuifoliae : : D. ser. Runcinatae : : D. ser. Triangulares : : D. ser. Aphragma : : D. ser. Ionthocarpae : : D. ser. Inusitatae : : D. ser. Subulatae : : D. ser. Gymnocephalae : : D. ser. Plumosae : : D. ser. Concinnae : : D. ser. Obvallatae : : D. ser. Pectinatae : : D. ser. Acuminatae : : D. ser. Niveae : D. subg. Hemiclidia : D. subg. Diplophragma George's arrangement remained current until 2007, when Austin Mast and Kevin Thiele transferred Dryandra into Banksia. They also published B. subg. Spathulatae for the Banksia taxa having spoon-shaped cotyledons, thus redefining B. subg. Banksia as comprising those that do not. 
They were not ready, however, to tender an infrageneric arrangement encompassing Dryandra, so as an interim measure they transferred Dryandra into Banksia at series rank. This minimised the nomenclatural disruption of the transfer, but also caused George's rich infrageneric arrangement to be set aside. Thus under the interim arrangements implemented by Mast and Thiele, B. sessilis is placed in B. subg. Banksia, ser. Dryandra. ### Varieties Four varieties are recognised: - B. sessilis var. sessilis is an autonym that encompasses the type material of the species. This is the most widespread variety, occurring from Regans Ford and Moora in the north, south-east to Albany, and inland as far as Wongan Hills, Pingelly and Kulin. Its blue-green leaves are cuneate (wedge-shaped) or oblong, and are usually two to three centimetres long but may reach five. - B. sessilis var. cordata was published as Dryandra floribunda var. cordata by Carl Meissner in 1848. In 1870, George Bentham published D. floribunda var. major, but this is now considered a taxonomic synonym of B. sessilis var. cordata. It has larger inflorescences than var. sessilis, as well as larger dark green, rather than blue-green, leaves. It is found in the state's far southwest, between Capes Leeuwin and Naturaliste, and east to Walpole, and grows on sandy soils over limestone. - B. sessilis var. cygnorum has its roots in Michel Gandoger's publication of two new species names in 1919. He published Dryandra cygnorum and Dryandra quinquedentata, but in 1996 both of these were found to refer to the same taxon, which Alex George gave variety rank as Dryandra sessilis var. cygnorum. The term cygnorum is Latin for "of the swans" and relates to the Swan River, which runs past the suburb of Melville where the type material was collected. It has smaller dark green leaves only 2–3 cm (0.8–1.2 in) long and 0.8–1.7 cm (0.31–0.67 in) wide, whose teeth are limited to the distal part of the leaf. The range is along the Western Australian coastline from Dongara southwards past Fremantle, and east to Lake Indoon and Kings Park. - B. sessilis var. flabellifolia was published by George in 1996, the type specimen having been collected northwest of Northampton in 1993. The northernmost of the four varieties, it is found from Kalbarri south to Geraldton and Northampton. There are some scattered records further south towards Moora. Its varietal name is derived from the Latin flabellum "fan" and folium "leaf". Its leaves are fan shaped, with a long, toothless lower margin, and a toothed end. Its stems are hairless, unlike the other varieties. ## Distribution and habitat Banksia sessilis is endemic to the Southwest Botanical Province, a floristic province renowned as a biodiversity hotspot, located in the southwest corner of Western Australia. This area has a Mediterranean climate, with wet winters and hot, dry summers. B. sessilis occurs throughout much of the province, ranging from Kalbarri in the north, south to Cape Leeuwin, east along the south coast as far as Bremer Bay, and inland to Wongan Hills and Kulin. It thus spans a wide range of climates, occurring in all but the semi-arid areas well inland. It is also absent from the Karri forest in the cool, wet, southwest corner of the province, but even there, B. sessilis var. cordata occurs along the coast. The species tolerates a range of soils, requiring only that its soil be well-drained. 
Like most dryandras, it grows well in lateritic soils and gravels; this species is also found in deep sand, sand over laterite, and sand over limestone. It also occurs in a range of vegetation complexes, including coastal and kwongan heath, tall shrubland, woodland and open forest. It is a common understorey plant in drier areas of Jarrah forest, and forms thickets on limestone soils of the Swan Coastal Plain. Banksia sessilis sets a large amount of seed and is an aggressive coloniser of disturbed and open areas; for example, it has been recorded colonising gravel pits in the Darling Scarp. Nothing is known of the conditions that affect its distribution, as its biogeography is as yet unstudied. An assessment of the potential impact of climate change on this species found that its range is likely to contract by half in the face of severe change, but unlikely to change much under less severe scenarios. ## Ecology ### As food The nectar of B. sessilis is an important component of the diet of several species of honeyeater. In one study, B. sessilis was found to be the main source of nectar for all six species studied, namely the tawny-crowned honeyeater (Gliciphila melanops), white-cheeked honeyeater (Phylidonyris niger), western spinebill (Acanthorhynchus superciliosus), brown honeyeater (Lichmera indistincta), brown-headed honeyeater (Melithreptus brevirostris), and black honeyeater (Certhionyx niger). Moreover, B. sessilis played an important role in their distributions, with species that feed only on nectar occurring only where B. sessilis occurs, and remaining for longest at sites where B. sessilis is most abundant. Other honeyeaters that have been recorded feeding on B. sessilis include the red wattlebird (Anthochaera carunculata), western wattlebird (A. lunulata), and New Holland honeyeater (Phylidonyris novaehollandiae). Furthermore, a study of bird species diversity in wandoo woodland around Bakers Hill found that honeyeater species and numbers were much reduced in forest that lacked a Banksia sessilis understorey; the plant is a key source of nectar and insects during the winter months. A field study in jarrah forest 9 km south of Jarrahdale, where B. sessilis grows in scattered clumps, found that western wattlebirds and New Holland honeyeaters sought out groups of plants with the greatest numbers of new inflorescences, particularly those one or two days after anthesis, where nectar yield was highest. The birds likely recognise these by visual cues. Banksia sessilis is also a source of food for the Australian ringneck (Barnardius zonarius), and the long-billed black cockatoo (Calyptorhynchus baudinii), which tear open the follicles and consume the seeds. The introduced European honey bee (Apis mellifera) has also been observed feeding on B. sessilis, as have seven species of native bee, comprising four species of Hylaeus (including the banksia bee H. alcyoneus), two of Leioproctus, and a Lasioglossum. ### Life cycle Honeyeaters are clearly the most important pollination vectors, as inflorescences from which honeyeaters are excluded generally do not set any fruit. Moreover, honeyeaters have been observed moving from tree to tree with significant loads of B. sessilis pollen on their foreheads, beaks and throats, having acquired it by brushing against pollen presenters while foraging for nectar; experiments have shown that some of this pollen may be subsequently deposited on stigmas during later foraging. The flowers of B. sessilis have adaptations that encourage outcrossing. 
Firstly, they are protandrous: a flower's pollen is released around 72 hours before the flower itself becomes receptive to pollen, by which time around half of its pollen has lost its viability. Secondly, the period of maximum nectar production closely matches the period during which the flower is sexually active, so honeyeaters are enticed to visit at the most opportune time for pollination. This has proven an effective strategy: almost all pollen is removed within two to three hours of presentation. In addition, honeyeaters tend to move between inflorescences on different plants, rather than between inflorescences on the same plant, at least in high-density sites. These factors combine to make it fairly unusual for a flower to be fertilised by its own pollen. When self-fertilisation does occur, whether autogamous or geitonogamous, the resulting seed is almost always aborted, and the species ultimately achieves an outcrossing rate of nearly 100%, at least in high-density sites. Limited data for low-density sites, where honeyeaters move from plant to plant less frequently, suggest more of a mixed-mating system. The species is a prolific flowerer, and this, combined with the very high outcrossing rates, results in massive seed output. In one study, the average number of seeds produced per B. sessilis plant was 622, compared with an average of two for B. dallanneyi. This exceptionally high fecundity can be understood as an adaptation to regular bushfire. Most Banksia species can be placed in one of two broad groups according to their response to fire: resprouters survive fire, resprouting from a lignotuber or, more rarely, epicormic buds protected by thick bark; reseeders are killed by fire, but populations are rapidly re-established through the recruitment of seedlings. B. sessilis is a reseeder, but it differs from many other reseeders in not being strongly serotinous: the vast majority of seeds are released spontaneously in autumn, even in the absence of fire. The degree of serotiny is a matter of some contradiction in the scientific literature: it has been treated as "serotinous", "weakly serotinous" and "non-serotinous". Regardless of the terminology used, the massive spontaneous seed output of B. sessilis is its primary survival strategy, and is so effective that the species has a reputation as an excellent coloniser. However, this strategy, together with its relatively long juvenile period, makes it vulnerable to overly frequent fire. Seeds of B. sessilis are short-lived, and must germinate in the winter following their release, or they die. They are also very sensitive to heating, and thus killed by bushfire; in one study, just 30 seconds in boiling water reduced the germination rate from 85% to 22%, and not a single seed survived one minute of boiling. Like most other Proteaceae, B. sessilis has compound cluster roots, roots with dense clusters of short lateral rootlets that form a mat in the soil just below the leaf litter. These exude a range of carboxylates, including citrate, malonate and trans-aconitate, which mobilise otherwise poorly available nutrients, allowing their absorption from nutrient-poor soils, such as the phosphorus-deficient native soils of Australia. ### Disease Banksia sessilis is highly susceptible to dieback caused by the introduced plant pathogen Phytophthora cinnamomi, a soil-borne water mould that causes root rot; in fact it is so reliably susceptible it is considered a good indicator species for the presence of the disease. 
Most highly susceptible species quickly become locally extinct in infected areas, and in the absence of hosts the disease itself eventually dies out. However, B. sessilis, being an aggressive coloniser of disturbed and open ground, often colonises old disease sites. The new colonies are themselves infected, and thus P. cinnamomi survives at these sites indefinitely. The application of phosphite inhibits growth of P. cinnamomi in B. sessilis, but does not kill the pathogen. In one study, a foliar spray containing phosphite inhibited the growth of P. cinnamomi by over 90% in B. sessilis plants infected two weeks after spraying, and by 66% in plants infected one year after spraying; yet most plants infected shortly before or after spraying were dead 100 days later, while nearly all plants infected seven months after spraying survived a further 100 days. Phosphite is not known to affect plant growth, but has been shown to reduce pollen fertility: one study recorded fertility reductions of up to 50%, and, in a separate experiment, fertility reductions that persisted for more than a year. Infection of coastal stands of B. sessilis by the fungus Armillaria luteobubalina has also been recorded. The apparent infection rate of 0.31 is quite slow compared to the progress of other Armillaria species through pine plantations. ## Cultivation ### History It is not known whether the seed collection sent to the Royal Botanic Gardens, Kew, by Menzies in 1793 included seeds of B. sessilis, but if it did then it did not germinate. The species was successfully germinated, however, from Good's seed, which was sent from Sydney on 6 June 1802 and arrived at Kew the following year. According to Brown's notes it was flowering at Kew by May 1806, and in 1810 it was reported in the second edition of Hortus Kewensis as flowering "most part of the Year". In 1813 a flowering specimen from the nursery of Malcolm and Sweet was featured as Plate 1581 in Curtis's Botanical Magazine. By the 1830s the species was in cultivation in continental Europe. It was recorded as being cultivated in the garden of Karl von Hügel in Vienna, Austria in 1831, and in 1833 it was listed amongst the rare plants that had been introduced into Belgium. Along with several hundred other native Australian plants it was exhibited at plant shows held at Utrecht and Haarlem in the Netherlands in the 1840s and 1850s. By this time, however, English gardeners had already begun to lose interest in the Proteaceae, and by the end of the 19th century European interest in the cultivation of Proteaceae was virtually non-existent. In Australia, there was little interest in the cultivation of Australian plants until the mid-20th century, despite a long-standing appreciation of their beauty as wildflowers. For example, in 1933 and 1934 The Western Mail published a series of Edgar Dell paintings of Western Australian wildflowers, including a painting of B. sessilis. These were subsequently republished in Charles Gardner's 1935 West Australian Wild Flowers. One of the first published colour photographs of the species appeared in William Blackall's 1954 How to know Western Australian wildflowers, but this publication was restricted to plant identification. The species was discussed and illustrated in the 1959 Wildflowers of Western Australia, and in the 1973 Flowers and plants of Western Australia, but these books did not provide cultivation advice either. 
Possibly the first published information on the cultivation of Dryandra appeared in the magazine Australian Plants in June and September 1961. D. sessilis was among the species treated, but as there was not yet any experimental data on cultivation, information was restricted to its aesthetic qualities and the soil in which it naturally occurs. From its inception in 1962, the Kings Park and Botanic Garden undertook extensive research into the cultivation of native plants, resulting in two early publications that mentioned the cultivation potential of B. sessilis. In 1965, John Stanley Beard published Descriptive catalogue of Western Australian plants, "a work of reference in which the horticultural characteristics of the plants concerned could be looked up by the staff", which described D. sessilis as an erect shrub with pale yellow flowers appearing from May to October, growing in sand and gravel. Five years later, Arthur Fairall published West Australian native plants in cultivation. This presented largely the same information as Beard's catalogue, adding only that the species flowers well in its third season. ### Current knowledge According to current knowledge, B. sessilis is an extremely hardy plant that grows in a range of soils and aspects, so long as it is given good drainage, and tolerates both drought and moderate frost. Unlike many dryandras, it grows well on limestone (alkaline) soils. It flowers very heavily and is an excellent producer of honey. It attracts birds, and is also popular with beekeepers. However, its size makes it unsuitable for smaller gardens, and if given an ideal situation it may produce a great many seedlings. It is propagated only from seed, as propagating it from cuttings has proven virtually impossible. Germination takes about five or six weeks, and plants may take two years to flower.
72,412,168
The Dance of the Twisted Bull
1,170,136,389
2002 fashion collection by Alexander McQueen
[ "2000s fashion", "2001 in Paris", "Alexander McQueen collections", "British fashion" ]
The Dance of the Twisted Bull (Spring/Summer 2002; Spanish: El baile del toro retorcido) is the nineteenth collection by British designer Alexander McQueen for his eponymous fashion house. Twisted Bull was inspired by Spanish culture and art, especially the traditional clothing worn for flamenco dancing and bullfighting. In McQueen's typical fashion, the collection included sharp tailoring and historicist elements and emphasised femininity and sexuality. The runway show for Twisted Bull was staged during Paris Fashion Week on 6 October 2001 at the headquarters of the Stade Français sports club. It was McQueen's first collection following his departure from Givenchy and the sale of his company to the Gucci Group in 2001. Compared to his previous seasons, which tended to be theatrical and artistic, the runway show was simple, and the clothing designs were unusually commercial. McQueen confirmed that this was a business decision intended to drive sales for his first season under Gucci. Sales for the collection were reportedly strong. Reception for Twisted Bull was mostly positive, especially from British journalists, who highlighted the accessible designs and polished presentation. American journalists were less impressed, particularly with the dressmaking. The most noted look from the collection was a showpiece dress made to look as though its torso was pierced through by spears, which later appeared in both stagings of the retrospective exhibition Alexander McQueen: Savage Beauty. Other looks appeared in the 2022 retrospective exhibition Lee Alexander McQueen: Mind, Mythos, Muse. ## Background British designer Alexander McQueen was known in the fashion industry for his imaginative, sometimes controversial designs. His collections were strongly historicist, referencing and reworking historical narratives and concepts. His fashion shows were theatrical to the point of verging on performance art. The runway shows for his last two collections before The Dance of the Twisted Bull had both been in this mode: Voss (Spring/Summer 2001) was staged as a voyeuristic look inside a stereotypical insane asylum, while the set dressing for What A Merry-Go-Round (Autumn/Winter 2001) included an actual carousel ride. From 1996 to October 2001, McQueen was also – in addition to his responsibilities for his own label – head designer at French fashion house Givenchy. His time at Givenchy was fraught, primarily because of creative differences between him and the label, and the press speculated that he would leave his contract early. In 2000, before his contract with Givenchy had finished, McQueen signed a deal with Gucci, an Italian fashion house and rival to Givenchy, effectively daring Givenchy to fire him. Gucci bought 51 per cent of McQueen's company with McQueen remaining its creative director. Twisted Bull was McQueen's first collection for his own label under Gucci. ## Concept and creative process The Dance of the Twisted Bull (Spring/Summer 2002) is the nineteenth collection by British designer Alexander McQueen for his eponymous fashion house. It was inspired by Spanish culture and art, particularly the traditional clothing worn for flamenco dancing and bullfighting – traje de flamenca and traje de luces, respectively. The romantic, feminine collection incorporated ruffled and polka-dotted flamenco dresses, ornamented short jackets in the vein of the matador's traditional chaquetilla, and sharply tailored suits, the latter a McQueen staple. 
Some designs appeared to reference The Tailor's Pattern Book, a 1589 book of patterns by Spanish mathematician Juan de Alcega. Other historicist elements included corsets, which appeared integrated into garments and as outerwear. The collection's primary palette was red, black, and white. The darker colours of some ensembles referenced the moody work of Spanish painter Francisco Goya, and architectural elements referenced Spanish architect Antoni Gaudí. Writing in 2012, fashion historian Judith Watt noted that the collection's highly feminine styling was in line with trends for 2002, although she also found a significant influence from sportswear. McQueen described his customer for Twisted Bull as a woman wanting to look sexy at a nightclub, and consequently the collection had sexuality front and centre. Many outfits were styled to expose cleavage. Dresses were skintight, and some ensembles had cutouts exposing skin. On some runway looks, the breasts of the models were fully exposed. The form-fitting cut of the trouser suits emphasised the bodies of the models, and the use of masculine elements for womenswear subversively played up the sexual attractiveness of the traditional matador in a way that is often sidelined in Spanish culture. The juxtaposition of sexuality with violence and death and the tension between aggression and fragility were recurring themes in McQueen's work. The clothing in Twisted Bull was far more commercial than McQueen's typical designs, which tended to be more artistic than practical. Making the collection accessible and customer-focused was a business decision for McQueen, intended to drive sales for his first season with Gucci. McQueen stated that the overt sexuality of the collection was explicitly intended to push sales, saying: "It's romantic and it's hot sex. That's what makes the world go around and it's what sells clothes too." McQueen's commercial strategy seemingly paid off; Gucci president Domenico De Sole reported that the brand saw a 400 per cent increase in sales compared to previous collections. ## Runway show The runway show for Twisted Bull was staged on 6 October 2001, during Paris Fashion Week, at the headquarters of the Stade Français sports club in the 16th arrondissement of Paris. As a British designer, McQueen had always presented in London during London Fashion Week. Twisted Bull was the first collection he presented in Paris for his own label; afterward he showed all his womenswear collections there until his death in 2010. The show was sponsored by American Express, which had sponsored McQueen several times. Production was handled by Gainsbury & Whiting, and Katy England was in charge of overall styling. Headpieces were made by milliner Philip Treacy. Makeup artist Val Garland, then with MAC Cosmetics, styled makeup for the models. The look was dark and smoky, with a red, black, and grey colour palette that echoed the clothing. Stylist Guido Palau was responsible for the hair, which was given a retro style reminiscent of classic pin-up models and rockabilly fashion. The overall effect, according to Watt, was a grungy glamour that suggested the models had "crawled out of bed and thrown on something from the night before". Unlike many of his previous shows, the runway show for Twisted Bull was relatively mundane, with no complex set pieces or performance aspects. 
Models entered and exited through a curtain of grey smoke at the rear of the stage; video clips – flamenco dancing, bullfighting, and softcore pornography – were projected onto the smoke. The soundtrack was a combination of electronic tracks, flamenco guitar music, and Björk songs. Following the last model, the soundtrack changed to the sound of a woman moaning. A woman's face, apparently mid-coitus, was projected on the smokescreen. Her expression changed to one of fear, and the projection cut to a man swinging a sword. The smoke turned blood-red, and the models appeared en masse for a final turn. ### Notable pieces The collection's central showpiece was Look 33, worn on the runway by Irish model Laura Morgan. Look 33 is a long red and white ruffled flamenco-style dress designed to look as though it – and the model – had been pierced by decorative bullfighting spears. The long train of the dress was caught up on the spears in the back. The spears were created by jeweller Shaun Leane. Watt noted a similarity between the dress and a sketch of an impaled mermaid McQueen had made in 1990. A second dress, Look 61, also incorporated weaponry. This look was a strapless black-and-white dress with a sword sewn into the skirt. On the runway, the model held the sword perpendicular to her body so that the skirt's train was lifted up behind her. Jewellery designer Naomi Filmer created blown-glass body pieces for the collection at McQueen's request. Look 4 features "Ball in the Small of my Back", a sphere which fits over the wearer's hands while held behind them, dictating a dance-like posture with pulled-back shoulders. ## Reception The collection was generally well-received by British critics, who appreciated its more commercial designs. Despite the low-key presentation, journalist Hilary Alexander called it a "powerful and passionate show". John Davidson of The Glasgow Herald called the collection "truly polished" and agreed with McQueen's decision to forgo theatrics for the show, although he found the sexuality excessive. An unbylined style brief in The Guardian criticised the appearance of drop crotch pants in the collection, describing them as "not a nice look". The staff writer at Vogue España noted that the influence was a series of Spanish cultural clichés but called the collection a "perfect adaptation" of McQueen's brand to its new home at Gucci. American critics were less impressed, particularly with the dressmaking. Writing for The New York Times, Cathy Horyn called the show "overwrought" and dismissed the style of the dresses as being like a "rigid satin party skirt of the 1950s genre". American fashion editor Robin Givhan found the tailoring excellent but felt the "dressmaking flourishes were too showy and indulgent". Critics called out Look 33, the spear-pierced dress, as the most significant look from the collection. American journalist Dana Thomas wrote that it was the collection's "most poignant look". Journalist Jess Cartner-Morley called it one of McQueen's "classic show pieces". Rebecca Lowthorpe of The Independent agreed, also calling out Look 14 for having a skirt which appeared to be made "entirely out of Spanish fans". On the other hand, Davidson criticised the spear dress as "masochistic trompe l'oeil nonsense". ### Analysis In her book Alexander McQueen: Evolution, Catherine Gleason reports that some audience members were upset by the use of sexual sounds and imagery of apparent sexual violence that concluded the show. 
Some critics found the content particularly shocking as it came less than a month after the September 11 attacks in the United States. Conversely, The Adelaide Advertiser suggested that the relatively low-key shows at Paris Fashion Week that season indicated a subdued feeling in the fashion world following 11 September. The concept for the show had in fact been developed approximately four months in advance, well before the terrorist incident. McQueen dismissed the idea that he should have altered his collection in response to the attacks, saying "There's no link between the two things as far as I can see." Journalist Dana Thomas noted the parallel to an earlier Spanish-themed collection by British designer John Galliano during his time at Givenchy. The two men were often compared in the press due to their roughly parallel career arcs and similarly maximalist styles, and McQueen often sought to emulate or outdo Galliano's designs in his own work. Thomas argued that Twisted Bull was an effort to do so across an entire collection. Look 61, the dress which incorporated a sword, was similar to Look 10 from Galliano's Filibustiers (Spring/Summer 1993), a dress which also used a sword to hold up its train at an angle. Fashion historian Ingrid Loschek discussed Twisted Bull as an example of McQueen's habit of playing with dichotomies, and his ability to express emotions and ideas through the styling of the clothes and the runway show. She noted particularly the transformation of the "confident flamenco dancer who becomes a victim herself when a lance 'skewers' body and dress". ## Legacy McQueen revisited elements of the matador costume in Sarabande (Spring/Summer 2007), which featured a pair of black and white ensembles with ruffled shirts and embroidery reminiscent of Spanish blackwork. One outfit had tight trousers like the matador's taleguilla, and one had beadwork resembling their traditional braces. The Alexander McQueen archive retains ownership of Look 33, the spear-pierced dress. This look appeared in both stagings of the retrospective exhibition Alexander McQueen: Savage Beauty, where it was one of only two pieces from Twisted Bull to be featured. The other was Look 66, from the collection of Daphne Guinness: a beaded black jacket over beaded black jumpsuit, with a leather hat by Philip Treacy. Look 33 was used again for "Dark Angel", a 2015 retrospective editorial of McQueen's work in British Vogue by British fashion photographer Tim Walker. Several looks from Twisted Bull owned by the Los Angeles County Museum of Art appeared at the 2022 retrospective exhibition Lee Alexander McQueen: Mind, Mythos, Muse. The collection was placed in the Evolution and Existence section of the exhibition, which highlighted collections focused on "life cycles and the human condition".
76,811
Prince Albert of Saxe-Coburg and Gotha
1,173,562,980
Consort of Queen Victoria from 1840 to 1861
[ "11th Hussars officers", "1819 births", "1861 deaths", "19th-century British people", "Bailiffs Grand Cross of Honour and Devotion of the Sovereign Military Order of Malta", "British Protestants", "British field marshals", "British princes", "British royal consorts", "Chancellors of the University of Cambridge", "German Protestants", "German emigrants to the United Kingdom", "Grand Cross of the Legion of Honour", "Grand Crosses of the Order of Aviz", "Grand Crosses of the Order of Christ (Portugal)", "Grand Crosses of the Order of Saint James of the Sword", "Grand Crosses of the Order of Saint Stephen of Hungary", "Great Masters of the Order of the Bath", "Grenadier Guards officers", "Honorary Fellows of the Royal Society of Edinburgh", "House of Saxe-Coburg and Gotha (United Kingdom)", "Knights Companion of the Order of the Star of India", "Knights Grand Cross of the Order of St Michael and St George", "Knights Grand Cross of the Order of the Bath", "Knights of St Patrick", "Knights of the Garter", "Knights of the Golden Fleece of Spain", "Knights of the Thistle", "Members of Trinity House", "Members of the Privy Council of the United Kingdom", "People from Coburg", "Presidents of the British Science Association", "Presidents of the Zoological Society of London", "Prince Albert of Saxe-Coburg and Gotha", "Princes of Saxe-Coburg and Gotha", "Queen Victoria", "Recipients of the Order of St. Anna, 1st class", "Recipients of the Order of the Medjidie, 1st class", "Recipients of the Order of the Netherlands Lion", "Recipients of the Order of the Tower and Sword", "Recipients of the Order of the White Eagle (Russia)", "Royal reburials", "Scots Guards officers", "Sons of monarchs", "University of Bonn alumni" ]
Prince Albert of Saxe-Coburg and Gotha (Franz August Karl Albert Emanuel; 26 August 1819 – 14 December 1861) was the husband of Queen Victoria. As such, he was consort of the British monarch from their marriage on 10 February 1840 until his death in 1861. Albert was born in the Saxon duchy of Saxe-Coburg-Saalfeld to a family connected to many of Europe's ruling monarchs. At the age of 20, he married his cousin Victoria; they had nine children. Initially, he felt constrained by his role as consort, which did not afford him power or responsibilities. He gradually developed a reputation for supporting public causes, such as educational reform and the abolition of slavery worldwide, and he was entrusted with running the Queen's household, office and estates. He was heavily involved with the organisation of the Great Exhibition of 1851, which was a resounding success. Victoria came to depend more and more on Albert's support and guidance. He aided the development of Britain's constitutional monarchy by persuading his wife to be less partisan in her dealings with the British Parliament, but he actively disagreed with the interventionist foreign policy pursued during Lord Palmerston's tenure as Foreign Secretary. Albert died in 1861 at age 42, devastating Victoria so much that she entered into a deep state of mourning and wore black for the rest of her life. On her death in 1901, their eldest son succeeded as Edward VII, the first British monarch of the House of Saxe-Coburg and Gotha, named after the ducal house to which Albert belonged. ## Early life Prince Albert was born on 26 August 1819, at Schloss Rosenau, near Coburg, Germany, the second son of Ernest III, Duke of Saxe-Coburg-Saalfeld, and his first wife, Louise of Saxe-Gotha-Altenburg. His first cousin and future wife, Victoria, was born earlier in the same year with the assistance of the same accoucheuse, Charlotte von Siebold. He was baptised into the Lutheran Evangelical Church on 19 September 1819 in the Marble Hall at Schloss Rosenau, with water taken from the local river, the Itz. His godparents were his paternal grandmother, the Dowager Duchess of Saxe-Coburg-Saalfeld; his maternal grandfather, the Duke of Saxe-Gotha-Altenburg; the Emperor of Austria; the Duke of Teschen; and Emanuel, Count of Mensdorff-Pouilly. In 1825, Albert's great-uncle, Frederick IV, Duke of Saxe-Gotha-Altenburg, died, which led to a realignment of the Saxon duchies the following year; and Albert's father became the first reigning duke of Saxe-Coburg and Gotha. Albert and his elder brother, Ernest, spent their youth in close companionship, which was marred by their parents' turbulent marriage and eventual separation and divorce. After their mother was exiled from court in 1824, she married her lover, Alexander von Hanstein, Count of Pölzig and Beiersdorf. She presumably never saw her children again, and died of cancer at the age of 30 in 1831. The following year, their father married his niece Princess Marie of Württemberg; their marriage was not close, however, and Marie had little—if any—impact on her stepsons' lives. The brothers were educated privately at home by Christoph Florschütz and later studied in Brussels, where Adolphe Quetelet was one of their tutors. Like many other German princes, Albert attended the University of Bonn, where he studied law, political economy, philosophy and the history of art. He played music and he excelled at sport, especially fencing and riding. His tutors at Bonn included the philosopher Fichte and the poet Schlegel. 
## Marriage The idea of marriage between Albert and his cousin Victoria was first documented in an 1821 letter from his paternal grandmother, the Dowager Duchess of Saxe-Coburg-Saalfeld, who said that he was "the pendant to the pretty cousin". By 1836, this idea had also arisen in the mind of their ambitious uncle Leopold, who had been King of the Belgians since 1831. At this time, Victoria was the heir presumptive to the British throne. Her father, Prince Edward, Duke of Kent and Strathearn, the fourth son of King George III, had died when she was an infant, and her elderly uncle, King William IV, had no surviving legitimate children. Her mother, the Duchess of Kent, was the sister of both Albert's father—the Duke of Saxe-Coburg and Gotha—and King Leopold. Leopold arranged for his sister, Victoria's mother, to invite the Duke of Saxe-Coburg and Gotha and his two sons to visit her in May 1836, with the purpose of meeting Victoria. William IV, however, disapproved of any match with the Coburgs, and instead favoured the suit of Prince Alexander, second son of the Prince of Orange. Victoria was well aware of the various matrimonial plans and critically appraised a parade of eligible princes. She wrote, "[Albert] is extremely handsome; his hair is about the same colour as mine; his eyes are large and blue, and he has a beautiful nose and a very sweet mouth with fine teeth; but the charm of his countenance is his expression, which is most delightful." Alexander, on the other hand, she described as "very plain". Victoria wrote to her uncle Leopold to thank him "for the prospect of great happiness you have contributed to give me, in the person of dear Albert ... He possesses every quality that could be desired to render me perfectly happy." Although the parties did not undertake a formal engagement, both the family and their retainers widely assumed that the match would take place. Victoria came to the throne on 20 June 1837, aged eighteen. Her letters of the time show interest in Albert's education for the role he would have to play, although she resisted attempts to rush her into marriage. In the winter of 1838–39, the prince visited Italy, accompanied by the Coburg family's confidential adviser, Baron Stockmar. Albert returned to the United Kingdom with Ernest in October 1839 to visit the Queen, with the objective of settling the marriage. Albert and Victoria felt mutual affection and the Queen proposed to him on 15 October 1839. Victoria's intention to marry was declared formally to the Privy Council on 23 November, and the couple married on 10 February 1840 at the Chapel Royal, St James's Palace. Just before the marriage, Albert was naturalised by Act of Parliament, and granted the style of Royal Highness by an Order in Council. Initially Albert was not popular with the British public; he was perceived to be from an impoverished and undistinguished minor state, barely larger than a small English county. The British Prime Minister, Lord Melbourne, advised the Queen against granting her husband the title of "King Consort"; Parliament also objected to Albert being created a peer—partly because of anti-German sentiment and a desire to exclude Albert from any political role. Albert's religious views provided a small amount of controversy when the marriage was debated in Parliament: although as a member of the Lutheran Evangelical Church Albert was a Protestant, the non-Episcopal nature of his church was considered worrisome. 
Of greater concern, however, was that some of Albert's family were Roman Catholic. Melbourne led a minority government and the opposition took advantage of the marriage to weaken his position further. They opposed a British peerage for Albert, and Parliament granted him a smaller annuity than previous consorts: £30,000 instead of the usual £50,000. Albert claimed that he had no need of a British peerage, writing: "It would almost be a step downwards, for as a Duke of Saxony, I feel myself much higher than a Duke of York or Kent." For the next seventeen years Albert was formally titled "HRH Prince Albert"; then, on 25 June 1857, Victoria formally granted him the title Prince Consort. Victoria explained, in a letter to Lord Palmerston on 15 March 1857, that she was: "... inclined ... to content herself by simply giving her husband by Letters Patent the title of 'Prince Consort' which can injure no one while it will give him an English title consistent with his position, & avoid his being treated by Foreign Courts as a junior Member of the house of Saxe-Coburg". ## Consort of the Queen The position in which Albert was placed by his marriage, while one of distinction, also offered considerable difficulties; in his own words, "I am very happy and contented; but the difficulty in filling my place with the proper dignity is that I am only the husband, not the master in the house." The Queen's household was run by her former governess, Baroness Lehzen. Albert referred to her as the "House Dragon", and manoeuvred to dislodge the Baroness from her position. Within two months of the marriage, Victoria was pregnant. Albert started to take on public roles; he became President of the Society for the Extinction of Slavery (slavery was still lawful in most parts of the world beyond the British Empire); and helped Victoria privately with her government paperwork. In June 1840, while on a public carriage ride, Albert and the pregnant Victoria were shot at by Edward Oxford, who was later judged insane. Neither Albert nor Victoria was hurt, and Albert was praised in the newspapers for his courage and coolness during the attack. He was gaining public support as well as political influence, which showed itself practically when, in August, Parliament passed the Regency Act 1840 to designate him regent in the event of Victoria's death before their child reached the age of majority. Their first child, Victoria, named after her mother, was born in November. Eight other children would follow over the next seventeen years. All nine children survived to adulthood, which was remarkable for the era; biographer Hermione Hobhouse credited the healthy running of the nursery to Albert's "enlightened influence". In early 1841, he successfully removed the nursery from Lehzen's pervasive control, and in September 1842, Lehzen left Britain permanently—much to Albert's relief. After the 1841 general election, Melbourne was replaced as Prime Minister by Sir Robert Peel, who appointed Albert chairman of the Royal Commission in charge of redecorating the new Palace of Westminster. The Palace had burned down seven years before, and was being rebuilt. The commission, as a patron and purchaser of pictures and sculpture, was set up to promote the fine arts in Britain. The commission's work was slow, and the palace's architect, Charles Barry, took many decisions out of the commissioners' hands by decorating rooms with ornate furnishings that were treated as part of the architecture. Albert was more successful as a private patron and collector. 
Among his notable purchases were early German and Italian paintings—such as Lucas Cranach the Elder's Apollo and Diana and Fra Angelico's St Peter Martyr—and contemporary pieces from Franz Xaver Winterhalter and Edwin Landseer. Ludwig Gruner, of Dresden, assisted Albert in buying artworks of the highest quality.

Albert and Victoria were shot at again on both 29 and 30 May 1842, but were unhurt. The culprit, John Francis, was detained and condemned to death, although he was later reprieved. Some of the couple's early unpopularity came about because of their stiffness and adherence to protocol in public, though in private the couple were more easy-going. In early 1844, Victoria and Albert were apart for the first time since their marriage when he returned to Coburg on the death of his father.

By 1844, Albert had managed to modernise the royal finances and, through various economies, had sufficient capital to purchase Osborne House on the Isle of Wight as a private residence for their growing family. Over the next few years a house modelled in the style of an Italianate villa was built to the designs of Albert and Thomas Cubitt. Albert laid out the grounds, and improved the estate and farm. Albert managed and improved the other royal estates; his model farm at Windsor (Shaw Farm) was admired by his biographers, and under his stewardship the revenues of the Duchy of Cornwall—the hereditary property of the Prince of Wales—steadily increased. Unlike many landowners who approved of child labour and opposed Peel's repeal of the Corn Laws, Albert supported moves to raise working ages and free up trade. In 1846, Albert was rebuked by Lord George Bentinck when he attended the debate on the Corn Laws in the House of Commons to give tacit support to Peel.

During Peel's premiership, Albert's authority behind, or beside, the throne became more apparent. He had access to all the Queen's papers, was drafting her correspondence and was present when she met her ministers; he would even see them alone in her absence. The clerk of the Privy Council, Charles Greville, wrote of him: "He is King to all intents and purposes."

In 1847, Victoria and Albert spent a rainy holiday in the west of Scotland at Loch Laggan, but heard from their doctor, Sir James Clark, that Clark's son had enjoyed dry, sunny days farther east at Balmoral Castle. The tenant of Balmoral, Sir Robert Gordon, died suddenly in early October, and Albert began negotiations to take over the lease from the owner, the Earl Fife. In May the following year, Albert leased Balmoral, which he had never visited. In September 1848 he, his wife and their older children went there for the first time. They came to relish the privacy it afforded.

## Reformer and innovator

### Foreign affairs

Revolutions spread throughout Europe in 1848 as a result of a widespread economic crisis. Throughout the year, Victoria and Albert complained about Foreign Secretary Palmerston's independent foreign policy, which they believed further destabilised Continental European powers. Albert was concerned for many of his royal relatives, a number of whom were deposed by revolutionaries. He and Victoria, who gave birth to their daughter Louise during that year, spent some time away from London in the relative safety of Osborne. Although there were sporadic demonstrations in England, no effective revolutionary action took place.

### Domestic reforms

According to historian G. M.
Trevelyan, regarding the Prince and home affairs:

> His influence over the Queen was on the whole liberal; he greatly admired Peel, was a strong free-trader, and took more interest in scientific and commercial progress, and less in sport and fashion than was at all popular in the best society.

In 1847, Albert was elected Chancellor of the University of Cambridge after a close contest with the Earl of Powis. Albert used his position as chancellor to campaign successfully for reformed and more modern university curricula by expanding the subjects taught beyond the traditional mathematics and classics to include modern history and the natural sciences.

Albert gained public acclaim when he expressed paternalistic yet well-meaning and philanthropic views. In an 1848 speech to the Society for the Improvement of the Condition of the Labouring Classes, of which he was president, he expressed his "sympathy and interest for that class of our community who have most of the toil and fewest of the enjoyments of this world". It was the "duty of those who, under the blessings of Divine Providence, enjoy station, wealth, and education" to assist those less fortunate than themselves. His progressive and relatively liberal ideas were expressed by his support of emancipation, technological progress, science education, the ideas of Charles Darwin, and the welfare of the working classes. Albert led reforms in university education, welfare and the royal finances, and supported the campaign against slavery. He also had a special interest in applying science and art to manufacturing industry.

The Great Exhibition of 1851 arose from the annual exhibitions of the Society of Arts, of which Albert was president from 1843, and owed most of its success to his efforts to promote it. Albert served as president of the Royal Commission for the Exhibition of 1851, and had to fight for every stage of the project. In the House of Lords, Lord Brougham fulminated against the proposal to hold the exhibition in Hyde Park. Opponents of the exhibition prophesied that foreign rogues and revolutionists would overrun England, subvert the morals of the people, and destroy their faith. Albert thought such talk absurd and quietly persevered, trusting always that British manufacturing would benefit from exposure to the best products of foreign countries. The Queen opened the exhibition on 1 May 1851 in a specially designed and built glass building known as the Crystal Palace. It proved a colossal success. A surplus of £180,000 was used to purchase land in South Kensington on which to establish educational and cultural institutions, including the Natural History Museum, Science Museum, Imperial College London and what would later be named the Royal Albert Hall and the Victoria and Albert Museum. The area was referred to as "Albertopolis" by sceptics.

## Family and public life (1852–1859)

In 1852, John Camden Neild, an eccentric miser, left Victoria an unexpected legacy, which Albert used to obtain the freehold of Balmoral. As usual, he embarked on an extensive programme of improvements. The same year, he was appointed to several of the offices left vacant by the death of the Duke of Wellington, including the mastership of Trinity House and the colonelcy of the Grenadier Guards. With Wellington's death, Albert was able to propose and campaign for modernisation of the army, which was long overdue.
Thinking that the British military was unready for war, and that Christian (Russian) rule was preferable to Islamic (Ottoman) rule, Albert counselled a diplomatic solution to the conflict between the Russian and Ottoman empires. Palmerston was more bellicose, and favoured a policy that would prevent further Russian expansion. Palmerston was manoeuvred out of the cabinet in December 1853, but at about the same time a Russian fleet attacked the Ottoman fleet at anchor at Sinop. The London press depicted the attack as a criminal massacre, and Palmerston's popularity surged as Albert's fell. Within two weeks, Palmerston was re-appointed a minister. As public outrage at the Russian action continued, false rumours circulated that Albert had been arrested for treason and was being held prisoner in the Tower of London. By March 1854, Britain and Russia were embroiled in the Crimean War. Albert devised a master plan for winning the war by laying siege to Sevastopol while starving Russia economically, which became the Allied strategy after the Tsar decided to fight a purely defensive war. Early British optimism soon faded as the press reported that British troops were ill-equipped and mismanaged by aged generals using out-of-date tactics and strategy. The conflict dragged on as the Russians were as poorly prepared as their opponents. The Prime Minister, Lord Aberdeen, resigned, and Palmerston succeeded him. A negotiated settlement eventually put an end to the war by the March 1856 Treaty of Paris.

During the war, Albert arranged the marriage of his fourteen-year-old daughter, Victoria, to Prince Frederick William of Prussia, but delayed the wedding until she was seventeen. Albert hoped that his daughter and son-in-law would be a liberalising influence in the enlarging but very conservative Prussian state.

Albert promoted many public educational institutions. Chiefly at meetings in connection with them, he spoke of the need for better schooling. A collection of his speeches was published in 1857. Recognised as a supporter of education and technological progress, he was invited to speak at scientific meetings, such as the memorable address he delivered as president of the British Association for the Advancement of Science when it met at Aberdeen in 1859. His espousal of science met with clerical opposition; he and Palmerston unsuccessfully recommended a knighthood for Charles Darwin after the publication of On the Origin of Species, a proposal opposed by the Bishop of Oxford.

Albert continued to devote himself to the education of his family and the management of the royal household. His children's governess, Lady Lyttelton, thought him unusually kind and patient, and described him joining in family games with enthusiasm. He felt keenly the departure of his eldest daughter for Prussia when she married her fiancé at the beginning of 1858, and was disappointed that his eldest son, the Prince of Wales, did not respond well to the intense educational programme that Albert had designed for him. At the age of seven, the Prince of Wales was expected to take six hours of instruction, including an hour of German and an hour of French every day. When the Prince of Wales failed at his lessons, Albert caned him. Corporal punishment was common at the time, and was not thought unduly harsh. Albert's biographer Roger Fulford wrote that the relationships between the family members were "friendly, affectionate and normal ...
there is no evidence either in the Royal Archives or in the printed authorities to justify the belief that the relations between the Prince and his eldest son were other than deeply affectionate". Philip Magnus wrote in his biography of Albert's eldest son that Albert "tried to treat his children as equals; and they were able to penetrate his stiffness and reserve because they realised instinctively not only that he loved them but that he enjoyed and needed their company".

Albert was a talented amateur musician and composer. For his wedding, he composed a duet, Die Liebe hat uns nun vereint ("Love has now united us"). Felix Mendelssohn described Albert playing the Buckingham Palace organ "so charmingly and clearly and correctly that it would have done credit to any professional". After tuition from George Elvey, the organist at St George's Chapel, Windsor, Albert composed several choral pieces for Anglican worship, including settings of the Te Deum and Jubilate, and an anthem, Out of the Deep. His secular compositions included a cantata, L'Invocazione all'armonia, and Melody for the Violin, which Yehudi Menuhin later described as "pleasant music without presumption".

## Illness and death

In August 1859, Albert fell seriously ill with stomach cramps. His steadily worsening medical condition led to a sense of despair; the biographer Robert Rhodes James describes Albert as having lost "the will to live". Albert later had an accidental brush with death during a trip to Coburg in October 1860, when he was driving alone in a carriage drawn by four horses that suddenly bolted. As the horses continued to gallop toward a wagon waiting at a railway crossing, Albert jumped for his life from the carriage. One of the horses was killed in the collision, and Albert was badly shaken though his only physical injuries were cuts and bruises. He confided in his brother and eldest daughter that he sensed that his time had come.

Victoria's mother and Albert's aunt, the Duchess of Kent, died in March 1861, and Victoria was grief-stricken. Albert took on most of the Queen's duties despite his continuing chronic stomach trouble. The last public event over which he presided was the opening of the Royal Horticultural Gardens on 5 June 1861. In August, Victoria and Albert visited the Curragh Camp, Ireland, where the Prince of Wales was attending army manoeuvres. At the Curragh, the Prince of Wales was introduced by his fellow officers to Nellie Clifden, an Irish actress.

By November, Victoria and Albert had returned to Windsor, and the Prince of Wales had returned to Cambridge, where he was a student. Two of Albert's young cousins, brothers King Pedro V of Portugal and Prince Ferdinand, died of typhoid fever within five days of each other in early November. On top of that news, Albert was informed that gossip was spreading in gentlemen's clubs and the foreign press that the Prince of Wales was involved with Nellie Clifden. Albert and Victoria were horrified by their son's indiscretion and feared blackmail, scandal or pregnancy. Although Albert was ill and at a low ebb, he travelled to Cambridge to see the Prince of Wales on 25 November and discuss the indiscreet affair. In his final weeks, Albert suffered from pains in his back and legs.

Also in November 1861, the Trent Affair—the forcible removal of Confederate envoys from a British ship, the RMS Trent, by Union forces during the American Civil War—threatened war between the United States and Britain.
The British government prepared an ultimatum and readied a military response. Albert was gravely ill but intervened to defuse the crisis. In a few hours, he revised the British demands in a manner that allowed the Lincoln administration to surrender the Confederate commissioners who had been seized from the Trent and to issue a public apology to London without losing face. The key idea, based on a suggestion from The Times, was to give Washington the opportunity to deny that it had officially authorised the seizure and thereby to apologise for the captain's mistake.

On 9 December, one of Albert's doctors, William Jenner, diagnosed him with typhoid fever. Albert died at 10:50 p.m. on 14 December 1861 in the Blue Room at Windsor Castle, in the presence of the Queen and five of their nine children. He was 42 years old. The contemporary diagnosis was typhoid fever, but modern writers have pointed out that Albert's ongoing stomach pain, which left him ill for at least two years before his death, may indicate that a chronic disease such as Crohn's disease, kidney failure or abdominal cancer was the cause of death.

## Legacy

The Queen's grief was overwhelming, and the tepid feelings that the public had for Albert were replaced by sympathy. The widowed Victoria never recovered from Albert's death; she entered into a deep state of mourning and wore black for the rest of her life. Albert's rooms in all his houses were kept as they had been, even with hot water brought in the morning and linen and towels changed daily. Such practices were common in the houses of the very rich. Victoria withdrew from public life, and her seclusion eroded some of Albert's work in attempting to remodel the monarchy as a national institution by setting a moral, if not political, example. Albert is credited with introducing the principle that the British royal family should remain above politics. Before her marriage, Victoria had supported the Whigs: early in her reign, for example, she had managed to thwart the formation of a Tory government by Sir Robert Peel by refusing to accept substitutions that Peel wanted to make among her ladies-in-waiting.

Albert's funeral was held on 23 December 1861 at St George's Chapel, Windsor Castle. His body was temporarily entombed in the chapel's Royal Vault. A year after his death, his remains were deposited at the Royal Mausoleum, Frogmore, which remained incomplete until 1871. The sarcophagus, in which both he and the Queen were eventually laid, was carved from the largest block of granite that had ever been quarried in Britain. Despite Albert's request for no effigies of him to be raised, many public monuments were erected all over the country and across the British Empire. The most notable are the Royal Albert Hall and the Albert Memorial in London. The plethora of memorials erected to Albert became so great that Charles Dickens told a friend that he sought an "inaccessible cave" to escape from them. Places and objects named after Albert range from Lake Albert in Africa to the city of Prince Albert, Saskatchewan, and the Albert Medal presented by the Royal Society of Arts. Four regiments of the British Army were named after him: 11th (Prince Albert's Own) Hussars, Prince Albert's Light Infantry, Prince Albert's Own Leicestershire Regiment of Yeomanry Cavalry and The Prince Consort's Own Rifle Brigade. He and Victoria had shown a keen interest in the establishment and development of Aldershot in Hampshire as a garrison town in the 1850s.
They had a wooden Royal Pavilion built there in which they would often stay when they attended military reviews. Albert established and endowed the Prince Consort's Library at Aldershot, which still exists.

Biographies published after his death were typically heavy on eulogy. Theodore Martin's five-volume magnum opus was authorised and supervised by Queen Victoria, and her influence shows in its pages. Nevertheless, it is an accurate and exhaustive account. Lytton Strachey's Queen Victoria (1921) was more critical, but it was discredited in part by mid-20th-century biographers such as Hector Bolitho and Roger Fulford, who, unlike Strachey, had access to Victoria's journal and letters. Popular myths about Prince Albert, such as the claim that he introduced Christmas trees to Britain, are dismissed by scholars. Recent biographers such as Stanley Weintraub portray Albert as a figure in a tragic romance who died too soon and was mourned by his lover for a lifetime. In the 2009 film The Young Victoria, Albert, played by Rupert Friend, is made into a heroic character. In the fictionalised depiction of the 1840 shooting, he is struck by a bullet, which did not happen in real life.

## Titles, styles, honours and arms

### Titles and styles

In the United Kingdom, Albert was styled "His Serene Highness Prince Albert of Saxe-Coburg and Gotha" in the months before his marriage. He was granted the style of Royal Highness on 6 February 1840, and given the title of Prince Consort on 25 June 1857.

### British honours

- KG: Royal Knight of the Garter, 16 December 1839
- GCB: Knight Grand Cross of the Bath (military), 6 March 1840; Great Master, 25 May 1847
- GCMG: Knight Grand Cross of St Michael and St George, 15 January 1842
- KT: Knight of the Thistle, 17 January 1842
- KP: Extra and Principal Knight of St. Patrick, 20 January 1842
- KSI: Extra Knight of the Star of India, 25 June 1861

#### Military appointments

- Field Marshal of the British Army, 8 February 1840
- Colonel-in-chief of the 11th (Prince Albert's Own) Hussars, 30 April 1840 – 1842
- Colonel of the Scots Fusilier Guards, 25 April 1842 – 1852
- Captain-general and Colonel of the Honourable Artillery Company, 1843
- Constable and Governor of Windsor Castle, 1843
- Colonel-in-chief of the 60th (The King's Royal Rifle Corps) Regiment of Foot, 15 August 1850 – 1852
- Colonel of the 1st Grenadier Guards, 23 August 1852
- Colonel-in-chief of the Rifle Brigade, 23 September 1852

### Foreign honours

### Arms

Upon his marriage to Queen Victoria in 1840, Prince Albert received a personal grant of arms, being the royal coat of arms of the United Kingdom differenced by a white three-point label with a red cross in the centre, quartered with his ancestral arms of Saxony. They are blazoned: "Quarterly, 1st and 4th, the Royal Arms, with overall a label of three points Argent charged on the centre with cross Gules; 2nd and 3rd, Barry of ten Or and Sable, a crown of rue in bend Vert". The arms are unusual, being described by S. T. Aveling as a "singular example of quartering differenced arms, [which] is not in accordance with the rules of Heraldry, and is in itself an heraldic contradiction." Prior to his marriage Albert used the arms of his father undifferenced, in accordance with German custom.

Albert's Garter stall plate displays his arms surmounted by a royal crown with six crests for the House of Saxe-Coburg and Gotha; these are, from left to right:

1. "A bull's head caboshed Gules armed and ringed Argent, crowned Or, the rim chequy Gules and Argent" for Mark.
2. "Out of a coronet Or, two buffalo horns Argent, attached to the outer edge of five branches fesswise each with three linden leaves Vert" for Thuringia.
3. "Out of a coronet Or, a pyramidal chapeau charged with the arms of Saxony ensigned by a plume of peacock feathers Proper out of a coronet also Or" for Saxony.
4. "A bearded man in profile couped below the shoulders clothed paly Argent and Gules, the pointed coronet similarly paly terminating in a plume of three peacock feathers" for Meissen.
5. "A demi griffin displayed Or, winged Sable, collared and langued Gules" for Jülich.
6. "Out of a coronet Or, a panache of peacock feathers Proper" for Berg.

The supporters were the crowned lion of England and the unicorn of Scotland (as in the Royal Arms) charged on the shoulder with a label as in the arms. Albert's personal motto is the German Treu und Fest (Loyal and Sure). This motto was also used by Prince Albert's Own or the 11th Hussars.

## Issue

Prince Albert's 42 grandchildren included four reigning monarchs: King George V of the United Kingdom; Wilhelm II, German Emperor; Ernest Louis, Grand Duke of Hesse; and Charles Edward, Duke of Saxe-Coburg and Gotha. They also included five consorts of monarchs: Empress Alexandra of Russia and Queens Maud of Norway, Sophia of Greece, Victoria Eugenie of Spain, and Marie of Romania. Albert's many descendants include royalty and nobility throughout Europe.

## Ancestry

## See also

- John Brown
- List of coupled cousins
- Royal Albert Memorial Museum
56,378,386
Parliament of 1327
1,169,973,628
English parliament
[ "1327 in England", "14th century in London", "14th-century English parliaments", "Edward II of England", "Edward III of England", "House of Plantagenet" ]
The Parliament of 1327, which sat at the Palace of Westminster between 7 January and 9 March 1327, was instrumental in the transfer of the English Crown from King Edward II to his son, Edward III. Edward II had become increasingly unpopular with the English nobility due to the excessive influence of unpopular court favourites, the patronage he accorded them, and his perceived ill-treatment of the nobility. By 1325, even his wife, Queen Isabella, despised him. Towards the end of the year, she took the young Edward to her native France, where she entered into an alliance with the powerful and wealthy nobleman Roger Mortimer, whom her husband had previously exiled. The following year, they invaded England to depose Edward II. Almost immediately, the King's resistance was beset by betrayal, and he eventually abandoned London and fled west, probably to raise an army in Wales or Ireland. He was soon captured and imprisoned.

Isabella and Mortimer summoned a parliament to confer legitimacy on their regime. The assembly began gathering at Westminster on 7 January, but little could be done in the absence of the King. The fourteen-year-old Edward was proclaimed "Keeper of the Realm" (but not yet king), and a parliamentary deputation was sent to Edward II asking him to allow himself to be brought to parliament. He refused, and the parliament continued without him. The King was accused of offences ranging from the promotion of favourites to the destruction of the church, resulting in a betrayal of his coronation oath to the people. These were known as the "Articles of Accusation". The City of London was particularly aggressive in its attacks on Edward II, and its citizens may have helped intimidate those attending the parliament into agreeing to the King's deposition, which occurred on the afternoon of 13 January.

On or around 21 January, the Lords Temporal sent another delegation to the King to inform him of his deposition, effectively giving Edward an ultimatum: if he did not agree to hand over the crown to his son, then the lords in parliament would give it to somebody outside the royal family. King Edward wept but agreed to their conditions. The delegation returned to London, and Edward III was proclaimed king immediately. He was crowned on 1 February 1327. In the aftermath of the parliamentary session, his father remained imprisoned, being moved around to prevent attempted rescues; he died—presumed killed, probably on Mortimer's orders—that September. Crises continued for Mortimer and Isabella, who were de facto rulers of the country, partly because of Mortimer's own greed, mismanagement, and mishandling of the new king. Edward III led a coup d'état against Mortimer in 1330, overthrew him, and began his personal rule.

## Background

King Edward II of England had court favourites who were unpopular with his nobility, such as Piers Gaveston and Hugh Despenser the Younger. Gaveston was killed during an earlier noble rebellion against Edward in 1312, and Despenser was hated by the English nobility. Edward was also unpopular with the common people due to his repeated demands from them for unpaid military service in Scotland. None of his campaigns there were successful, and this led to a further decline in his popularity, particularly with the nobility. His image was further diminished in 1322 when he executed his cousin, Thomas, Earl of Lancaster, and confiscated the Lancaster estates.
Historian Chris Given-Wilson has written how by 1325 the nobility believed that "no landholder could feel safe" under the regime. This distrust of Edward was shared by his wife, Isabella of France, who believed Despenser responsible for poisoning the King's mind against her. In September 1324 Queen Isabella had been publicly humiliated when the government declared her an enemy alien, and the King had immediately repossessed her estates, probably at the urging of Despenser. Edward also disbanded her retinue. Edward had already been threatened with deposition on two previous occasions (in 1310 and 1321). Historians agree that hostility towards Edward was universal. W. H. Dunham and C. T. Wood ascribed this to Edward's "cruelty and personal faults", suggesting that "very few, not even his half-brothers or his son, seemed to care about the wretched man" and that none would fight for him. A contemporary chronicler described Edward as rex inutilis, or a "useless king".

France had recently invaded the Duchy of Aquitaine, then an English royal possession. In response, King Edward sent Isabella to Paris, accompanied by their thirteen-year-old son, Edward, to negotiate a settlement. Contemporaries believed she had sworn, on leaving, never to return to England with the Despensers in power. Soon after her arrival, correspondence between Isabella and her husband, as well as between them and her brother King Charles IV of France and Pope John XXII, effectively disclosed the royal couple's increasing estrangement to the world. A contemporary chronicler reports how Isabella and Edward became increasingly scathing of each other, worsening relations. By December 1325 she had entered into a possibly sexual relationship in Paris with the wealthy exiled nobleman Roger Mortimer. This was public knowledge in England by March 1326, and the King openly considered a divorce. He demanded that Isabella and Edward return to England, which they refused to do: "she sent back many of her retinue but gave trivial excuses for not returning herself", noted her biographer, John Parsons. Their son's failure to break with his mother angered the King further. Isabella became more strident in her criticisms of Edward's government, particularly against Walter de Stapledon, Bishop of Exeter, a close associate of the King and Despenser. King Edward alienated his son by putting the prince's estates under royal administration in January 1326, and the following month the King ordered that both the prince and his mother be arrested on landing in England.

While in Paris, the Queen became the head of King Edward's exiled opposition. Along with Mortimer, this group included Edmund of Woodstock, Earl of Kent, Henry de Beaumont, John de Botetourt, John Maltravers and William Trussell. All were united by hatred of the Despensers. Isabella portrayed herself and Prince Edward as seeking refuge from her husband and his court, both of whom she claimed were hostile to her, and as needing protection from Edward II. King Charles refused to countenance an invasion of England; instead, the rebels gained the Count of Hainaut's backing. In return, Isabella agreed that her son would marry the Count's daughter Philippa. This was a further insult to Edward II, who had intended to use his eldest son's marriage as a bargaining tool against France, probably intending a marriage alliance with Spain.

### Invasion of England

From February 1326 it was clear in England that Isabella and Mortimer intended to invade.
There were false alarms, and as a defensive measure large ships were forbidden from leaving English ports, with some pressed into royal service. King Edward declared war on France in July; Isabella and Mortimer invaded England in September, landing in Suffolk on the 24th. The commander of the royal fleet assisted the rebels: the first of many betrayals Edward II suffered. Isabella and Mortimer soon found they had significant support among the English political class. They were quickly joined by Thomas, Earl of Norfolk, the King's brother, accompanied by Henry, Earl of Leicester (brother of the executed Earl of Lancaster), and soon afterwards arrived the Archbishop of Canterbury and the Bishops of Hereford and Lincoln. Within the week, support for the King had dissolved, and, accompanied by Despenser, he deserted London and travelled west. Edward's flight to the west precipitated his downfall. Historian Michael Prestwich describes the King's support as collapsing "like a building hit by an earthquake". Edward's rule was already weak, and "even before the invasion, along with preparation, there had been panic. Now there was simply panic". Ormrod notes how

> Given that Mortimer and his adherents were already condemned traitors and that any engagement with the invading force was to be treated as an act of open rebellion, it is all the more striking how many great men were prepared to enter upon such a high-risk venture at so early a stage in its prosecution. In this respect at least the presence of the heir to the throne in the queen's entourage may have proved decisive.

King Edward's attempt to raise an army in South Wales was to no avail, and he and Despenser were captured on 16 November 1326 near Llantrisant. This, along with the unexpected swiftness with which the entire regime had collapsed, forced Isabella and Mortimer to wield executive power until they made arrangements for a successor to the throne. The King was incarcerated by the Earl of Leicester, while those suspected of being Despenser spies or supporters of the King—particularly in London, which was aggressively loyal to the Queen—were murdered by mobs. Isabella spent the last months of 1326 in the West Country, and while in Bristol witnessed the hanging of Despenser's father, the Earl of Winchester, on 27 October. Despenser himself was taken to Hereford and executed there within the month.

In Bristol, Isabella, Mortimer and the accompanying lords discussed strategy. Not yet possessing the Great Seal, on 26 October they proclaimed the young Edward guardian of the realm, declaring that "by the assent of the whole community of the said kingdom present there, they unanimously chose [Edward III] as keeper of the said kingdom". He was not yet officially declared king. The rebels' description of themselves as a community deliberately harked back to the reform movement of Simon de Montfort and the baronial league, which had described its reform programme as being of the community of the realm against Henry III. Claire Valente has pointed out how, in reality, the most common phrase heard "was not 'the community of the realm', but 'the quarrel of the earl of Lancaster'", illustrating how the struggle was still a factional one within baronial politics, whatever cloak it may have appeared to possess as a reform movement. By 20 November 1326 the Bishop of Hereford had retrieved the Great Seal from the King, and delivered it to the King's son. He could now be announced as his father's heir apparent.
Although, at this stage, it might still have been possible for Edward II to remain king, says Ormrod, "the writing was on the wall". A document issued by Isabella and her son at this time described their respective positions thus:

> Isabel by the grace of God Queen of England, lady of Ireland, Countess of Ponthieu and we, Edward, eldest son of the noble King Edward of England, Duke of Gascony, Earl of Chester, of Ponthieu, of Montreuil...

## Summoning of parliament

Isabella, Mortimer and the lords arrived in London on 4 January 1327. In response to the previous year's spate of murders, Londoners had been forbidden to bear arms, and two days later all citizens had sworn an oath to keep the peace. Parliament met on 7 January to consider the state of the realm now that the King was incarcerated. It had originally been summoned by Isabella and the Prince, in the name of the King, on 28 October the previous year. Parliament had been intended to assemble on 14 December 1326, but on 3 December—still in the name of the King—further writs were issued deferring the sitting until early the next year. This, it was implied, was due to the King being abroad, rather than imprisoned. Because of this, parliament would have to be held before the Queen and Prince Edward. The History of Parliament Trust has described the legality of the writs as being "highly questionable", and C. T. Wood called the sitting "a show of pseudo-parliamentary regularity", "stage-managed" by Mortimer and Thomas, Lord Wake. For Isabella and Mortimer, governing through parliament was only a temporary solution to a constitutional problem, because at some point their positions would likely be challenged legally. Thus, suggests Ormrod, they had to enforce a solution favourable to Mortimer and the Queen, by any means they could.

Contemporaries were uncertain as to the legality of Isabella's parliament. Edward II was still king, although in official documents this was only alongside his "most beloved consort Isabella queen of England" and his "firstborn son keeper of the kingdom", in what Phil Bradford called a "nominal presidency". King Edward was said to be abroad when in reality he was imprisoned in Kenilworth Castle. It was maintained that he desired a "colloquium" and a "tractatum" (conference and consultation) with his lords "upon various affairs touching himself and the state of his kingdom", hence the holding of parliament. Supposedly it was Edward II himself who postponed the first sitting until January, "for certain necessary causes and utilities", presumably at the behest of the Queen and Mortimer.

A priority for the new regime was deciding what to do with Edward II. Mortimer considered holding a state trial for treason, in the expectation of a guilty verdict and a death sentence. He and other lords discussed the matter at Isabella's Wallingford Castle just after Christmas, but with no agreement. The Lords Temporal affirmed that Edward had failed his country so gravely that only his death could heal it; the attending bishops, on the other hand, held that whatever his faults, he had been anointed king by God. This presented Isabella and Mortimer with two problems. First, the bishops' argument was likely to carry popular weight, since executing an anointed king would widely be understood as risking the wrath of God. Second, public trials always bring the danger of an unintended verdict, particularly as it seems likely a broad body of public opinion doubted whether an anointed king could even commit treason. Such a result would mean not only Edward's release but his restoration to the throne.
Mortimer and Isabella sought to avoid a trial and yet keep Edward II imprisoned for life. The King's imprisonment (officially by his son) had become public knowledge, and Isabella's and Mortimer's hand was forced as the arguments for the young Edward being named keeper of the kingdom were now groundless (as the King had clearly returned to his realm—one way or another).

### Attendance

No parliament had sat since November 1325. Only 26 of the 46 barons who had been summoned in October 1326 for the December parliament were then also summoned to that of January 1327, and six of those had never received summonses under Edward II at all. Officially, the instigators of the parliament were the Bishops of Hereford and Winchester, Roger Mortimer and Thomas Wake; Isabella almost certainly played a background role. They summoned, as Lords Spiritual, the Archbishop of Canterbury and fifteen English and four Welsh bishops as well as nineteen abbots. The Lords Temporal were represented by the Earls of Norfolk, Kent, Lancaster, Surrey, Oxford, Atholl and Hereford. Forty-seven barons, twenty-three royal justices, and several knights and burgesses were summoned from the shires and the Cinque Ports. They may well have been encouraged, suggests Maddicott, by the wages to be paid to those attending: the "handsome sum" of four shillings a day for a knight and two for a burgess. The knights provided the bulk of Isabella's and the Prince's vocal support; they included Mortimer's sons, Edward, Roger and John. Sir William Trussell was appointed procurator, or Speaker, despite his not being an elected member of parliament. Although the office of procurator was not new, the purpose of Trussell's role set a constitutional precedent, as he was authorised to speak on behalf of parliament as a body. A chronicle describes Trussell as one "who cannot disagree with himself and, [therefore], shall ordain for all".

There were fewer lords present than were traditionally summoned, which increased the influence of the Commons. This may have been a deliberate strategy on the part of Isabella and Mortimer, who, suggests Dodd, would have known well that in the occasionally tumultuous parliaments of earlier reigns, "the trouble that had been caused in parliament had emanated almost exclusively from the barons". The Archbishop of York, who had been summoned to the December parliament, was "conspicuous by his absence" from the January sitting. Some Welsh MPs also received summonses, but these had deliberately been despatched too late for those elected to attend; others, such as the sheriff of Meirionnydd, Gruffudd Llwyd, refused to attend, out of loyalty to Edward II and also hatred of Roger Mortimer.

Although a radical gathering, the parliament was to some degree consistent with previous assemblies, being dominated by lords reliant on a supportive Commons. It differed, though, in the greater-than-usual influence that outsiders and commoners had, such as those from London. The January–February parliament was geographically broader too, as it contained unelected members from Bury St Edmunds and St Albans: says Maddicott, "those who planned the deposition reached out in parliament to those who had no right to be there". And, says Dodd, the rebels deliberately made parliament "centre stage" to their plans.
## Parliament assembled

### The King's absence

Before parliament met, the lords had sent Adam Orleton (the Bishop of Hereford) and William Trussell to Kenilworth to see the King, with the intention of persuading Edward to return with them and attend parliament. They failed in this mission: Edward flatly refused and roundly cursed them. The envoys returned to Westminster on 12 January, by which time parliament had been sitting for five days. It was felt that nothing could be done until the King arrived: historically a parliament could only pass statutes with the monarch present. On hearing from Orleton and Trussell how Edward had denounced them, the King's opponents were no longer willing to let his absence stand in their way. Edward II's refusal to attend failed to prevent the parliament from taking place, the first time this had ever happened.

#### Constitutional crisis

The various titles bestowed on the younger Edward at the end of 1326—which acknowledged his unique position in government while avoiding calling him king—reflected an underlying constitutional crisis, of which contemporaries were keenly aware. The fundamental question was how the crown was transferred between two living kings, a situation which had never arisen before. Valente has described how this "upset the accepted order of things, threatened the sacrosanctity of kingship, and lacked clear legality or established process". Contemporaries were also uncertain as to whether Edward II had abdicated or was being deposed. On 26 October it had been recorded in the Close Rolls that Edward had "left or abandoned his kingdom", and his absence enabled Isabella and Mortimer to rule. They could legitimately argue that, since King Edward had provided no regent during his absence (as would be usual), his son should be made governor of the kingdom in his father's stead. They also said that Edward II held Parliament in contempt by calling it a treasonous assembly and insulted those attending it as traitors. It is unknown whether the King did, in fact, say or believe this, but it certainly suited Isabella and Mortimer for parliament to think so. If Edward did denounce parliament then he probably did not realise how it could be used against him. In any case, Edward's absence saved the couple the embarrassment of having a reigning king present when they deposed him, and Seymour Phillips suggests that if Edward had attended he might have found enough support to disrupt their plans.

### Proceedings of Monday, 12 January

Parliament had to consider its next step. Bishop Orleton—emphasising Isabella's fear of the King—asked the assembled lords whom they would prefer to rule, Edward or his son. The response was sluggish, with no rush to either depose or acclaim. Deposition had been raised too suddenly for many members to stomach: the King was still not entirely friendless, and indeed, has been described by Paul Dryburgh as casting an "ominous shadow" over the proceedings. Orleton suspended proceedings until the next day to allow the lords to dwell on the question overnight. Also on the 12th, Sir Richard de Betoyne, the Mayor of London, and the Common Council wrote to the lords in support of both the Earl of Chester being made King and the deposition of Edward II, whom they accused of failing to uphold his coronation oath and the duties of the crown. Mortimer, who was highly regarded by Londoners, may well have instigated this as a means of influencing the lords.
The Londoners' petition also proposed that the new king should be governed by his Council until it was clear he understood his coronation oath and regal responsibilities. This petition the lords accepted; another, requesting that the King hold Westminster parliaments annually until he reached his majority, was not.

### Proceedings of Tuesday, 13 January

Whether Edward II resigned his throne or was forced from it under pressure, the crown legally changed hands on 13 January with the support, it was recorded, of "all the baronage of the land". Parliament met in the morning and then suspended itself. A large group of the lords temporal and spiritual made their way to the City of London's Guildhall where they swore an oath "to uphold all that has been ordained or shall be ordained for the common profit". This was intended to present those in parliament who disagreed with deposition with a fait accompli. At the Guildhall they also swore to uphold the constitutional limitations of the Ordinances of 1311. The group then returned to Westminster in the afternoon, and the lords formally acknowledged that Edward II was no longer to be King.

Several orations were made. Mortimer, speaking on behalf of the lords, announced their decision. Edward II, he proclaimed, would abdicate and "...Sir Edward ... should have the government of the realm and be crowned king". The French chronicler Jean Le Bel described how the lords proceeded to document Edward II's "ill-advised deeds and actions" to create a legal record which was duly presented to parliament. This record declared "such a man was unfit ever to wear the crown or call himself King". This list of misdeeds—probably drawn up by Orleton and Stratford personally—was known as the Articles of Accusation. The bishops gave sermons—Orleton, for example, spoke of how "a foolish king shall ruin his people", and, report Dunham and Wood, he "dwelt weightily upon the folly and unwisdom of the king, and upon his childish doings". This, says Ian Mortimer, was "a tremendous sermon, rousing those present in the way he knew best, through the power of the word of God". Orleton based his sermon on the biblical text "Where there is no governor the people shall fall" from the Book of Proverbs, while the Archbishop of Canterbury took for his text Vox Populi, Vox Dei.

### Articles of accusation

During the sermons, the articles of deposition were officially presented to the assembly. In contrast to the elaborate and floridly hyperbolic accusations previously launched at the Despensers, this was a relatively simple document. The King was accused of being incapable of fair rule; of indulging false counsellors; preferring his own amusements to good government; neglecting England and losing Scotland; dilapidating the church and imprisoning the clergy; and, all in all, being in fundamental breach of the coronation oath he had made to his subjects. All of this, the rebels claimed, was so well known as to be undeniable. The articles accused Edward's favourites of tyranny, although not the King himself, whom they described as "incorrigible, without hope of reform". England's succession of military failures in Scotland and France rankled with the lords: Edward had fought no successful campaigns in either theatre, yet had raised enormous levies to enable him to do so. Such levies, says F. M. Powicke, "could only have been justified by military success".
Accusations of military failure were not wholly fair in placing the blame for these losses, as they did, so squarely on Edward II's shoulders: Scotland had arguably been almost lost in 1307. Edward's father had, says Seymour Phillips, left him "an impossible task", having started the war without making sufficient gains to allow his son to finish it. And Ireland had been the theatre of one of the King's few military successes—the English victory at the Battle of Faughart in 1318 had crushed Robert the Bruce's ambitions in Ireland (and seen the death of his brother). Only the King's military failures, though, were remembered, and indeed, they were the most damning of all the articles:

> By the common consent of all, the archbishop of Canterbury declared how the good King Edward when he died had left to his son his lands of England, Ireland, Wales, Gascony and Scotland in good peace; how Gascony and Scotland had been as good as lost by evil counsel and evil ward ...

### The King's deposition

Every speaker on 13 January reiterated the articles of accusation, and all concluded by offering the young Edward as king, if the people approved him. The crowd outside, which included a large company of unruly Londoners, says Valente, had been "whipped ... into such fervour" by "dramatic outcries at appropriate points in the orations" from Thomas Wake, who repeatedly rose and demanded of the assembly whether they agreed with each speaker: "Do you agree? Do the people of the country agree?" Wake's exhortations—arms outstretched, says Prestwich, he cried "I say for myself that he shall reign no more"—combined with the intimidating mob, led to tumultuous responses of "Let it be done! Let it be done!" This, says May McKisack, gave the new regime a degree of "support of popular clamour". The Londoners played a key role in ensuring that remaining supporters of Edward II were intimidated and overwhelmed by events. Edward III was proclaimed king. At the end of the day, said Valente, "the electio of the magnates received the acclamatio of the populi, 'Fiat!'". Proceedings drew to a close with a chorus of Gloria, laus et honor, and perhaps oaths of homage from the lords to the new king. Assent to the new regime was not universal: the Bishops of London, Rochester and Carlisle abstained from the day's affairs in protest, and Rochester was later beaten up by a London mob because of his opposition.

#### The King's response

One final action remained to be taken: the ex-King in Kenilworth had to be informed that his subjects had chosen to withdraw their allegiance from him. A delegation was organised to take the news. The delegates were the Bishops of Ely, Hereford and London, and around 30 laymen. Among the latter, the Earl of Surrey represented the lords and Trussell represented the shire knights. The group was intended to be as representative of parliament—and so the kingdom—as possible. It was not composed solely of parliamentarians, but there were enough of them in it to appear parliamentarian. Its size also had the added advantage of spreading collective responsibility far more broadly than would have happened in a small group. They left on or shortly after Thursday 15 January and had arrived in Kenilworth by either 21 or 22 January, when William Trussell asked for the King to be brought to them in the name of parliament. Edward, dressed in a black gown and under the Earl of Lancaster's escort, was brought to the great hall.
Geoffrey le Baker's Chronicle describes how the delegates equivocated at first, "adulterating the word of truth" before coming to the point. Edward was offered the choice of resigning in favour of his son, and being provided for according to his rank, or of being deposed. If he refused, it was emphasised, the throne could be offered to someone not of royal blood but politically experienced, a clear reference to Mortimer. The King protested—mildly—and wept, fainting at one point. According to Orleton's later report, Edward claimed he had always followed the guidance of his nobles, but regretted any harm he had done. The deposed king took comfort from his son succeeding him. It seems probable that a memorandum of acknowledgement was drawn up between the delegation and Edward, minuting what was said, although this has not survived. Baker says that at the end of the meeting Edward's Steward, Thomas Blunt, dramatically broke his staff of office in half, and dismissed Edward's household.

The delegation left Kenilworth for London on 22 January: their news preceded them. By the time they reached Westminster, around 25 January, Edward III was already officially referred to as king, and his peace had been proclaimed at St Paul's Cathedral on the 24th. Now the new king could be proclaimed in public; Edward III's reign was thus dated from 25 January 1327. Behind the scenes, though, discussions must have begun on the thorny question of what to do with his predecessor, who still had not had any judgement—legal or parliamentary—passed upon him.

## Subsequent events and aftermath

### Recall of parliament

Edward III's political education was deliberately accelerated by the tutelage of advisors such as William of Pagula and Walter de Milemete. Still a minor, Edward III was crowned at Westminster Abbey on 1 February 1327: executive power remained with Mortimer and Isabella. Mortimer was made Earl of March in October 1328, but otherwise received few grants of land or money. Isabella, on the other hand, gained an annual income of 20,000 marks (£13,333) within the month. She achieved this by requesting the return of her dower, which her husband had confiscated; it was returned to her substantially augmented. Ian Mortimer has described the grant she received as amounting to "one of the largest personal incomes anyone had ever received in English history".

Following Edward's coronation, parliament was recalled. According to precedent, a new parliament should have been summoned with the accession of a new monarch, and this failure of process indicates the novelty of the situation. Official records date the entire parliament to the first regnal year of Edward III rather than the last of his father's, even though it spread over both. When recalled, parliament returned to its usual business, and heard a large number (42) of petitions from the community. These included not only the political—and often lengthy—petitions related directly to the deposition, but also a similar number from the clergy and the City of London. This was the greatest number of petitions to have been submitted by the Commons in the history of parliament. Their requests ranged from confirmation of the acts against the Despensers and those in favour of Thomas of Lancaster, to the reconfirmation of the Magna Carta. There were ecclesiastical petitions, and those from the shires dealt mainly with annulling debts and amercements of both individuals and towns.
There were numerous requests for the King's grace, for example to overturn perceived false judgements in local courts, as well as concerns for law and order in the localities generally. Restoring law and order was a priority of the new regime, as Edward II's reign had foundered on his inability to do so, and his failure had then been used to depose him. The principle behind Edward's deposition was, supposedly, to redress such wrongs as his reign had caused. One petition requested that members of the Commons be authorised to take written confirmation of their petition and its concomitant answer to their localities, while another protested against corrupt local royal officials. This eventually resulted in a proclamation in 1330 instructing individuals who had cause of complaint or need of redress to attend the approaching parliament. The Commons too were concerned for the restoration of law and order, and one of their petitions called for the immediate appointment of wide-ranging keepers of the peace who could personally put men on trial. This request was agreed to by the King's council.

This return to normal parliamentary business demonstrated, it was hoped, both the regime's legitimacy and its ability to repair the injustices of the previous reign. Most of the petitions were accepted—resulting in seventeen statute articles—which indicates how keen Isabella and Mortimer were to placate the Commons. When parliament finally dissolved on 9 March 1327, it had been the second longest, at seventy-one days, of the century to date; further, notes Dodd, because of this it was "the only assembly in the late medieval period to outlive a king and see in his successor". The dead Earl of Lancaster's titles and estates were restored to his brother Henry, and the 1323 judgement against Mortimer, which exiled him, was overturned. The invaders were also restored to their estates in Ireland. In an attempt at settling the Irish situation, parliament issued ordinances on 23 February pardoning those who had supported Robert Bruce's invasion.

The deposed King was referred to only obliquely in official records—for example, as "Edward his father, when he was king," "Edward, the father of the King who now is" or, as he had been known as a youth, "Edward of Caernarfon". Isabella and Mortimer were careful to try to prevent the deposition from tarnishing their reputations, reflected in their concern not just to obtain Edward II's ex post facto agreement to his removal, but then to publicise that agreement. The problem they faced was that this effectively involved having to rewrite a piece of history in which many people had been actively involved and which had taken place only two weeks earlier.

The City of London also benefited. In 1321, Edward II had disenfranchised London, and royal officials, in the words of a contemporary, had "pris[ed] every privilege and penny out of the city", as well as deposing its mayor: Edward had ruled London himself through a system of wardens. Gwyn Williams described this as "an emergency regime of dubious legality". In 1327 Londoners petitioned the recalled parliament for their liberties to be restored, and, since they had been of valuable—probably crucial—importance in enabling the deposition, on 7 March they received not just the rights Edward II had removed from them, but greater privileges than they had ever possessed.

### Later events

Meanwhile, Edward II was still imprisoned at Kenilworth, and was intended to remain there permanently.
Attempts to free him led to his transfer to the more secure Berkeley Castle in early April 1327. Plotting continued, and he was frequently moved to other places. Eventually returned to Berkeley for good, Edward died there on the night of 21 September. Mark Ormrod described this as "suspiciously timely" for Mortimer, since Edward's almost-certain murder permanently removed a rival and a target for restoration. Parliamentary proceedings were traditionally drawn up contemporaneously and entered onto a parliament roll by clerks. The Roll of 1327 is notable, according to the History of Parliament, because "despite the highly charged political situation in January 1327, [it] contains no mention of the process by which Edward II ceased to be king". The roll only begins with the reassembling of parliament under Edward III in February, after the deposition of his father. It is likely, says Phillips, that since those involved were aware of the precarious legal basis for Edward's deposition—and how it would not bear "too close an examination"—there may never have been an enrolment: "Edward II had been airbrushed from the record". Other possible reasons for the lack of an enrolment are that it was never entered on a roll because the parliament was clearly illegitimate, or that Edward III later felt it was undesirable to have an official record of a royal deposition in case it suggested a precedent had been set, and removed it himself. It was not long before the crisis affected Mortimer's relationship with Edward III. Notwithstanding Edward's coronation, Mortimer was the country's de facto ruler. The high-handed nature of his rule was demonstrated, according to Ian Mortimer, on the day of Edward III's coronation. Not only did he arrange for his three eldest sons to be knighted, but—feeling a knight's ceremonial robes were inadequate—he had them dressed as earls for the occasion. Mortimer himself occupied his energies in getting rich and alienating people, and the defeat of the English army by the Scots at the Battle of Stanhope Park (and the Treaty of Edinburgh–Northampton which followed it in 1328) worsened his position. Maurice Keen describes Mortimer as being no more successful in the war against Scotland than his predecessor had been. Mortimer did little to rectify this situation and continued to show Edward disrespect. Edward, for his part, had originally (and unsurprisingly) sympathised with his mother against his father, but this sympathy did not necessarily extend to Mortimer. Michael Prestwich has described the latter as a "classic example of a man whose power went to his head", and compares Mortimer's greed to that of the Despensers and his political sensitivity to that of Piers Gaveston. Edward had married Philippa of Hainault in 1328, and they had a son in June 1330. Edward decided to remove Mortimer from the government: accompanied and assisted by close companions, Edward launched a coup d'état which took Mortimer by surprise at Nottingham Castle on 19 October 1330. Mortimer was hanged at Tyburn a month later, and Edward III's personal reign began. ## Scholarship The parliament of 1327 is the focus of two main areas of interest for historians: in the long term, the part it played in the development of the English parliament, and in the short term, its place in the deposition of Edward II. On the first point, Gwilym Dodd has described the parliament as a landmark event in the institution's history, and, say Richardson and Sayles, it began a fifty-year period of developing and honing procedure. 
The assembly also, suggests G. L. Harriss, marks a point in the history of the English monarchy at which its authority was curtailed to a degree similar to the limitations previously imposed on King John by the Magna Carta and on Henry III by de Montfort. Maddicott agrees with Richardson and Sayles regarding the significance of 1327 for the development of separate chambers, because it "saw the presentation of the first full set of commons' petitions [and] the first comprehensive statute to derive from such petitions". Maude Clarke described its significance as lying in how "feudal defiance" was for the first time subsumed to the "will of the commonality, and the King was rejected not by his vassals but by his subjects". The second question it raises for scholars is whether Edward II was deposed by parliament, as an institution, or merely while parliament sat. While many of the events necessary for the King's removal had taken place in parliament, others of equal significance (for example, the oath-taking at the Guildhall) occurred elsewhere. Parliament was certainly the public setting for the deposition. Victorian constitutional historians saw Edward's deposition as demonstrating a fledgeling authority in the House of Commons akin to that of their own parliamentary system. Twentieth-century historiography remains divided on the issue. Barry Wilkinson, for example, considered it a deposition—though by the magnates rather than parliament—while G. L. Harriss termed it an abdication, believing "there was no legal process of deposition, and kings like ... Edward II were induced to resign". Edward II's position has been summed up as his being offered "the choice of abdication in favour of his son Edward or forcible deposition in favour of a new king selected by his nobles". Seymour Phillips has argued that it was the "combined determination of the leading magnates, their personal followers and the Londoners" that Edward should be gone. Chris Bryant argues it is not clear whether these events were driven by parliament, or merely happened to occur in parliament, although he suggests Isabella and Roger Mortimer thought it necessary to have parliamentary support. Valente has suggested that "the deposition was not revolutionary and did not attack kingship itself", and that it was not "necessarily illegal and outside the bounds of the 'constitution'", even though historians commonly describe it as such. The discussion is confused further, she says, because varying descriptions are given of the assembly by contemporaries. Some described it as being a royal council, others called it a parliament in the King's absence or a parliament with the Queen presiding, or one summoned by her and Prince Edward. Ultimately, she wrote, it was the magnates who decided on policy, and who were able to do so through the support of the knights and commoners. Dunham and Wood suggested that Edward's deposition was forced by political rather than legal factors. There is also the question of who did the deposing: whether "the magnates alone deposed, that the magnates and people jointly deposed, that Parliament itself deposed, even that it was the 'people' whose voice was decisive". Ian Mortimer has described how "the representatives of the community of the realm would be called upon to act as an authority over and above that of the King". It was no advance for democracy, and was not intended to be—its purpose was to "unite all classes of the realm against the monarch" of the time. 
John Maddicott has said the proceedings began as a baronial coup but ended up becoming something close to a "national plebiscite", in which the commons were part of a radical reform of the state. This parliament also clarified procedures, such as codifying petitioning, legislating for it, and promulgating statutes, which would become the norm. The parliament also illustrates how contemporaries viewed the nature of tyranny. The leaders of the revolution, aware that deposition was a barely understood and unpopular concept in the political culture of the day, began almost immediately re-casting events as an abdication instead. Few contemporaries overtly disagreed with Edward's deposition, "but the fact of deposition itself caused immense anxiety", suggested David Matthews. It was an event as yet unheard of in English history. Phillips comments that "using accusations of tyranny to remove a legitimate and anointed king were too contentious and divisive to be of any practical use", which is why Edward had been accused of incompetence and inadequacy and much else, and not of tyranny. The Brut Chronicle, in fact, goes so far as to ascribe Edward's deposition not to the intentions of men and women, but to the fulfilment of a prophecy by Merlin. Edward's deposition also set a precedent and laid out arguments for subsequent depositions. The 1327 articles of accusation, for example, were drawn on sixty years later during the series of crises between King Richard II and the Lords Appellant. When Richard refused to attend parliament in 1386, Thomas of Woodstock, Duke of Gloucester, and William Courtenay, Archbishop of Canterbury, visited him at Eltham Palace and reminded him how—per "the statute by which Edward [II] had been adjudged"—a King who did not attend parliament was liable to deposition by his lords. Indeed, it has been suggested Richard II may have been responsible for the disappearance of the 1327 parliament roll when he recovered personal power two years later. Given-Wilson says that Richard considered Edward's deposition a "stain which he was determined to remove" from the royal family's history by proposing Edward's canonisation. Richard's subsequent deposition by Henry Bolingbroke in 1399 naturally drew direct parallels with that of Edward. Events which had taken place over 70 years earlier were by 1399 considered "ancient custom", which had set legal precedent, if an ill-defined one. A prominent chronicle of Henry's usurpation, composed by Adam of Usk, has been described as bearing "a striking resemblance" to the events of the 1327 parliament. Indeed, said Gaillard Lapsley, "Adam uses words that strongly suggest that he had this precedent in mind." Edward II's deposition was used as political propaganda as late as the troubled last years of James I in the 1620s. The King was very ill and played a peripheral role in government; his favourite, George Villiers, Duke of Buckingham, became proportionately more powerful. Attorney general Henry Yelverton publicly compared Buckingham to Hugh Despenser on account of Villiers' penchant for enriching his friends and relatives through royal patronage. Curtis Perry has suggested that 17th-century "contemporaries applied the story [of Edward's deposition] to the political turmoil of the 1620s in conflicting ways: some used the parallel to point towards the corrupting influence of favourites and to criticize Buckingham; others drew parallels between the verbal intemperance of Yelverton and his ilk and the unruliness of Edward's opponents". 
The Parliament of 1327 was the last parliament before the Laws in Wales Acts 1535 and 1542 to summon Welsh representatives. They never took their seats, having been deliberately summoned too late to attend, because South Wales supported Edward and North Wales was equally opposed to Mortimer. The list of those summoned to the 1327 parliament also remained almost unchanged for the parliaments of the following five years. ## Cultural depictions Christopher Marlowe was the first to dramatise the life and death of Edward II, with his 1592 play Edward II (or The Troublesome Reign and Lamentable Death of Edward the Second, King of England, with the Tragical Fall of Proud Mortimer). Marlowe emphasises the importance of parliament in Edward's reign, from his original taking of the coronation oath (Act I, scene 1) to his deposition (in Act V, scene 1). ## See also - List of parliaments of England - Parliament of England
485,670
Battle of Savo Island
1,172,096,634
Naval battle of the Pacific Campaign of World War II
[ "1942 in Japan", "1942 in the Solomon Islands", "August 1942 events", "Battles and operations of World War II involving the Solomon Islands", "Conflicts in 1942", "Military history of Japan during World War II", "Naval battles of World War II involving Australia", "Naval battles of World War II involving Japan", "Naval battles of World War II involving the United States", "Night battles", "Pacific Ocean theatre of World War II", "World War II naval operations and battles of the Pacific theatre" ]
The Battle of Savo Island, also known as the First Battle of Savo Island and in Japanese sources as the First Battle of the Solomon Sea (第一次ソロモン海戦, Dai-ichi-ji Soromon Kaisen), and colloquially among Allied Guadalcanal veterans as the Battle of the Five Sitting Ducks, was a naval battle of the Solomon Islands campaign of the Pacific War of World War II between the Imperial Japanese Navy and Allied naval forces. The battle took place on 8–9 August 1942 and was the first major naval engagement of the Guadalcanal campaign and the first of several naval battles in the straits later named Ironbottom Sound, near the island of Guadalcanal. The Imperial Japanese Navy, in response to Allied amphibious landings in the eastern Solomon Islands, mobilized a task force of seven cruisers and one destroyer under the command of Vice Admiral Gunichi Mikawa. The task force sailed from Japanese bases in New Britain and New Ireland down New Georgia Sound (also known as "The Slot") with the intention of interrupting the Allied landings by attacking the supporting amphibious fleet and its screening force. The Allied screen consisted of eight cruisers and fifteen destroyers under Rear Admiral Victor Crutchley, but only five cruisers and seven destroyers were involved in the battle. In a night action, Mikawa thoroughly surprised and routed the Allied force, sinking one Australian and three American cruisers, while suffering only light damage in return. Rear Admiral Samuel J. Cox, director of the Naval History and Heritage Command, considers this battle and the Battle of Tassafaronga to be two of the worst defeats in U.S. naval history, with only the attack on Pearl Harbor being worse. After the initial engagement, Mikawa, fearing Allied carrier strikes against his fleet in daylight, decided to withdraw under cover of night rather than attempt to locate and destroy the Allied invasion transports. The Japanese attacks prompted the remaining Allied warships and the amphibious force to withdraw earlier than planned (before unloading all supplies), temporarily ceding control of the seas around Guadalcanal to the Japanese. This early withdrawal of the fleet left the Allied ground forces (primarily United States Marines), which had landed on Guadalcanal and nearby islands only two days before, in a precarious situation with limited supplies, equipment, and food to hold their beachhead. Mikawa's decision to withdraw rather than press on against the transports was founded primarily on that concern over daylight carrier strikes; in reality, the Allied carrier fleet, similarly fearing Japanese attack, had already withdrawn beyond operational range. This missed opportunity to cripple (rather than interrupt) the supply of Allied forces on Guadalcanal contributed to Japan's failure to recapture the island. At this critical early stage of the campaign, it allowed the Allied forces to entrench and fortify themselves sufficiently to defend the area around Henderson Field until additional Allied reinforcements arrived later in the year. The battle was the first of five costly, large-scale sea and air-sea actions fought in support of the ground battles on Guadalcanal, as the Japanese sought to counter the American offensive in the Pacific. 
These sea battles took place at increasing intervals as each side paused to regroup and refit, until the 30 November 1942 Battle of Tassafaronga—after which the Japanese, eschewing further costly losses, attempted resupplying by submarine and barges. The final naval battle, the Battle of Rennell Island, took place months later on 29–30 January 1943, by which time the Japanese were preparing to evacuate their remaining land forces and withdraw. ## Background ### Operations at Guadalcanal On 7 August 1942 Allied forces (primarily U.S. Marines) landed on Guadalcanal, Tulagi, and Florida Island in the eastern Solomon Islands. The landings were meant to deny the islands' use to the Japanese as bases, especially the nearly completed airfield on Guadalcanal that became Henderson Field. If Japanese air and sea forces were allowed to establish forward operating bases in the eastern Solomons, they would be in a position to threaten the supply shipping routes between the U.S. and Australia. The Allies also wanted to use the islands as launching points for a campaign to recapture the Solomons, isolate or capture the major Japanese base at Rabaul, and support the Allied New Guinea campaign, which was then building strength under General Douglas MacArthur. The landings initiated the six-month-long Guadalcanal campaign. The overall commander of Allied naval forces in the Guadalcanal and Tulagi operation was U.S. Vice Admiral Frank Jack Fletcher. He also commanded the carrier task groups providing air cover. U.S. Rear Admiral Richmond K. Turner commanded the amphibious fleet that delivered the 16,000 Allied troops to Guadalcanal and Tulagi. Also under Turner was Rear Admiral Victor Crutchley's screening force of eight cruisers, fifteen destroyers, and five minesweepers. This force was to protect Turner's ships and provide gunfire support for the landings. Crutchley commanded his force of mostly American ships from his flagship, the Australian heavy cruiser HMAS Australia. The Allied landings took the Japanese by surprise. The Allies secured Tulagi, the nearby islets Gavutu and Tanambogo, and the airfield under construction on Guadalcanal by nightfall on 8 August. On 7–8 August Japanese aircraft based at Rabaul attacked the Allied amphibious forces several times, setting afire the U.S. transport ship George F. Elliott (which sank later) and heavily damaging the destroyer USS Jarvis. In these air attacks, the Japanese lost 36 aircraft, while the U.S. lost 19 aircraft, including 14 carrier-based fighter aircraft. Concerned over the losses to his carrier fighter strength, anxious about the threat to his carriers from further Japanese air attacks, and worried about his ships' fuel levels, Fletcher announced that he would withdraw his carrier task forces on the evening of 8 August. Some historians contend that Fletcher's fuel situation was not at all critical but that Fletcher used it to justify his withdrawal from the battle area. Fletcher's biographer notes that Fletcher concluded that the landing was a success and that no important targets for close air support were at hand. Having lost 21 of his carrier fighters, assessing that his carriers were threatened by torpedo-bomber strikes, and wanting to refuel before Japanese naval forces arrived, he withdrew, as he had previously forewarned Turner and Vandegrift he would. Turner, however, believed that Fletcher understood that he was to provide air cover until all the transports were unloaded on 9 August. 
Even though the unloading was going more slowly than planned, Turner decided that without carrier air cover, he would have to withdraw his ships from Guadalcanal. He planned to unload as much as possible during the night and depart the next day. ### Japanese response The Japanese were unprepared for the Allied operation at Guadalcanal; their initial response included airstrikes and an attempted reinforcement. Mikawa, commander of the newly formed Japanese Eighth Fleet headquartered at Rabaul, loaded 519 naval troops on two transports and sent them towards Guadalcanal on 7 August. When the Japanese learned that Allied forces at Guadalcanal were stronger than originally reported, the transports were recalled. Mikawa also assembled all the available warships in the area to attack the Allied forces at Guadalcanal. At Rabaul were the heavy Takao-class cruiser Chōkai (Mikawa's flagship), the light cruisers Tenryū and Yūbari, and the destroyer Yūnagi. En route from Kavieng were four heavy cruisers of Cruiser Division 6 under Rear Admiral Aritomo Goto: the Aoba-class Aoba and Kinugasa and the Furutaka-class Furutaka and Kako; together, Mikawa's force mounted 34 8-inch main guns. The Japanese Navy had trained extensively in night-fighting tactics before the war, a fact of which the Allies were unaware. Mikawa hoped to engage the Allied naval forces off Guadalcanal and Tulagi on the night of 8–9 August, when he could employ his night-battle expertise while avoiding attacks from Allied aircraft, which could not operate effectively at night. Mikawa's warships rendezvoused at sea near Cape St. George on the evening of 7 August and then headed east-southeast. ## Battle ### Prelude Mikawa decided to take his fleet north of Buka Island and then down the east coast of Bougainville. The fleet paused east of Kieta for six hours on the morning of 8 August to avoid daytime air attacks during its final approach to Guadalcanal. Mikawa proceeded along the dangerous New Georgia Sound (known as "The Slot"), hoping that no Allied plane would see his ships in the fading light. The Japanese fleet was in fact sighted in St George Channel, where the column almost ran into USS S-38, lying in ambush. She was too close to fire torpedoes, but her captain, Lieutenant Commander H. G. Munson, reported the sighting to Allied forces. Once at Bougainville, Mikawa spread his ships out over a wide area to mask the composition of his force and launched four floatplanes from his cruisers to scout for Allied ships in the southern Solomons. At 10:20 and 11:10, his ships were spotted by Royal Australian Air Force (RAAF) Hudson reconnaissance aircraft based at Milne Bay in New Guinea. The Hudson's crew tried to report the sighting to the Allied radio station at Fall River, New Guinea. Receiving no acknowledgment, they returned to Milne Bay at 12:42 to ensure that the report was received as soon as possible. The second Hudson also failed to report its sighting by radio but completed its patrol and landed at Milne Bay at 15:00. For unknown reasons, these reports were not relayed to the Allied fleet off Guadalcanal until 18:45 and 21:30, respectively. U.S. official historian Samuel Morison wrote in his 1949 account that the RAAF Hudson's crew failed to report the sighting until after they had landed and even had tea. This claim made international headlines and was repeated by many subsequent historians. Later research has discredited this version of events, and in 2014, the U.S. 
Navy's Naval History and Heritage Command acknowledged in a letter to the Hudson's radio operator, who had lobbied for decades to clear his crewmates' name, that Morison's criticisms were "unwarranted." Mikawa's floatplanes returned around 12:00 and reported two groups of Allied ships, one off Guadalcanal and the other off Tulagi. By 13:00, he reassembled his warships and headed south through Bougainville Strait at 24 knots (44 km/h). At 13:45, the cruiser force was near Choiseul southeast of Bougainville. At that time, several surviving Japanese aircraft from the noon torpedo raid on the Allied ships off the coast of Guadalcanal flew over the cruisers on the way back to Rabaul and gave them waves of encouragement. Mikawa entered The Slot by 16:00 and began his run towards Guadalcanal. He communicated the following battle plan to his warships: "On the rush-in we will go from S. (south) of Savo Island and torpedo the enemy main force in front of Guadalcanal anchorage; after which we will turn toward the Tulagi forward area to shell and torpedo the enemy. We will then withdraw north of Savo Island." Mikawa's run down The Slot was not detected by Allied forces. Turner had requested that U.S. Admiral John S. McCain Sr., commander of Allied air forces for the South Pacific Area, conduct extra reconnaissance missions over The Slot on the afternoon of 8 August. But for unexplained reasons McCain did not order the missions, nor did he tell Turner that they were not carried out. Thus, Turner mistakenly believed that The Slot was under Allied observation throughout the day. However, the fault was not entirely McCain's, as his patrol craft were few in number and operated over a vast area at the extreme limit of their endurance. Turner also had fifteen scouting planes of the cruiser force available; these were never used that afternoon and remained on the decks of their cruisers, filled with gasoline and posing an explosive hazard. To protect the unloading transports during the night, Crutchley divided the Allied warship forces into three groups. A "southern" group, consisting of the Australian cruisers HMAS Australia and HMAS Canberra, the cruiser USS Chicago, and the destroyers USS Patterson and USS Bagley, patrolled between Lunga Point and Savo Island to block the entrance between Savo Island and Cape Esperance on Guadalcanal. A "northern" group, consisting of the cruisers USS Vincennes, USS Astoria and USS Quincy, and the destroyers USS Helm and USS Wilson, conducted a box-shaped patrol between the Tulagi anchorage and Savo Island to defend the passage between Savo and Florida Islands. An "eastern" group, consisting of the cruisers USS San Juan and HMAS Hobart with the destroyers USS Monssen and USS Buchanan, guarded the eastern entrances to the sound between Florida and Guadalcanal Islands. Crutchley placed two radar-equipped U.S. destroyers to the west of Savo Island to provide early warning for any approaching Japanese ships. The destroyer USS Ralph Talbot patrolled the northern passage and the destroyer USS Blue patrolled the southern passage, with a gap of 12–30 kilometers (7.5–18.6 mi) between their uncoordinated patrol patterns. At this time, the Allies were unaware of all of the limitations of their primitive ship-borne radar, such as the fact that its effectiveness could be greatly degraded by the presence of nearby landmasses. 
Chicago's Captain Bode ordered his ship's radar to be used only intermittently out of concern that it would reveal his position, a decision that conformed with general Navy radar usage guidelines but which may have been incorrect in this specific circumstance. He allowed a single sweep every half hour with the fire control radar, but the timing of the last pre-engagement sweep was too early to detect the approaching Japanese cruisers. Wary of the potential threat from Japanese submarines to the transport ships, Crutchley placed his remaining seven destroyers as close-in protection around the two transport anchorages. The crews of the Allied ships were fatigued after two days of constant alert and action in supporting the landings. Also, the weather was extremely hot and humid, inducing further fatigue and, in Morison's words, "inviting weary sailors to slackness." In response, most of Crutchley's warships went to "Condition II" the night of 8 August, which meant that half the crews were on duty while the other half rested, either in their bunks or near their battle stations. In the evening, Turner called a conference on his command ship off Guadalcanal with Crutchley and Marine commander Major General Alexander A. Vandegrift to discuss the departure of Fletcher's carriers and the resulting withdrawal schedule for the transport ships. At 20:55, Crutchley left the southern group in Australia to attend the conference, leaving Bode in charge of the southern group. Crutchley did not inform the commanders of the other cruiser groups of his absence, contributing further to the dissolution of command arrangements. Bode, awakened from sleep in his cabin, decided not to place his ship in the lead of the southern group of ships, the customary place for the senior ship, and went back to sleep. At the conference, Turner, Crutchley, and Vandegrift discussed the reports of the "seaplane tender" force reported by the Australian Hudson crew earlier that day. They decided that it would not be a threat that night, because seaplane tenders did not normally engage in a surface action. Vandegrift said that he would need to inspect the transport unloading situation at Tulagi before recommending a withdrawal time for the transport ships, and he departed at midnight to conduct the inspection. Crutchley elected not to return with Australia to the southern force but instead stationed his ship just outside the Guadalcanal transport anchorage, without informing the other Allied ship commanders of his intentions or location. As Mikawa's force neared the Guadalcanal area, the Japanese ships launched three floatplanes for one final reconnaissance of the Allied ships, and to provide illumination by dropping flares during the upcoming battle. Although several of the Allied ships heard and/or observed one or more of these floatplanes, starting at 23:45, none of them interpreted the presence of unknown aircraft in the area as an actionable threat, and no one reported the sightings to Crutchley or Turner. Mikawa's force approached in a single 3-kilometer (1.9 mi) column led by Chōkai, with Aoba, Kako, Kinugasa, Furutaka, Tenryū, Yūbari, and Yūnagi following. Sometime between 00:44 and 00:54 on 9 August, lookouts in Mikawa's ships spotted Blue about 9 kilometers (5.6 mi) ahead of the Japanese column. ### Action south of Savo To avoid Blue, Mikawa changed course to pass north of Savo Island. He also ordered his ships to slow to 22 knots (41 km/h) to reduce wakes that might make his ships more visible. 
Mikawa's lookouts spied either Ralph Talbot about 16 kilometers (9.9 mi) away or a small schooner of unknown nationality. The Japanese ships held their course while pointing more than 50 guns at Blue, ready to open fire at the first indication that Blue had sighted them. When Blue was less than 2 kilometers (1.2 mi) away from Mikawa's force, she reversed course, having reached the end of her patrol track, and steamed away, apparently oblivious to the long column of large Japanese ships sailing by her. Seeing that his ships were still undetected, Mikawa turned back to a course south of Savo Island and increased speed, first to 26 knots (48 km/h), and then to 30 knots (56 km/h). At 01:25, Mikawa released his ships to operate independently of his flagship, and at 01:31 he ordered "Every ship attack." At about this time, Yūnagi detached from the Japanese column and reversed direction, perhaps because she lost sight of the other Japanese ships ahead of her, or perhaps because she was ordered to provide a rearguard for Mikawa's force. One minute later, Japanese lookouts sighted a warship to port. This ship was the destroyer USS Jarvis, heavily damaged the day before and departing Guadalcanal independently for repairs in Australia. Whether Jarvis sighted the Japanese ships is unknown, since her radios had been destroyed. Furutaka launched torpedoes at Jarvis, all of which missed. The Japanese ships passed as close to Jarvis as 1,100 meters (1,200 yd), close enough for officers on Tenryū to look down onto the destroyer's decks without seeing any of her crew moving about. If Jarvis was aware of the Japanese ships passing by, she did not respond in any noticeable way. She was torpedoed and sunk the following day by aircraft from Rabaul, with no survivors. After sighting Jarvis, the Japanese lookouts sighted the Allied destroyers and cruisers of the southern force about 12,500 meters (13,700 yd) away, silhouetted by the glow from the burning George F. Elliott. At about 01:38, the Japanese cruisers began launching salvos of torpedoes at the Allied southern force ships. At this same time, lookouts on Chōkai spotted the ships of the Allied northern force at a range of 16 kilometers (9.9 mi). Chōkai turned to face this new threat, and the rest of the Japanese column followed, while still preparing to engage the Allied southern force ships with gunfire. Patterson's crew was alert because the destroyer's captain had taken seriously the earlier daytime sightings of Japanese warships and evening sightings of unknown aircraft. At 01:43, Patterson spotted a ship, probably Kinugasa, 5,000 meters (5,500 yd) dead ahead and immediately sent a warning by radio and signal lamp: "Warning! Warning! Strange ships entering the harbor!" Patterson increased speed to full and fired star shells towards the Japanese column. Her captain ordered a torpedo attack, but his order was not heard over the noise from the destroyer's guns. At about the same moment that Patterson sighted the Japanese ships and went into action, Japanese floatplanes dropped aerial flares directly over Canberra and Chicago. Aboard Canberra, Captain Frank Getting ordered an increase in speed and a reversal of an initial turn to port, which kept Canberra between the Japanese and the Allied transports, and directed her guns to train out and fire at any targets that could be sighted. As Canberra's guns took aim at the Japanese, Chōkai and Furutaka opened fire on her, scoring numerous hits. 
Aoba and Kako joined in with gunfire, and Canberra took up to 24 large-caliber hits. Early hits killed her gunnery officer, mortally wounded Getting, and destroyed both boiler rooms, knocking out power to the entire ship before Canberra could fire any of her guns or communicate a warning to other Allied ships. The cruiser glided to a stop, on fire, with a 5- to 10-degree list to starboard, and unable to fight the fires or pump out flooded compartments because of lack of power. The crew of Chicago, observing the illumination of their ship by air-dropped flares and the sudden turn by Canberra in front of them, came alert and awakened Captain Bode. Bode ordered his 5 in (127 mm) guns to fire star shells towards the Japanese column, but the shells did not function. At 01:47, a torpedo, probably from Kako, hit Chicago's bow, sending a shock wave throughout the ship that damaged the main battery director. A second torpedo hit but failed to explode, and a shell hit the cruiser's mainmast, killing two crewmen. Chicago steamed west for 40 minutes, leaving behind the transports she was assigned to protect. The cruiser fired her secondary batteries at the trailing ships in the Japanese column and may have hit Tenryū, causing slight damage. Bode did not try to assert control over any of the other Allied ships in the southern force, of which he was still technically in command. More significantly, Bode made no attempt to warn any of the other Allied ships or personnel in the Guadalcanal area as his ship headed away from the battle area. Patterson engaged in a gun duel with the Japanese column, receiving a shell hit aft that caused moderate damage and killed 10 crew members. She continued to pursue and fire at the Japanese ships and may have hit Kinugasa, causing moderate damage. Patterson then lost sight of the Japanese column as it headed northeast along the eastern shore of Savo Island. Bagley, whose crew sighted the Japanese shortly after Patterson and Canberra, circled completely around to port before firing torpedoes in the general direction of the rapidly disappearing Japanese column, one or two of which may have hit Canberra. Bagley played no further role in the battle. Yūnagi exchanged non-damaging gunfire with Jarvis before exiting the battle area to the west with the intention of eventually rejoining the Japanese column north and west of Savo Island. At 01:44, as Mikawa's ships headed towards the Allied northern force, Tenryū and Yūbari split from the rest of the Japanese column and took a more westward course. Furutaka, either because of a steering problem or to avoid a possible collision with Canberra, followed Yūbari and Tenryū. Thus, the Allied northern force was about to be enveloped and attacked from two sides. ### Action north of Savo When Mikawa's ships attacked the Allied southern force, the captains of all three U.S. northern force cruisers were asleep, with their ships steaming quietly at 10 knots (19 km/h). Although crewmen on all three ships observed flares or gunfire from the battle south of Savo or else received Patterson's warning of threatening ships entering the area, it took some time for the crews to go from Condition II to full alert. At 01:44, the Japanese cruisers began firing torpedoes at the northern force. At 01:50, they aimed powerful searchlights at the three northern cruisers and opened fire with their guns. Astoria's bridge crew called general quarters upon sighting the flares south of Savo, around 01:49. 
At 01:52, shortly after the Japanese searchlights came on and shells began falling around the ship, Astoria's main gun director crews spotted the Japanese cruisers and opened fire. Astoria's captain, awakened to find his ship in action, rushed to the bridge and ordered a ceasefire, fearful that his ship might be firing on friendly forces. As shells continued to cascade around his ship, the captain ordered firing resumed less than a minute later. Chōkai had found the range, and Astoria was quickly hit by numerous shells and set afire. Between 02:00 and 02:15, Aoba, Kinugasa, and Kako joined Chōkai in pounding Astoria, destroying the cruiser's engine room and bringing the flaming ship to a halt. At 02:16, one of Astoria's remaining operational main gun turrets fired at Kinugasa's searchlight but missed and hit one of Chōkai's forward turrets, putting the turret out of action and causing moderate damage to the ship. Astoria sank at 12:16 after all attempts to save her failed. Quincy had also seen the aircraft flares over the southern ships, received Patterson's warning, and had just sounded general quarters and was coming alert when the searchlights from the Japanese column came on. Quincy's captain gave the order to commence firing, but the gun crews were not ready. Within a few minutes, Quincy was caught in a crossfire between Aoba, Furutaka, and Tenryū, and was hit heavily and set afire. Quincy's captain ordered his cruiser to charge towards the eastern Japanese column, but as she turned to do so Quincy was hit by two torpedoes from Tenryū, causing severe damage. Quincy managed to fire a few main gun salvos, one of which hit Chōkai's chart room 6 meters (20 ft) from Admiral Mikawa and killed or wounded 36 men, although Mikawa was not injured. At 02:10, incoming shells killed or wounded almost all of Quincy's bridge crew, including the captain. At 02:16, the cruiser was hit by a torpedo from Aoba, and the ship's remaining guns were silenced. Quincy's assistant gunnery officer, sent to the bridge to ask for instructions, reported on what he found: > When I reached the bridge level, I found it a shambles of dead bodies with only three or four people still standing. In the Pilot House itself the only person standing was the signalman at the wheel who was vainly endeavoring to check the ship's swing to starboard to bring her to port. On questioning him I found out that the Captain, who at that time was laying [sic] near the wheel, had instructed him to beach the ship and he was trying to head for Savo Island, distant some four miles (6 km) on the port quarter. I stepped to the port side of the Pilot House, and looked out to find the island and noted that the ship was heeling rapidly to port, sinking by the bow. At that instant the Captain straightened up and fell back, apparently dead, without having uttered any sound other than a moan. Quincy sank, bow first, at 02:38. Like Quincy and Astoria, Vincennes also sighted the aerial flares to the south, and furthermore, actually sighted gunfire from the southern engagement. At 01:50, when the U.S. cruisers were illuminated by the Japanese searchlights, Vincennes hesitated to open fire, believing that the searchlight's source might be friendly ships. Kako opened fire on Vincennes which responded with her own gunfire at 01:53. As Vincennes began to receive damaging shell hits, her commander, Captain Frederick L. Riefkohl, ordered an increase of speed to 25 knots (46 km/h), but at 01:55 two torpedoes from Chōkai hit, causing heavy damage. 
Kinugasa joined Kako in pounding Vincennes. Vincennes scored one hit on Kinugasa, causing moderate damage to her steering engines. The rest of the Japanese ships also fired and hit Vincennes up to 74 times, and at 02:03 another torpedo hit her, this time from Yūbari. With all boiler rooms destroyed, Vincennes came to a halt, burning "everywhere" and listing to port. At 02:16, Riefkohl ordered the crew to abandon ship, and Vincennes sank at 02:50. During the engagement, the U.S. destroyers Helm and Wilson struggled to see the Japanese ships. Both destroyers briefly fired at Mikawa's cruisers but caused no damage and received no damage to themselves. At 02:16, the Japanese columns ceased fire on the northern Allied force as they moved out of range around the north side of Savo Island. Ralph Talbot encountered Furutaka, Tenryū, and Yūbari as they cleared Savo Island. The Japanese ships fixed Ralph Talbot with searchlights and hit her several times with gunfire, causing heavy damage, but Ralph Talbot escaped into a nearby rain squall, and the Japanese ships left her behind. ### Mikawa's decision At 02:16 Mikawa conferred with his staff about whether they should turn to continue the battle with the surviving Allied warships and try to sink the Allied transports in the two anchorages. Several factors influenced his ultimate decision. His ships were scattered and would take some time to regroup. They would also need to reload their torpedo tubes, a slow and labor-intensive task. Mikawa also did not know the number and locations of any remaining Allied warships, and his ships had expended much of their ammunition. More importantly, Mikawa had no air cover and believed that U.S. aircraft carriers were in the area. Mikawa was probably aware that the Japanese Navy had no more heavy cruisers in production and thus would be unable to replace any he might lose to air attack the next day if he remained near Guadalcanal. He was unaware that the U.S. carriers had withdrawn from the battle area and would not be a threat the next day. Although several of Mikawa's staff urged an attack on the Allied transports, the consensus was to withdraw from the battle area. Therefore, at 02:20, Mikawa ordered his ships to retire. ## Aftermath ### Allied At 04:00 on 9 August, Patterson came alongside Canberra to assist the cruiser in fighting her fires. By 05:00, it appeared that the fires were almost under control, but Turner, who at this time intended to withdraw all Allied ships by 06:30, ordered the ship to be scuttled if she was not able to accompany the fleet. After the survivors were removed, the destroyers USS Selfridge and USS Ellet sank Canberra, which absorbed some 300 shells and five torpedoes. Later in the morning, Vandegrift advised Turner that he needed more supplies unloaded from the transports before they withdrew. Therefore, Turner postponed the withdrawal of his ships until mid-afternoon. In the meantime, Astoria's crew tried to save their sinking ship. Astoria's fires eventually burned out of control, and the ship sank at 12:15. On the morning of 9 August, an Australian coastwatcher on Bougainville radioed a warning of a Japanese airstrike on the way from Rabaul. The Allied transport crews ceased unloading for a time but were puzzled when the airstrike did not materialize. Allied forces did not discover until after the war was over that this Japanese airstrike instead concentrated on Jarvis south of Guadalcanal, sinking her with all hands. 
The Allied transports and warships all departed the Guadalcanal area by nightfall on 9 August. During the naval surface battle of Savo Island, three U.S. heavy cruisers, Astoria (219 killed), Quincy (370 killed), and Vincennes (322 killed), and one Australian heavy cruiser, HMAS Canberra (84 killed), were sunk or scuttled. The commanding officers of Canberra and Quincy were also killed in action. Chicago spent the next six months in drydock, returned to Guadalcanal in late January 1943, and was sunk in the campaign's last engagement, the Battle of Rennell Island. ### Japanese In the late evening of 9 August, Mikawa on Chōkai released the four cruisers of Cruiser Division 6 to return to their home base at Kavieng. At 08:10 on 10 August, Kako was torpedoed and sunk by the submarine USS S-44 110 kilometers (68 mi) from her destination. The other three Japanese cruisers picked up all but 71 of her crew and went on to Kavieng. Admiral Isoroku Yamamoto signaled a congratulatory note to Mikawa on his victory, stating, "Appreciate the courageous and hard fighting of every man of your organization. I expect you to expand your exploits and you will make every effort to support the land forces of the Imperial army which are now engaged in a desperate struggle." Later, however, when it became apparent that Mikawa had missed an opportunity to destroy the Allied transports, he was intensely criticised by his comrades. ## Tactical result From the time of the battle until several months later, almost all Allied supplies and reinforcements sent to Guadalcanal came by transports in small convoys, mainly during daylight hours, while Allied aircraft from the New Hebrides and Henderson Field and any available aircraft carriers flew covering missions. During this time, Allied forces on Guadalcanal received barely enough ammunition and provisions to withstand the several Japanese drives to retake the islands. Despite their defeat in this battle, the Allies eventually won the battle for Guadalcanal, an important step in the defeat of Japan. In hindsight, according to Richard B. Frank, if Mikawa had elected to risk his ships to go after the Allied transports on the morning of 9 August, he could have improved the chances of Japanese victory in the Guadalcanal campaign at its inception, and the course of the war in the southern Pacific could have gone much differently. Although the Allied warships at Guadalcanal that night were completely routed, the transports were unaffected. Many of these same transports were used repeatedly to bring crucial supplies and reinforcements to Allied forces on Guadalcanal over the succeeding months. Mikawa's decision not to destroy the Allied transport ships when he had the opportunity proved to be a crucial strategic mistake for the Japanese. ## U.S. Navy board of inquiry A formal United States Navy board of inquiry, known as the Hepburn Investigation, prepared a report of the battle. The board interviewed most of the major Allied officers involved over several months, beginning in December 1942. The report recommended official censure for Captain Howard D. Bode of the Chicago for failing to broadcast a warning to the fleet of encroaching enemy ships. The report stopped short of recommending formal action against other Allied officers, including Admirals Fletcher, Turner, McCain, and Crutchley, and Captain Riefkohl. The careers of Turner, Crutchley, and McCain do not appear to have been affected by the defeat or the mistakes they made in contributing to it. 
Riefkohl never commanded ships again. Bode, upon learning that the report was going to be especially critical of his actions, shot himself in his quarters at Balboa, Panama Canal Zone, on 19 April 1943 and died the next day. Crutchley was later gazetted with the Legion of Merit (Chief Commander). Admiral Turner assessed why his forces were so soundly defeated in the battle: > "The Navy was still obsessed with a strong feeling of technical and mental superiority over the enemy. In spite of ample evidence as to enemy capabilities, most of our officers and men despised the enemy and felt themselves sure victors in all encounters under any circumstances. The net result of all this was a fatal lethargy of mind which induced a confidence without readiness, and a routine acceptance of outworn peacetime standards of conduct. I believe that this psychological factor, as a cause of our defeat, was even more important than the element of surprise." Historian Frank adds that "This lethargy of mind would not be completely shaken off without some more hard blows to (U.S.) Navy pride around Guadalcanal, but after Savo, the United States picked itself up off the deck and prepared for the most savage combat in its history." The report of the inquiry caused the U.S. Navy to make many operational and structural changes. All the earlier models of U.S. Navy cruisers were retrofitted with emergency diesel-electric generators. The fire mains of the ships were changed to a vertical loop design that could be broken many times and still function. During the battle, many ship fires were attributed to aviation facilities filled with gas, oil, and planes. Motorboats were filled with gasoline and also caught fire. In some cases, these facilities were dead amidships, presenting a perfect target for enemy ships at night. Ready-service lockers (lockers containing ammunition that is armed and ready for use) added to the destruction, and it was noted that the lockers were never close to being depleted, i.e., they contained much more dangerous ammunition than they needed to. A focus was put on removing or minimizing flammable amidship materials. Admiral Ernest J. King, the commander in chief of the United States Fleet, ordered sweeping changes to be made before ships entered surface combat in the future. ## See also - The Second Battle of Savo Island (a.k.a. the Battle of Cape Esperance) - The Third Battle of Savo Island (a.k.a. the Naval Battle of Guadalcanal) - The Fourth Battle of Savo Island (a.k.a. the Battle of Tassafaronga)
14,726,482
2005 Sugar Bowl
1,171,380,483
null
[ "2004–05 NCAA football bowl games", "2005 in sports in Louisiana", "21st century in New Orleans", "Auburn Tigers football bowl games", "January 2005 sports events in the United States", "Sugar Bowl", "Virginia Tech Hokies football bowl games" ]
The 2005 Sugar Bowl was a postseason American college football bowl game between the Virginia Tech Hokies and the Auburn Tigers at the Louisiana Superdome in New Orleans, Louisiana, on January 3, 2005. It was the 71st edition of the annual Sugar Bowl football contest. Virginia Tech represented the Atlantic Coast Conference (ACC) in the contest, while Auburn represented the Southeastern Conference (SEC). In a defensive struggle, Auburn earned a 16–13 victory despite a late-game rally by Virginia Tech. Virginia Tech was selected as a participant in the game after winning the ACC football championship during the team's first year in the conference. Tech, which finished 10–2 in the regular season prior to the Sugar Bowl, defeated 16th-ranked Virginia and ninth-ranked Miami en route to the game. Auburn finished the regular season undefeated at 12–0. The Tigers defeated fourth-ranked LSU and fifth-ranked Georgia during the course of the season, and were one of five teams to finish the regular season undefeated; the others were Southern California, Oklahoma, Utah, and Boise State, with USC and Oklahoma being selected to play in the Bowl Championship Series national championship game. Auburn, by virtue of its lower ranking in the BCS poll, was left out of the national championship game and was selected to play in the Sugar Bowl. Pre-game media coverage focused on Auburn's exclusion from the national championship game, a point of controversy for Auburn fans in the weeks leading up to the contest. Much was made of that and of the success of Auburn running backs Carnell Williams and Ronnie Brown, each of whom was considered among the best at his position. On the Virginia Tech side, senior quarterback Bryan Randall had a record-breaking season. Both teams also had high-ranked defenses, and Tech's appearance in the 2000 Sugar Bowl also was mentioned in the run-up to the game. The 2005 Sugar Bowl kicked off on January 3, 2005, at 8:00 p.m. EST. Early in the first quarter, the Tigers took a 3–0 lead. Following an interception by the Auburn defense, the Tigers extended their lead to 6–0. In the second quarter, another field goal added three more points, and Auburn led, 9–0, at halftime. Auburn opened the second half with its only touchdown drive of the game, giving Auburn a 16–0 lead, which it held into the fourth quarter. In that quarter, Tech scored its first touchdown of the game but did not convert the two-point try, making the score 16–6. Late in the quarter, Tech quarterback Bryan Randall cut Auburn's lead to 16–13 with an 80-yard touchdown pass. With almost no time remaining in the game, Virginia Tech attempted an onside kick to have another chance on offense. When Auburn recovered the kick, the Tigers ran out the clock and secured the win. In recognition of his game-winning performance, Auburn quarterback Jason Campbell was named the game's most valuable player. Despite Auburn's victory and undefeated season, the Tigers were not named national champions. That honor went to the University of Southern California, which defeated Oklahoma in the 2005 national championship game, 55–19. Three voters in the final Associated Press poll of the season voted Auburn the number one team in the country, but their votes were not enough to deny USC a national championship, as voted by members of the Associated Press and Coaches' polls. Several players from each team were selected in the 2005 NFL Draft and went on to careers in the National Football League. 
## Team selection Virginia Tech and Auburn each earned automatic spots in a BCS bowl game due to their status as conference champions, and were selected by the 2005 Sugar Bowl. Virginia Tech finished the season 10–2 and was named ACC football champion in its first year in the conference. Auburn, meanwhile, finished the season undefeated at 12–0, and was named champion of the SEC. Controversy erupted around Auburn's selection, as the Tigers had been denied a spot in the national championship game in favor of two other undefeated teams: the University of Southern California (USC) and Oklahoma. ### Virginia Tech The Virginia Tech Hokies entered the 2004 college football season having gone 8–5 in 2003, culminating with a 52–49 loss to California in the 2003 Insight Bowl. The 2003 season had also been Virginia Tech's final year in the Big East Conference, and Tech began the new season in the Atlantic Coast Conference. Tech started the season unranked for the first time since 1998, and was picked to finish sixth (out of 11 ACC teams) in the annual ACC preseason poll, held in July. The Hokies' first game as members of their new conference was a non-conference contest against the top-ranked USC Trojans at FedEx Field in Landover, Maryland. Tech lost, 24–13, but recovered to win its next game—against lightly regarded Western Michigan—in blowout fashion, 63–0. In their first ACC conference game, the Hokies beat Duke, 47–17, to improve to a 2–1 record. Their first ACC win was followed by their first conference loss, however, as the Hokies fell the next week to North Carolina State, 17–16, when Tech kicker Brandon Pace missed a last-second field goal. Following the loss, Virginia Tech was 2–2 on the season, and faced the potential of being ineligible for a postseason bowl game if it did not improve its winning percentage. The Hokies won their next eight games, finishing the season with a 10–2 record. With late-season wins over 16th-ranked Virginia, its perennial rival, and ninth-ranked Miami, a fellow ACC newcomer, Virginia Tech clinched the ACC football championship (the last year in which it would be decided without a conference championship game) and a bid to a Bowl Championship Series game. Because the ACC's normal bowl destination, the Orange Bowl, was hosting the national championship game, Virginia Tech was selected to attend the Sugar Bowl in New Orleans, Louisiana, instead. ### Auburn Auburn, like Virginia Tech, had gone 8–5 during the 2003 college football season, and entered the 2004 season with high expectations. The Tigers were using a new offensive scheme—the West Coast offense—and boasted two highly rated running backs on offense. In its first game of the 2004 season, the 18th-ranked Auburn football team overwhelmed the University of Louisiana-Monroe, 31–0. It was Auburn's first shutout since 2002. One week later, the Tigers backed up their good start with an emphatic 43–14 victory over Southeastern Conference foe Mississippi State University. In the third week of the season, Auburn faced its first challenge of the young season, against the fourth-ranked Louisiana State Tigers. In a hard-fought defensive struggle, Auburn won, 10–9, when a missed extra point was replayed after a penalty. After an easy 33–3 victory over The Citadel, Auburn faced eighth-ranked Tennessee. The Tigers' defense forced six turnovers en route to a 34–10 victory. Following the victory over Tennessee, Auburn reeled off another four victories and became a prominent candidate for inclusion in the national championship game. 
In the 11th week of the season, Auburn faced the fifth-ranked Georgia Bulldogs. After a defensive effort that held Georgia scoreless until late in the fourth quarter, the third-ranked Tigers won a 24–6 victory. After defeating Alabama in their final regular-season game, Auburn entered the SEC championship game undefeated and in third place nationally. Although the Tigers defeated the Tennessee Volunteers, 38–28, in the conference championship game, Auburn remained in third place because both USC and Oklahoma also remained undefeated. With USC and Oklahoma selected to play in the national championship game, Auburn was forced into the Sugar Bowl. With the winner of the BCS Championship Game guaranteed first place in the Coaches Poll, Auburn fans held hopes that the combination of an overwhelming Tigers victory in the Sugar Bowl and an unconvincing Oklahoma win over USC would cause enough voters in the AP Poll to put Auburn ahead of Oklahoma in their final poll. The result would have been a split national championship similar to what had occurred the previous season. ## Pregame buildup In the weeks leading up to the game, media coverage focused on Auburn's exclusion from the national championship game, a controversial point for Auburn fans and other observers. In addition, both teams boasted high-ranked defenses that had performed well during the year. Much was made of that fact and of the success of Auburn running backs Carnell "Cadillac" Williams and Ronnie Brown, each of whom was considered among the best players at his position. On the Virginia Tech side, senior quarterback Bryan Randall performed well for the Hokies during the regular season and was predicted to continue his success in the Sugar Bowl. ### Rankings controversy Shortly after the final pre-bowl Bowl Championship Series standings were released on December 4, Auburn was among several teams disgruntled with the system. One of these was California, whose only loss was to top-ranked USC but which was denied a bid to the prestigious Rose Bowl after Texas vaulted past it in the rankings despite having the same record. The Golden Bears were forced to attend the less-attractive Holiday Bowl instead. The Auburn Tigers, meanwhile, had completed their first 12-win regular season and won their first conference championship in 15 years, but in the final BCS rankings, Auburn was third, behind USC and Oklahoma. It was the first time since the creation of the BCS in 1998 that three major-conference college football teams were undefeated at the conclusion of the regular season. Some pundits and fans attributed Auburn's failure to reach the championship game to the fact that the Tigers had started the season with a lower ranking. The Tigers had been ranked 17th at the beginning of the season, while USC had been ranked first and Oklahoma second, the same spots they occupied at the end of the regular season. Sportswriters also pointed to the Tigers' tougher conference schedule when compared to those of USC and Oklahoma. SEC commissioner Mike Slive remarked, "If Auburn goes through this league undefeated, they deserve to play for the national championship." Virginia Tech head coach Frank Beamer, in the runup to the game, seemingly agreed with the assessment, saying, "We started out playing Southern Cal and I believe this Auburn team is better." 
Some writers also pointed to USC's five-point win over rival UCLA—in which the Trojans struggled—as an indication that the Tigers could be the better team. In the end, such arguments were unable to sway voters, who ranked USC first, Oklahoma second, and Auburn third in all of the major polls decided by human voters. The Utah Utes, who were also undefeated at the conclusion of the regular season, received limited attention because they belonged to a non-BCS conference. Due to the controversy surrounding Auburn's failure to be given a chance to play for the national championship and controversies involving teams lobbying for improved ratings in the poll, the Associated Press sent a cease-and-desist order to BCS officials, forbidding the use of the AP Poll in calculating BCS ratings.

### Auburn offense

Auburn head coach Tommy Tuberville was named the Associated Press Coach of the Year on December 24, due in large part to his success in using Auburn's new West Coast offense to drive the Tigers to an undefeated 12–0 regular season. In recognition of that success, Auburn administrators agreed to a seven-year, \$16 million contract extension with Tuberville prior to the Sugar Bowl. Tuberville planned an offense that finished the regular season averaging 33.4 points and 430.8 yards of total offense per game. Heading Auburn's offense on the field was quarterback Jason Campbell. Campbell finished the regular season with 2,511 passing yards and 19 touchdowns, one short of tying the Auburn record for most touchdowns in a single season. Campbell was second in the Southeastern Conference in passing yards per game (209.2), and was a first-team All-SEC selection. Auburn's rushing offense was led by two highly regarded running backs: Carnell Williams and Ronnie Brown. The two men, combined with quarterback Campbell, had run for 15,739 yards and 129 career touchdowns prior to the Sugar Bowl. Williams led the SEC in all-purpose yardage (137.2 yards per game) and average yards per punt return (11.7). He finished the regular season with 1,104 rushing yards and 13 touchdowns. The touchdown mark was the most recorded by a running back in the SEC that year. For his accomplishments, Williams was named a first-team All-SEC pick. Despite missing most of his first two seasons due to injuries, he ranked second on Auburn's all-time rushing list with 3,770 yards—behind only NFL and MLB star Bo Jackson. Williams also had the most rushing touchdowns in Auburn history (45) and was the leading scorer in school history (276 points). Brown accumulated 845 rushing yards and caught 34 passes for 314 yards during the season prior to the Sugar Bowl. He finished with eight touchdowns and was named a second-team All-SEC pick. His 34 receptions were 10 more than he had earned in his first three seasons combined. Despite the attention focused on Auburn's two star running backs, the team also boasted a capable corps of wide receivers. Prior to the game, Auburn receiver Ben Obomanu said, "When you have your running game making big plays and the defense has to load the box (defensive line) to make plays and try to stop the running game, that opens up things in the passing game." Auburn averaged more yards passing (241.4 per game) than running (189.4 per game).

### Virginia Tech offense

Heading into the Sugar Bowl, the Virginia Tech offense was led by quarterback Bryan Randall, who completed 149 of 268 passes (55.6 percent) for 1,965 yards, 19 touchdowns, and seven interceptions.
He also rushed for 466 yards and held Tech career records for total offense and passing yards. His 37 consecutive starts also are a school mark for a quarterback. In the preseason, Randall competed for the first-string quarterback spot with Marcus Vick until the latter was suspended from Tech for a semester after a criminal conviction. In the weeks leading up to the Sugar Bowl, Randall was named the Virginia Division I Offensive Player of the Year by the Roanoke Times and was named the ACC Player of the Year. Tech's rushing offense featured two running backs who shared time on the field: Mike Imoh and Cedric Humes. During the regular season, Imoh rushed the ball 152 times for 704 yards, an average of 4.6 yards per carry. He scored four touchdowns and set a school record for rushing yards in a game when he ran for 243 yards in Virginia Tech's game against North Carolina. Humes was on the field slightly less than Imoh, but earned 595 yards and five touchdowns on 124 carries. Tech offensive tackle Jimmy Martin was expected to play in the game after recovering from a high ankle sprain. On special teams, Tech's Jim Davis blocked three field goals during the regular season, and teammate Darryl Tapp blocked a punt. Tech's success on special teams was at least partially due to head coach Frank Beamer's emphasis on that aspect of the game, a strategy known as "Beamerball." Due to Tech's acumen on special teams, Auburn was forced to spend extra time in preparing its special teams to face Virginia Tech in the Sugar Bowl. The Sugar Bowl was a homecoming for Tech punter Vinnie Burns, who played high school football 15 miles (24 km) from the Louisiana Superdome, site of the Sugar Bowl. In addition, Burns' father, Ronnie Burns, was a longtime Sugar Bowl committee member, and Vinnie committed to attend Virginia Tech while the Hokies were in New Orleans to play in the 2000 Sugar Bowl, that year's national championship game. ### Auburn defense Before the Sugar Bowl, Auburn had the top-ranked scoring defense in the country (allowing 11.2 points per game), the fifth-ranked total defense (allowing 269.5 total yards per game), eighth in passing defense (allowing 163 yards passing per game), and 16th in rushing defense (allowing 106.5 yards rushing per game). Cornerback Carlos Rogers was one of the key players on the defensive squad. Rogers, who won the Jim Thorpe Award—given annually to the best defensive back in the country—earned consensus All-America honors and was a finalist for the Bronco Nagurski Award and a semifinalist for the Chuck Bednarik Award, each given to the best defensive college football player in the United States. Linebacker Travis Williams had the most tackles on the team during the regular season, finishing with 76. He also tied for third on the team in tackles for loss (nine), had two interceptions, two sacks, and was named a second-team All-SEC selection. Senior safety Junior Rosegreen, freshman end Stanley McClover and junior nose guard Tommy Jackson were first-team all-SEC picks, signifying they were the best players at their position in the conference. Rosegreen had five interceptions during the regular season, including four in Auburn's game against Tennessee. That single-game performance tied the SEC record and set the Auburn record for the most interceptions in one game. Jackson finished the regular season with 49 tackles, six tackles for loss, and one sack. 
Linebacker Antarrious Williams was scheduled to miss the game after undergoing surgery to repair a dislocated bone suffered in the Tigers' game against Georgia. Williams had 44 tackles during the regular season, and had been replaced by Derrick Graves in the SEC championship game. Graves was expected to do so again in the Sugar Bowl. ### Virginia Tech defense At the conclusion of the regular season, Virginia Tech's defense was ranked third nationally in scoring defense (12.6 points allowed per game), fourth in total defense (269.5 total yards allowed per game) and fifth in pass defense (149.8 passing yards allowed per game). The Tech defense featured two highly regarded cornerbacks, Jimmy Williams and Eric Green, who finished the regular season with 50 tackles and 31 tackles, respectively. Williams also had four interceptions (the most on the team), including one returned for a touchdown, and was named first-team All-ACC. Green, meanwhile, had one interception. Auburn wide receiver Courtney Taylor praised the two players highly in an interview before the game, saying, "Those cornerbacks are amazing to me every time I look at them. I think, 'God, those guys are very athletic.' We're going to have our hands full." Linebacker Mikal Baaqee was first on the team in tackles, recording 63 during the regular season. Fellow linebacker Vince Hall ranked second, with 62. On the defensive line, defensive tackle Jonathan Lewis was considered a key player. Though limited by a cast protecting a broken pinky finger suffered during Virginia Tech's game against Virginia, Lewis was expected to continue to perform well. Heading into the Sugar Bowl, Lewis had 38 tackles, including 10 tackles for loss and four sacks. Also on the defensive line was Darryl Tapp, who led the team in sacks, tackles for loss, and quarterback hurries. Tapp earned first-team All-ACC honors and had 55 tackles and one interception during the regular season. ## Game summary The 2005 Sugar Bowl kicked off at 8:00 p.m. EST on January 3, 2005, in New Orleans, Louisiana. Official attendance was listed as 77,349. Mike Tirico, Tim Brant, Terry Bowden, and Suzy Shuster were the announcers for the television broadcast, which was aired on ABC. About 10 million households watched the game on television in the United States, giving the game a Nielsen rating of 9.5 and making it the 24th most popular Bowl Championship Series game in terms of television ratings. The game was also broadcast on ESPN Radio, and was commentated by Mark Jones, Bob Davie, and Holly Rowe. Spread bettors favored Auburn to win the game by seven points. Pregame entertainment was provided by Bowl Games of America, a group composed of more than 2,000 performing-arts bands, dance teams, and cheer groups from across the United States. Together, they performed the song "God Bless America." The traditional pregame singing of the national anthem was sung by Brad Arnold from the band 3 Doors Down. Dick Honig was the referee, the umpire was Jim Krogstad, and the linesman was Brent Durbin. ### First quarter Following the ceremonial pre-game coin toss, Auburn elected to kick off to Virginia Tech to begin the game, ensuring the Tigers would have possession to begin the second half. Tech began the first drive of the game from its 20-yard line following a touchback. The Hokies initially had success moving the ball, as quarterback Bryan Randall rushed for seven yards on the game's first play, then completed a four-yard pass to wide receiver Eddie Royal two plays later for a first down. 
The Auburn defense recovered, however; the Hokies did not gain another first down and were forced to punt. Auburn took possession and began its first drive of the game from its 26-yard line. On the Tigers' first play, quarterback Jason Campbell threw a long pass to Cooper Wallace for 35 yards. This was followed by another long play as running back Ronnie Brown ran for 31 yards. After the initial shock of the Auburn offense, the Virginia Tech defense firmed up, and Auburn's next three plays were stopped for losses or minimal gains. Facing a fourth down at the Virginia Tech six-yard line, Auburn sent in kicker John Vaughn, who kicked a 23-yard field goal for the game's first points. With 8:35 remaining in the first quarter, Auburn took an early 3–0 lead. Following the post-field goal kickoff, the Virginia Tech offense attempted to answer Auburn's quick score. Unfortunately for the Hokies, their second drive fared even worse than the first. Tech committed a 10-yard penalty, suffered an eight-yard loss on a play, then had a Bryan Randall pass intercepted by Auburn safety Junior Rosegreen. Rosegreen returned the ball 31 yards, putting Auburn's offense into good field position for its second drive of the game. Auburn also committed an early penalty in its drive, but recovered with another long play—a 23-yard pass to Courtney Taylor. Again, however, the Virginia Tech defense stiffened, forcing Auburn into a fourth down and a field goal attempt. Vaughn returned to the field and kicked a 19-yard field goal, giving Auburn a 6–0 lead with 1:06 remaining in the quarter. With time in the quarter running out, Virginia Tech fielded the post-score kickoff and executed a quick series of plays, gaining a first down before the quarter ended. At the end of the first quarter, Auburn held a 6–0 lead.

### Second quarter

Virginia Tech began the second quarter in possession of the ball and driving down the field. Bryan Randall completed a 10-yard pass for another first down, but after Tech failed to gain another, the Hokies were forced to punt. Auburn reciprocated by going three and out and punting the ball back to Virginia Tech. In their first full drive of the second quarter, the Hokies had their best drive of the first half. After a holding penalty nullified a long kickoff return, Tech began at its 24-yard line. Randall completed a nine-yard pass to tight end Jeff King, then ran for another nine yards on a quarterback scramble. He followed the first-down run by completing three consecutive long passes of 16 yards, 13 yards, and 31 yards, respectively. The last pass, to wide receiver Josh Hyman, drove Virginia Tech inside the Auburn two-yard line. There, however, the Tech offense faltered. On three plays, Tech failed to cross the goal line, gaining only one yard in the process. Facing fourth down and needing just one yard for a touchdown, Virginia Tech head coach Frank Beamer elected to go for the touchdown rather than send in his kicker for a field goal attempt. The attempted touchdown pass by Randall fell incomplete, and Virginia Tech turned the ball over on downs without scoring any points. Auburn's offense took over at its one-yard line after Tech's failure to score. Jason Campbell orchestrated a successful drive that took Auburn out of the shadow of its own end zone, completing passes of 16 yards, 15 yards, and 37 yards in the process. Inside the Virginia Tech red zone, however, the Auburn offense again stumbled.
As it had in its two previous scoring drives, Auburn was forced to send in kicker John Vaughn despite being inside the Virginia Tech 10-yard line. Vaughn's 24-yard kick was successful, and with 1:50 remaining in the second quarter, Auburn extended its lead to 9–0. With little time remaining before halftime, Virginia Tech used a hurry-up offense. Randall completed a 23-yard pass to Eddie Royal and ran for 22 yards on his own, but threw three consecutive incomplete passes to end the drive. Tech was forced to punt the ball away, and the first half came to an end. At halftime, Auburn led, 9–0. ### Halftime The halftime show was presented by Bowl Games of America, a collection of dance troupes, marching bands, and cheerleading squads from across the United States. Together, the organizations presented a pirate-themed show based on the character of Jean Lafitte, a noted brigand who lived in New Orleans—site of the game—during the War of 1812. ### Third quarter Because Virginia Tech received the ball to begin the game, Auburn received the ball to begin the second half. The Tigers started the first drive of the second half at their 22-yard line. Carnell Williams and Ronnie Brown alternated carries as Auburn gained 17 yards in their first three plays. Jason Campbell completed a pass for a five-yard loss, then, on the fifth play of the drive, completed a 53-yard pass to Anthony Mix. The pass was the longest play of the game, and drove the Tigers inside the Virginia Tech red zone. Three plays later, Campbell connected with Devin Aromashodu on a five-yard pass for the game's first touchdown. With 10:39 remaining in the third quarter, Auburn had taken a 16–0 lead. After Auburn's kickoff, Virginia Tech started its first drive of the second half at its 20-yard line. Down by 16 points, Tech needed to score. The Hokies gained a quick first down, but a five-yard penalty and a sack of Bryan Randall prevented Tech from gaining another. The Hokies were forced to punt, and Auburn took over at its 44-yard line. Despite having good field position, the Tigers went three and out. Following the punt, Virginia Tech reciprocated by also going three and out. With 3:47 remaining in the quarter, Auburn began an offensive drive from its 35-yard line. From the beginning of the drive, however, the Tigers had problems. The first play of Auburn's drive was a 10-yard penalty against the Tigers. The second resulted in a one-yard loss by Ronnie Brown, who attempted to rush through the middle of the defensive line. On the third play, Virginia Tech cornerback Jimmy Williams intercepted an errant pass by Jason Campbell. Though Williams was unable to advance the ball, the Hokies still took over on offense, and with 2:38 remaining in the quarter, had their best field position since the first half. The first play of the Tech drive resulted in a 12-yard gain as Josh Hyman rushed for 12 yards and a first down on an end-around. Running back Cedric Humes was stopped for a loss on the first play after Hyman's rush, but earned 10 yards on two subsequent rushes, setting up a fourth down. Needing one yard for a first down, behind by 16 points, and with time running down in the quarter, Tech head coach Frank Beamer elected to attempt the first down play rather than kick a field goal. Humes again rushed the ball, and as time ran out in the third quarter, picked up enough ground for the first down. 
With one quarter of play remaining, Auburn led Virginia Tech 16–0, but the Hokies had picked up a first down inside the Auburn 10-yard line to begin the fourth quarter.

### Fourth quarter

Virginia Tech began the fourth quarter in possession of the ball, facing a first down at the Auburn 10-yard line. In three consecutive plays, however, the Hokies picked up a total of only four yards. Needing six yards for a touchdown, Virginia Tech sent in kicker Brandon Pace to attempt a 23-yard field goal. Despite the short distance, Pace missed the kick. With 13:56 remaining in the game, Auburn still held a 16–0 lead. Following the missed field goal, Auburn took over on offense at its six-yard line—the spot of Tech's failed attempt. Ronnie Brown picked up 13 yards and a first down on three rushes. Carnell Williams then picked up three yards, and Jason Campbell threw a seven-yard pass that gave Auburn another first down. A five-yard penalty against Virginia Tech pushed Auburn's offense near midfield, and Ronnie Brown returned to the field, rushing the ball four consecutive times for 16 yards and driving the Tigers into Virginia Tech territory. Facing fourth down with one yard to go, Auburn elected to give the ball to Brown again. On the one-yard run, however, Brown fumbled the ball, which was recovered by Virginia Tech's Mikal Baaqee with 8:38 remaining. Virginia Tech's offense came onto the field desperately needing to score quickly. Though the deficit was only 16 points, and could be made up with two touchdowns and two two-point conversions, the limited time remaining meant the task would be difficult, even if Virginia Tech scored quickly. The Hokies began the drive with a 17-yard pass by quarterback Bryan Randall. Justin Hamilton rushed for five yards, and Randall completed a six-yard pass for another first down. The Tigers aided the Hokies by committing a 15-yard penalty, which put the Hokies inside Auburn territory. Three plays later, Randall capitalized on the opportunity by completing a 29-yard pass to Josh Morgan for a touchdown and the Hokies' first points of the game. Tech attempted a two-point conversion, but the pass attempt fell incomplete. With 6:57 remaining, Virginia Tech trailed 16–6. After receiving the post-touchdown kickoff, Auburn began to run out the clock. The Tigers failed to pick up a first down, and after going three and out, punted the ball back to Virginia Tech. The Hokies started their drive at their two-yard line, and Randall began it successfully by completing a 20-yard pass, rushing for 10 yards, then completing a five-yard pass to bring the Hokies near midfield. On the fourth play of the drive, however, Randall was intercepted by Auburn's Derrick Graves. The Tigers, their offense back on the field, again began running out the clock. Tech attempted to interrupt Auburn's clock management by calling a timeout after each play to stop the clock. Virginia Tech forced Auburn into a three and out, and the Tigers again punted the ball away with 2:13 remaining. After the ball rolled into the end zone for a touchback, Virginia Tech began its final drive at its 20-yard line. On the first play of the drive, Bryan Randall completed an 80-yard touchdown pass to Josh Morgan. The score plus the extra point cut Auburn's lead to 16–13. With time in the game almost exhausted, Virginia Tech was forced to attempt an onside kick in order to have a chance at another offensive drive.
Because the Hokies had used their final timeouts to stop the clock on Auburn's previous drive, Auburn would be free to run out the game's final minutes. Despite Virginia Tech's hopes for a last-second miracle, the Auburn Tigers recovered the kick, allowing them to run out the clock and clinch a 16–13 victory.

## Final statistics

Auburn quarterback Jason Campbell completed 11 of 16 passes for 189 yards, one touchdown, and one interception, and was named the game's most valuable player. Despite the honor, he was neither the leading scorer on his team nor the overall best performer in the game. Auburn kicker John Vaughn was successful on all three of his field goal attempts and also converted his sole extra point attempt of the game, accounting for 10 Auburn points. On the opposite side of the ball, Virginia Tech quarterback Bryan Randall completed 21 of his 38 passes for 299 yards, two touchdowns, and two interceptions. Virginia Tech kicker Brandon Pace missed his sole field goal attempt of the game, a 23-yard kick. On the ground, Auburn running back Ronnie Brown led all rushers with 68 yards on 14 carries. Second was Auburn's Carnell Williams, who gained 61 yards on 19 carries. Bryan Randall was the Hokies' leading rusher, accumulating 45 yards on nine carries. Tech's two running backs, Mike Imoh and Cedric Humes, were stymied by the Auburn defense and managed just 26 yards on 12 combined carries. Virginia Tech wide receiver Josh Morgan finished the game with three receptions for 126 yards and two touchdowns, making him the game's most prolific receiver. Auburn's Courtney Taylor was second, with five catches for 87 yards, and Tech's Josh Hyman was third with five catches for 71 yards. For the defense, Virginia Tech cornerback Jimmy Williams was the top performer. Williams had 10 tackles, including 3.5 tackles for loss, and one interception. Tech's Mikal Baaqee had eight tackles and one fumble recovery, making him the game's second-leading tackler. Auburn's Derrick Graves was the Tigers' most prolific tackler, making seven tackles and intercepting an errant Bryan Randall pass. Three players recorded one sack each—two from Virginia Tech and one from Auburn.

## Postgame effects

With the win, Auburn finished the season undefeated, with 13 wins and no losses. Virginia Tech's loss gave it a final record of 10–3. The victory gave Auburn fans hope that if Oklahoma won the 2005 BCS National Championship Game, a split national championship would result. By contract, the winner of the BCS National Championship Game is voted number one in the USA Today Coaches' Poll; the Associated Press poll, in contrast, has no such restriction. Auburn's thin margin of victory over Tech put the prospect of a split national title in doubt, though not out of reach. USC's blowout 55–19 victory over Oklahoma, however, made it likely that USC would be the overwhelming choice for first place. When the final college football polls of the season were released, USC was voted number one by a large margin, though three voters in the Associated Press poll voted Auburn first. More than three years after the game, ESPN sportswriter Ted Miller rated Auburn's exclusion second on his list of victims of the BCS system, just behind USC's being left out of the championship game in 2003. The Tigers lost 18 players to graduation, and several juniors elected to enter the 2005 NFL Draft as well.
During the first round of the draft, Auburn had four players selected: Ronnie Brown (second overall), Carnell Williams (fifth), Carlos Rogers (ninth), and Jason Campbell (25th). Virginia Tech had three players selected in the 2005 draft: Eric Green (75th), Vincent Fuller (108th), and Jon Dunn (217th). Virginia Tech's appearance in the Sugar Bowl helped its recruiting efforts in the state of Virginia, with eight of the state's top recruits (as ranked by the Roanoke Times newspaper) pledging to attend Tech. The visiting fans of Auburn and Virginia Tech injected tens of millions of dollars into the New Orleans economy, despite high food, travel, and lodging costs that forced some fans to cut discretionary spending during their trips.

## See also

- Glossary of American football
2,270,210
Characters of Final Fantasy VIII
1,167,709,091
null
[ "Final Fantasy VIII", "Lists of Final Fantasy characters" ]
Final Fantasy VIII, a 1999 best-selling role-playing video game by Squaresoft, features an elite group of mercenaries called "SeeD", as well as soldiers, rebels, and political leaders of various nations and cities. Thirteen weeks after its release, the title had earned more than US\$50 million in sales, making it the fastest-selling Final Fantasy title at the time. The game has shipped 8.15 million units worldwide as of March 2003. Additionally, Final Fantasy VIII was voted the 22nd-best game of all time by readers of the Japanese magazine Famitsu in 2006. The game's characters were created by Tetsuya Nomura, and are the first in the series to be realistically proportioned in all aspects of the game. This graphical shift, as well as the cast itself, has received generally positive reviews from gaming magazines and websites. The six main playable characters in Final Fantasy VIII are Squall Leonhart, a loner who avoids vulnerability by focusing on his duty; Rinoa Heartilly, an outspoken and passionate young woman who follows her heart; Quistis Trepe, an instructor with a serious yet patient attitude; Zell Dincht, an energetic martial artist with a fondness for hot dogs; Selphie Tilmitt, a cheerful girl who loves trains and flies the airship Ragnarok; and Irvine Kinneas, a marksman and womanizer who uses his charm to mask his insecurities. Temporarily playable characters include Laguna Loire, Kiros Seagill, and Ward Zabac, who appear in "flashback" sequences; SeeD cadet-turned-antagonist Seifer Almasy; and sorceress Edea Kramer. The main antagonist is Ultimecia, a sorceress from the future who wishes to compress time.

## Concept and design

In Final Fantasy games, scenario writer Kazushige Nojima stresses the dynamic of the relationship between the player and the main character; thus, he puts significant thought into how that relationship will develop. With Final Fantasy VII, protagonist Cloud Strife's reserved nature led Nojima to include scenarios in which the player can select Cloud's responses to certain situations and dialogue. With Final Fantasy VIII, which also features a reserved lead protagonist in Squall, Nojima wanted to give players actual insight into what the protagonist is thinking, even while other characters remain uninformed: this led to the inner dialogues Squall has throughout the game. Character designer Tetsuya Nomura, while exchanging e-mails with director Yoshinori Kitase between development of Final Fantasy VII and VIII, suggested that the game should have a "school days" feel. Nojima approved of the idea, as he already had a story in mind in which the main characters were the same age. Thus, they created the concept of military academies, called "Gardens", in which students would train to become "SeeD" mercenaries. Nojima also planned for the two playable parties featured in the game—Squall's present-day group and Laguna Loire's group from twenty years in the past—to contrast strongly with each other. Laguna's party is a close-knit group of battle-hardened friends in their late twenties. On the other hand, Squall's party is young and inexperienced, and Squall himself does not initially understand the value of friendship. Kitase desired to give the game a foreign atmosphere ("foreign" being in relation to Japan), ultimately deciding on a European setting. The first character Nomura designed specifically for Final Fantasy VIII was Squall, whom he initially gave longer hair and a more feminine appearance.
However, Kitase was unsatisfied and asked Nomura to shorten his hair and make him appear more masculine, which led to the design seen in-game. When designing Cloud, Nomura gave him distinctly spiky, bright blonde hair to emphasize his role as that game's protagonist. With Squall, Nomura wanted to try a unique angle to establish his role, giving him the characteristic gunblade scar across the bridge of his nose. A complete history had not yet been conceived, so Nomura left the explanation for Squall's scar to Nojima. Squall's design was finished with a fur lining along the collar of his jacket, included for the purpose of challenging the game's full motion video designers, who were also developing the CGI film Final Fantasy: The Spirits Within at the time. This is but one example of the demands Nomura has consistently placed on the programmers of the series as technology has advanced. Most Final Fantasy games include summons: creatures who are brought into battle to attack enemies or support the party. In Final Fantasy VIII, summons are called "Guardian Forces", or GFs. Nomura felt they should be unique beings, without clothes or other human-like concepts. This was problematic, as he did not want them to "become [like] the actual monsters", so he took great care in their design. Ramuh—an old wizard summon from earlier Final Fantasy games—was replaced; other human-like designs were re-imagined as nude figures or with creature-like elements. Nomura, also the director of the Guardian Force animation sequences, wanted to create a greater impact than the summon cinematics of Final Fantasy VII. Leviathan was created as a test and included in a game demo. After it garnered a positive reaction from players, Nomura decided to create the remaining sequences in a similar fashion. In a Famitsu Weekly interview with Kitase, Nomura, and Yusuke Naora, the team agreed that Final Fantasy VIII reflects Nomura's preferred technique, as opposed to VII, which featured characters that "weren't really his style". The team also decided to use realistically proportioned characters. The higher level of full motion video technology would otherwise have created an inconsistency between the in-game graphics and the higher-definition full motion video graphics. Additionally, Kitase explained that the main logo of the game—Squall and Rinoa embracing—was inspired by the team's efforts to express emotion through body language.

## Creatures and races

The world of Final Fantasy VIII is predominantly occupied by humans. Another prominent race is the "Shumi", a small tribe of creatures with yellow skin and large arms. The tribe lives in an underground village on the Trabian continent. The Shumi frown upon showing off their large hands; NORG, the owner of Balamb Garden, was exiled from the tribe for his ostentation. All Shumi undergo a biological metamorphosis at some point in their lives; a qualified Shumi will become an Elder while another may become a mute "Moomba". Moombas are covered in red fur, which the Shumi attribute to "the passionate ingenuity in their hearts". Additionally, Moombas have appeared in several Final Fantasy spin-offs, including Chocobo World and Chocobo Racing. Chocobos—large galliform birds common throughout the Final Fantasy series—are featured in the game. In this title, Chocobos are generally undomesticated and can be found in various forests throughout the world. Each forest has a minigame where the player must corral baby Chocobos to locate the mother.
If the player catches a bird, a baby Chocobo (a Chicobo) named Boko will follow the player around. Boko has his own game called Chocobo World that can be downloaded from the PlayStation disc onto a PocketStation game unit. Series composer Nobuo Uematsu created two Chocobo themes for Final Fantasy VIII: "Mods de Chocobo" and "Odeka de Chocobo". Final Fantasy VIII also features an array of common real world creatures, such as cats and dogs. The game also includes numerous monsters, many of which have appeared earlier in the series. Popular recurring monsters include Adamantoise, Behemoth, Bomb, Cactuar, Iron Giant, Malboro, and Tonberry. ## Playable characters ### Squall Leonhart Squall Leonhart (スコール・レオンハート, Sukōru Reonhāto) is the main protagonist of Final Fantasy VIII. He is a young student at Balamb Garden who is identifiable by the scar on his face that a fellow student, Seifer, inflicted. He rarely speaks and has the reputation of being a lone wolf. As Squall's story unfolds, he becomes fascinated with and falls in love with Rinoa, despite never outwardly expressing such until the ending. Squall is characterized by forlorn memories of standing out in the rain at the orphanage where he grew up, wondering where "Sis" went. Squall's weapon is a gunblade, a sword that uses components of a revolver to send vibrations through the blade when triggered. His Limit Break is a series of sword strikes called Renzokuken. ### Rinoa Heartilly Rinoa Heartilly (リノア・ハーティリー, Rinoa Hātirī) is the primary female protagonist of Final Fantasy VIII. She is the 17-year-old daughter of General Caraway, a high-ranking officer in the Galbadian army, and Julia Heartilly, a successful pianist and singer. Rinoa is a member of the Forest Owls, a resistance faction seeking to liberate the small nation of Timber from Galbadian occupation. When Squall and his party of SeeD help the resistance movement fight Galbadia, Rinoa decides to stay with them; as a result she ends up falling in love with Squall. She has black hair with brown highlights and dark eyes. Outspoken, spirited, emotional, and honest with her feelings, she speaks her mind without reservation. Because of her ambition, she can often be stubborn. In battle, she uses a weapon called a "Blaster Edge", which consists of an arm holster and a projectile that returns like a boomerang. In her Combine Limit Break, she attacks in unison with her dog, Angelo. When Rinoa gains Sorceress powers, she acquires a second Limit Break, Angel Wing, which increases her spell-casting ability, along with rendering her in a state of "magic" berserk for the remainder of the battle. ### Laguna Loire Laguna Loire (ラグナ・レウァール, Raguna Rewāru) is a man whose past and relation to the main characters are revealed slowly throughout the game. Most of the sequences involving Laguna appear in the form of "dreams" experienced by the primary protagonists. Squall always experiences these dreams from Laguna's point of view, although he does not think too highly of Laguna. Laguna attacks with a Machine gun and his Limit Break is Desperado, which involves a swinging rope, a grenade, and a barrage of bullets. During the dream segments, he is a twenty-seven-year-old soldier in the Galbadian army who travels with his companions, Kiros Seagill and Ward Zabac. He is also an aspiring journalist. During the first two dream segments, Laguna and his team are shown getting lost and visiting the hotel where singer Julia Heartilly, Laguna's romantic interest, performs. 
After a scouting mission at Centra, the three soldiers are separated and Laguna is injured. A young woman named Raine nurses him back to health after he is brought to Winhill. He falls in love with and marries her. However, he is drawn away from his new home when a young girl in their care, Ellone, is kidnapped. Laguna tracks her down in Esthar, where he helps liberate the nation from the despotic rule of Sorceress Adel. The people of Esthar elect Laguna as their president, and Ellone is sent back to Winhill without him. After Raine dies, her child (whom Ward and Kiros imply to be Squall in a conversation aboard the Ragnarok) and Ellone are sent to an orphanage. Laguna is unable to leave his post to visit her and remains president of Esthar to the present day. Ellone and Laguna are reunited in space, and Laguna helps the party prepare for their fight against Ultimecia. The concept of two main characters had been planned since the beginning of the game's development. Nomura tried to create a contrast between Laguna's and Squall's occupations; thus, Laguna became a soldier with a light-hearted charisma, and Squall became a reserved mercenary student. The designers intended Laguna to be more similar to the previous protagonists in the series to complement Squall, who is different from earlier main characters. Laguna is ranked seventh in Electronic Gaming Monthly's list of the top ten video game politicians. Laguna Loire appears in Dissidia 012 Final Fantasy, where he is voiced by Hiroaki Hirata in the Japanese version and Armando Valdes-Kennedy in the English version. He is featured in his youthful Final Fantasy VIII appearance, while his older self and his Galbadian soldier form appear as alternate outfits. His costume of a knight is also available as downloadable content. Laguna was also planned to appear in Kingdom Hearts Birth by Sleep as the head of Mirage Arena.

### Seifer Almasy

Seifer Almasy (サイファー・アルマシー, Saifā Arumashī) is a classmate and rival of Squall who can be controlled by the player only during the Dollet sequence. He reappears as a boss later in the game. He acts as a foil to Squall in many respects, having dated Rinoa before she met Squall, and assuming a leadership position among his friends. Like Squall, Seifer wields a gunblade, which he calls "Hyperion". His Limit Break, Fire Cross, allows him to use an attack called No Mercy. He later uses the more powerful techniques Demon Slice and Bloodfest against the player. Seifer has a short temper and is often depicted as a bully who desires attention. He is also fiercely independent and is often punished for his recklessness. He is the leader of Balamb Garden's disciplinary committee with his friends Fujin and Raijin. After joining Ultimecia, he becomes the leader of the Galbadian army. During the introduction sequence, Seifer cuts Squall across the left side of his face with his gunblade, leaving a scar. Squall retaliates with a backhand slash that leaves Seifer with a mirrored scar. At the following field exam in Dollet, Seifer acts independently from his teammates Squall and Zell, abandoning them; consequently, he fails and is not promoted to SeeD. Spurred by dreams of a brighter future, he defects to Sorceress Edea so that he can become her "knight". From his point of view, Squall and the others are "evil", and he sees himself as a hero. As Seifer is brainwashed by the sorceress, he alienates himself from his friends. Eventually, Fujin and Raijin abandon him, and he is defeated shortly afterward.
Following Edea's defeat, the party confronts Seifer one last time as he now serves Ultimecia, and either they or Gilgamesh defeat him. Seifer escapes, kidnapping Rinoa and bringing her to Adel. At the end of the game, Seifer is seen fishing and having fun with Fujin and Raijin. Nomura had originally intended Seifer not only as Squall's rival, but also as part of the love triangle between him, Squall, and Rinoa. Although this concept was shelved in the final script, Seifer remains Squall's rival and his appearance was designed to contrast with Squall's. They have equivalent but mirrored scars on their faces and their jackets are of opposing colors and lengths. Both characters use gunblades; Squall's gunblade is larger and requires two hands, while Seifer's gunblade is lighter and can be wielded with one hand. A younger version of Seifer makes an appearance in Kingdom Hearts II as a member of the Twilight Town Disciplinary Committee with Fujin and Raijin. Seifer in the virtual Twilight Town is a rival of the main character, Roxas, and at one point mentions that he does not wish to cooperate with destiny. He is voiced by Takehito Koyasu and Will Friedle in the Japanese and English versions, respectively. He is also featured in the rhythm game Theatrhythm Final Fantasy as a sub-character representing Final Fantasy VIII. The book "Converging Traditions in the Digital Moving Image: Architectures of Illusion, Images of Truth" discusses that while Seifer is seen as a show-off and a troublemaker, protagonist Squall Leonhart identifies with him. IGN listed Seifer as the 91st best video game villain, stating that he makes for a great rival due to the similarities between him and Squall. ### Quistis Trepe Quistis Trepe (キスティス・トゥリープ, Kisutisu Turīpu) is an eighteen-year-old instructor at Balamb Garden, where Squall, Zell, and Seifer are students. She uses a chain whip in battle, and her Limit Break, Blue Magic, a common ability found throughout the Final Fantasy games, allows her to imitate monsters' attacks. Early in the game, Quistis is discharged as an instructor because she "[lacks] leadership qualities". Afterwards, she maintains a more informal relationship with the other characters as a fellow member of SeeD. As a child, Quistis stayed at an orphanage with most of the main characters. She then lived with foster parents, with whom she never developed any intimacy, before moving to Balamb Garden at age ten. She became a SeeD at fifteen and an instructor two years later. By this time, she has become very popular, having a number of fans who identify themselves as "Trepies". For her part, Quistis never shows any indication of being aware of their existence. Quistis initially joins Squall to prepare him for his upcoming field exam. She later takes Squall into her confidence and tells him personally about her demotion. Squall rudely tells her to go "talk to a wall", a famous comical line in the game, and not to burden him with her problems. This furthers the player's perception of Squall's awkwardness and anti-social tendencies. When Irvine refreshes the main characters' memories about the orphanage, they remember that Squall's asocial behavior began when Ellone, an older sister figure to Squall, left the orphanage unexpectedly. As a result of these revelations, Quistis recognizes that her feelings for Squall are more sisterly than romantic. Later, she criticizes Squall when he nearly abandons Rinoa, his romantic interest. 
When designing the characters, Nomura had wanted at least one female character to wear a skirt. Quistis was originally supposed to fill this role, but Nomura decided a long skirt worn over pants would look better on her. The role was eventually passed to Selphie. Nomura was surprised when the writers cast her as a teacher, despite her being around the same age as the rest of the group. Quistis also appears in World of Final Fantasy, where she is voiced by Miyuki Sawashiro in Japanese and Kristina Pesic in English.

### Selphie Tilmitt

Selphie Tilmitt (セルフィ・ティルミット, Serufi Tirumitto) is a student at Balamb Garden who recently transferred from Trabia Garden. Selphie first appears when she runs into Squall while late for class; she asks him to show her around because she has recently transferred. During the Dollet exam, Selphie joins Squall's team after Seifer abandons them. She becomes a full-fledged SeeD with Squall and Zell, and the three are assigned to the same team. She participates in many extracurricular activities, such as planning the Garden Festival and running the school's website. Selphie wields nunchaku in battle, and her Limit Break, Slot, allows the player to cast a randomly selected spell numerous times, including certain spells available only through this Limit Break. She also pilots the airship Ragnarok.

### Zell Dincht

Zell Dincht (ゼル・ディン, Zeru Din) is a student at Balamb Garden with Squall and Seifer. Seventeen years old, Zell is a martial artist who fills the role of the unarmed fighter, just as Tifa Lockhart did in the previous game, Final Fantasy VII. Zell attacks with punches and kicks, his weapons being gloves, and his Limit Break, Duel, requires the player to input button combinations on the controller to deal damage. Zell is slightly impulsive and overconfident in his own skill, but is loyal to his friends. Seifer gives him the nickname "chicken-wuss", which infuriates him. He also has a passion for hot dogs; a recurring gag is that they are always sold out by the time he reaches the cafeteria. Zell lived at the same orphanage as many of the other protagonists; this is where Seifer first began to bully him. He was later adopted by the Dincht family in the town of Balamb. His motivation for enrolling at Garden is to live up to the memory of his grandfather, a famous soldier. Zell was designed to look and act like the main character of a shōnen manga (Japanese comic books intended primarily for boys); his neighbors in Balamb describe him as a "'comic-bookish' type of hero". He also thinks of himself as Seifer's rival, despite not being the main character. The inspiration for the tattoo on his face came from an MTV music video that featured a man with a full-body tattoo. Zell's ultimate weapon is named Ehrgeiz, a direct reference to the game of the same name, which was released around the same time as Final Fantasy VIII. Also continuing the similarities to Tifa Lockhart of Final Fantasy VII, Zell has a Limit Break called Dolphin Blow, as does Tifa, and his final Limit Break, My Final Heaven, echoes Tifa's Final Heaven. He is voiced by Noriaki Sugiyama in Japanese.

### Irvine Kinneas

Irvine Kinneas (アーヴァイン・キニアス, Āvain Kiniasu) is a student at Galbadia Garden, one of the three mercenary academies in the game. He is one of the Garden's elite sharpshooters, always carrying his rifle. His Limit Break is Shot, which deals damage and inflicts status effects depending on the type of ammunition.
Irvine is depicted as a cowboy, tall and fair-skinned with long brown hair that he wears pulled back in a ponytail. He also enjoys flirting with the female characters, being known as well for his marksmanship as his charm. He acts like a carefree, but misunderstood loner; however, this is merely a façade to charm women and hide his lack of confidence. When Sorceress Edea becomes the Galbadian ambassador, Balamb and Galbadia Gardens order Squall's team to assassinate her; Irvine is introduced as the sniper for the mission. Moments before the assassination attempt, he explains to Squall that he always chokes under pressure. In spite of his nerves and under intense pressure, he fires an accurate shot, but Edea uses magic to stop the bullet. At Trabia Garden, Irvine reveals that he and most of the other party members had lived in the same orphanage, run by Cid and Edea Kramer. However, the others could not remember this because of their use of Guardian Forces (GF), magical beings who cause severe long-term memory loss as a side effect. Because Irvine had not used a GF until he joined the party, he is able to remember his past. During the game, Irvine gradually draws closer to Selphie, acting on the feeling he has had since living with her at the orphanage. With Irvine, Nomura tried to strike a balance between not overshadowing Squall and not becoming too unattractive. He gave Irvine a handsome appearance, but a casual personality, hoping that this would make him less attractive than Squall. Keeping with this idea, Nomura gave him goggles, but this idea was abandoned in favor of an American cowboy-like appearance to set him apart from other goggle-wearing characters in the Final Fantasy series. He is voiced by Daisuke Hirakawa in Japanese. ### Kiros Seagill Kiros Seagill (キロス・シーゲル, Kirosu Shīgeru) is one of Laguna's comrades in the Galbadian Army. He wields a pair of katar (कटार) or gauntlet-daggers, with which he repeatedly slices his enemies in his Limit Break, Blood Pain. His weapons' name is given as "katal" in the English localization of the game. Following the failed mission in Centra, Kiros is separated from Laguna and Ward. He heals quickly and decides to leave the Galbadian army, but soon finds that life without Laguna lacks excitement. His subsequent search for Laguna brings him to Winhill after nearly a year. When Laguna is forced to leave Winhill to find Ellone, Kiros accompanies him, helping him earn money as an amateur actor to fund the expedition. Kiros remains by Laguna's side throughout his adventures in Esthar, earning a place as Laguna's advisor when he becomes president. Like Ward, Kiros' interactions with Laguna are based on the staff's interactions during development. ### Ward Zabac Ward Zabac (ウォード・ザバック, Wōdo Zabakku) is Laguna's other comrade. An imposing man, he wields a large harpoon in battle; in his Limit Break, Massive Anchor, he uses it to crush his opponents from above. During the incident at Centra, he loses his voice in a battle with Esthar soldiers. After being separated from Laguna and Kiros, he becomes a janitor at the D-District Prison. When Laguna becomes president of Esthar, Ward joins Kiros as an advisor, directing affairs with gestures and ellipses. Laguna and Kiros can understand what he is saying by his reactions. Like Kiros, Ward's interactions with Laguna are based on the staff's interactions during development. 
### Edea Kramer

Edea Kramer (イデア・クレイマー, Idea Kureimā) is initially presented as a power-hungry sorceress who seizes control of Galbadia from President Deling. Her motives are unknown, but SeeD dispatches Squall to assassinate her. The mission fails after Rinoa is taken over by an unknown entity and Edea sends a bolt of ice through Squall's chest. Later, it is revealed that Edea is actually the wife of Headmaster Cid and was known as "Matron" to Squall and the other children who lived at the orphanage. It is eventually explained that Edea was not acting of her own will, but was possessed by a sorceress from the future named Ultimecia. When Ultimecia's control is broken, Edea takes the side of the SeeDs in the struggle and joins Squall's party for a short time. However, she accidentally gives her powers to Rinoa, making her a sorceress. Being a sorceress, Edea attacks with magical bursts of energy, and her Limit Break, Ice Strike, consists of a magically conjured icicle, hurled like a javelin.

## Other characters

### Adel

Adel (アデル, Aderu) is a sorceress from Esthar who initiated the Sorceress War some years before the start of the game. As the ruler of Esthar, she ordered her soldiers to abduct girls, including the young Ellone, in search of a suitable successor for her powers. During the Esthar revolution, Laguna and Dr. Odine devised an artifact to cancel the sorceress power, and placed her in suspended animation in outer space. In the present, after Edea is released from Ultimecia's control, Ultimecia possesses the new sorceress, Rinoa, and commands her to free Adel so that Adel can become Ultimecia's new and more powerful vessel. Once Adel is freed, Rinoa is discarded as a host. However, in order to defeat Ultimecia, Dr. Odine plans for Ultimecia to once again possess Rinoa. Eventually, Squall's party defeats Adel when she tries to absorb Rinoa at the Lunatic Pandora. Adel's powers then transfer to Rinoa, Ultimecia possesses her once more, and, using Ellone's powers, they start "Time Compression", which leads to the final battle.

### Cid Kramer

Cid Kramer (シド・クレイマー, Shido Kureimā) is the headmaster of Balamb Garden. After the failed assassination attempt on Edea, the Garden Master, NORG, attempts to seize power from Cid and reconcile with Edea. This sparks an internal conflict in which the students and personnel side with either Cid or NORG, but Squall and Xu quell the conflict and return Cid to power. Afterward, Cid aggressively confronts NORG, who had started the conflict over financial issues. Cid is the husband of Sorceress Edea, with whom he ran an orphanage and founded the SeeD organization. They are estranged for most of the game, however, because they lead opposing factions until Ultimecia releases her magical possession of Edea. Because most Final Fantasy titles include a character named "Cid", Nomura wanted to design a character distinct from the past Cids in the series. He gave this version of Cid the appearance and personality of an older, benevolent character who would watch over Squall's party and offer them advice and motivation. Nojima decided that this type of good-natured character would work best as the headmaster of Balamb Garden.

### Ellone

Ellone (エルオーネ, Eruōne) is a mysterious girl and the missing "Sis" of Squall's past. She has the ability to send a person's consciousness back in time and into the body of another, so they can experience the actions of that person.
She uses this talent to send Squall's party into Laguna's past adventures, hoping that they will alter the past, but she eventually realizes that her abilities can only view history, not alter it. Ultimecia needs this power to achieve "Time Compression", so she uses Edea and the Galbadian military to find her. Ellone is an important character in the story, linking the relationships among several of the characters and serving as Ultimecia's primary objective. However, Ellone's importance is mostly revealed through the flashbacks and explained gradually. After Ellone's parents were killed by Esthar soldiers under orders of Sorceress Adel, she lived with Raine in the small village of Winhill, where she also developed a close relationship with her adoptive uncle, Laguna. These peaceful times lasted until she was finally captured by Esthar. Laguna then travelled to Esthar to rescue her; at the same time, he participated in Esthar's rebellion to overthrow Adel. After Adel's incarceration in space, with Laguna having to remain in Esthar as president, and following Raine's death, Ellone moved to Cid and Edea's orphanage, where she became an older sister figure to Squall and the other orphans, and she eventually followed Cid to Balamb Garden. Early in the game, Squall's party finds Ellone in the library of Balamb Garden, but the characters do not interact further. It is later explained that the "Guardian Forces" (GF) which the SeeDs use in battle cause memory loss, which is why Squall does not remember Ellone, Edea, or his past at the orphanage.

### Fujin

Fujin (風神, Fūjin) is a young woman with pale skin, short silver hair and an eye patch. She is a member of Balamb Garden's disciplinary committee with Seifer and Raijin; the three of them form a close "posse", even when Seifer leaves Garden. Fujin prefers to speak in terse sentences, often with only a single word, such as "RAGE!" and "LIES!" (in the Japanese version she speaks only in kanji). Near the end of the game, she explains to Squall that she will temporarily break ties with Seifer because of his recent behavior. In battle, Fujin wields a chakram and uses wind-based magic. She shares her name with the Japanese god of wind, Fūjin. Fujin and Raijin were to appear in Final Fantasy VII, but the designers excluded them due to their similarity to the Turks. In Kingdom Hearts II, a younger version of Fujin, named "Fuu" (フウ), appears as a member of Seifer's gang. She is voiced by Rio Natsuki in the Japanese version and by Jillian Bowen in the English version.

### Raijin

Raijin (雷神) is a member of Balamb Garden's disciplinary committee with Seifer and Fujin; the three form a close "posse", as he calls it. He has a habit of ending his sentences with "ya know" (もんよ, mon'yo, in the Japanese version). Like Fujin, he supports Seifer when he betrays SeeD and Garden to side with Edea. Near the end of the game, he stands by Fujin's plea to the party to help save Seifer from himself. In the ending FMV, he celebrates catching a large fish until Fujin kicks him into the water. In battle, Raijin uses thunder-based magic and a bō staff with large weights on either end. He shares his name with the Japanese god of thunder, Raijin. Raijin and Fujin were to appear in Final Fantasy VII, but the designers decided against it due to their similarity to the Turks. In Kingdom Hearts II, a younger version of Raijin, named "Rai" (ライ), appears as a member of Seifer's gang. He is voiced by Kazuya Nakai in the Japanese version, and by Brandon Adams in the English version.
### Ultimecia

Ultimecia (アルティミシア, Arutimishia) is the main antagonist of Final Fantasy VIII. Because she operates through the body of a possessed Edea to gain control of Galbadia, Ultimecia's existence is revealed only after she possesses Rinoa to release Sorceress Adel from her orbital prison and take Adel as a new host. A sorceress from the future, Ultimecia is capable of reaching her consciousness into the distant past via a special "Junction Machine" to possess other sorceresses. She seeks to achieve "Time Compression", which would cause all eras to merge; this would extinguish all life but her own as she becomes an omnipresent goddess. This would give her power on a par with Hyne the Great, who, according to the game's backstory, created the world. In fact, Squall and the heroes do help Ultimecia start Time Compression, but they do so in order to confront her in her own time. After Squall and his party defeat Sorceress Adel, Adel transfers her power to Rinoa; Ultimecia then possesses Rinoa again, Ellone uses her power to send their consciousness to the past, and at that point Ultimecia starts Time Compression. At that moment, the heroes are able to travel to Ultimecia's distant future and defeat her. After the final battle and during an apparent decompression of time, the defeated Ultimecia transfers her powers to Edea at a point in the past. This action essentially triggers the sequence of events that form the game's plot, creating a causal loop. Ultimecia is the villainess representing Final Fantasy VIII in Dissidia: Final Fantasy, Dissidia 012 and Dissidia NT, where she is voiced by Atsuko Tanaka in Japanese and Tasia Valenza in English.

### Minor characters

#### Biggs and Wedge

Biggs and Wedge are members of the Galbadian Army. Biggs is a major and Wedge is a lieutenant. After the main characters defeat the duo at Dollet, they are demoted in rank to lieutenant and private, respectively. The protagonists encounter them again at the D-District Prison. A third meeting at the Lunatic Pandora does not result in conflict; instead, they quit the Galbadian army. They continue the Final Fantasy tradition of including two minor characters with the names "Biggs" and "Wedge".

#### General Fury Caraway

General Fury Caraway is a member of the Galbadian military who advises the main characters on their mission to assassinate Sorceress Edea. When Laguna left Galbadia, Caraway comforted Julia; eventually, they married and had a child, Rinoa. Caraway and Rinoa have a problematic relationship; he attempts to prevent her from participating in the assassination attempt. However, he later arranges her freedom from the D-District Prison.

#### Vinzer Deling

Vinzer Deling is the President of Galbadia. He appoints Sorceress Edea as a supposed "peace ambassador" to resolve Galbadia's political problems with other nations. His body double is defeated by SeeD and the Forest Owls resistance group. Edea kills him during her welcoming ceremony at Deling City and seizes power in Galbadia.

#### Mayor Dobe and Flo

Mayor Dobe is the leader of Fishermans Horizon, a town in the middle of a transoceanic highway between the continents of Galbadia and Esthar. He and his wife, Flo, detest violence and oppose the Garden's presence in their territory. Squall and his party save the Mayor from certain death when the Galbadian army invades the town.

#### Forest Owls

The Forest Owls are a small resistance faction that opposes the Galbadian occupation of Timber, a town in the eastern part of the continent.
A man named Zone is the leader, and Rinoa and Watts are members. Most people in Timber are affiliated with a resistance group, although the Forest Owls are the only active ones. #### Julia Heartilly Julia Heartilly (ジュリア・ハーティリー, Juria Hātirī) is a pianist at a Galbadian hotel frequented by Laguna during his days as a soldier. After being secretly admired by Laguna for some time, Julia introduces herself, as depicted in one of the flashback sequences. Julia reveals to Laguna her dream of writing her own songs and becoming a singer. Laguna is shipped out on new orders the following day, and the ensuing circumstances prevent him from returning. Julia eventually marries Galbadian military officer General Caraway and has a daughter, Rinoa. She also finds success with her song "Eyes on Me", which is also the game's theme song. She is killed in a car accident several years before the start of the game. Julia is the only character in the game with an explicit character theme, named "Julia", which is a piano arrangement of "Eyes on Me". #### Raine Raine (レイン, Rein), later Raine Loire (レイン・レウァール, Rein Rewāru), is Laguna's second love depicted in the flashbacks. She finds him injured at the bottom of a cliff and brings him to her hometown of Winhill to recover. She is irked at first by Laguna's bad habits and reluctance to express himself outright, but the two grow close and marry. After Laguna becomes President of Esthar, his duties thwart his efforts to return to Winhill. Raine dies after giving birth to a child, who, along with Ellone, is taken away to Edea's orphanage. It is strongly implied by Ward and Kiros, as well as by gaming writers and fans, that Squall is their child. #### Martine Martine is the head of Galbadia Garden. His superior, Balamb Garden's master NORG, orders him to use SeeD members to carry out the assassination plot against Sorceress Edea. When Squall and his team travel to Galbadia Garden after fleeing Timber, Martine orders them to carry out the mission. He hopes that using Balamb Garden's SeeDs would deflect responsibility for the plot onto NORG. His actions trigger the conflict within Balamb Garden when Garden Master NORG tries to kill Headmaster Cid to appease Sorceress Edea after the mission fails. Afterward, the Galbadian military seizes Galbadia Garden and Martine flees to the pacifist city of Fishermans Horizon. #### NORG NORG is an exiled Shumi who lent Cid the money to build and develop the Garden and took the position of Garden Master upon its completion. NORG is more concerned with the revenue acquired by SeeD as a mercenary organization than with its noble duty of opposing the Sorceress; he is considered a "black sheep" of the Shumi tribe. After hearing about a failed assassination attempt on Sorceress Edea, NORG begins to distrust Headmaster Cid and tries to seize control of Balamb Garden, causing a conflict between factions loyal to NORG and Cid. Feigning loyalty to the Sorceress, he attempts to kill the SeeDs who carried out the failed assassination. After he is defeated in battle, he enters a cocoon-like state. Shumis from the Shumi village later appear at the site of his defeat, having apparently removed him from his cocoon by cracking it open, and apologize for NORG's behaviour. #### Dr. Odine Dr. Odine is a scientist and magic researcher from Esthar. He discovered the GFs and junctioning and engineered a machine that mimics Ellone's power. Seventeen years before the game, he developed the necessary technology to allow Laguna to entomb Adel.
As a researcher of the Lunatic Pandora, he also helps to prevent it from reaching Tears' Point and initiating a Lunar Cry. Odine also plays a role in the plot to destroy Ultimecia, explaining how to survive time compression. #### Minor SeeD members Several other SeeD members assist Squall's party. Dr. Kadowaki is the Balamb Garden doctor who tends to Squall's wounds after his fight with Seifer in the opening sequence. She also helps Headmaster Cid after his confrontation with NORG. Nida (another Star Wars reference, along with Biggs and Wedge) is a student at Balamb Garden who passes the SeeD exam along with Squall. He pilots Balamb Garden after it becomes a mobile base. Lastly, Xu is a high-ranking SeeD who helps Squall during the Dollet mission and the Garden civil war between NORG and Cid. She is friends with Quistis and a member of Squall's staff once he becomes the leader of Balamb Garden. ## Merchandise The characters of Final Fantasy VIII have spawned action figures, jewellery and other goods in their likeness. In 1999, action figure lineups were distributed in Japan by Bandai, Kotobukiya, Banpresto, and Coca-Cola. Bandai also released them in Europe and Australasia the same year. In 2004, action figures of Squall, Rinoa and Selphie were distributed in North America by Diamond Comics. Posters of individual characters or a collage of characters are available on many fan websites, including Final Fantasy Spirit. Other products available include mouse pads, keychains, and pens depicting individual characters or sets of characters. ## Reception The characters of Final Fantasy VIII have received praise from reviewers. The Gaming Age reviewer was originally concerned about the shift to consistently realistically proportioned characters, but he ultimately found them more appealing. Moreover, the review stated that the character designs and graphical quality allowed the characters to "convey emotions much more dramatically". Game Revolution offered similar praise, agreeing that the change "really makes the graphics impressive". Jeff Lundigran of IGN commented that the "low-polygon characters of Final Fantasy VII are gone, replaced with sometimes surprisingly realistic high-polygon models that only look better the closer they get". GameSpot agreed with the transition, claiming that "involving, personal, and emotional stories are far more believable when they come from, well, people, not short, bizarrely shaped cartoon characters". The cast itself has received criticism from reviewers. Lundigran criticized the manner in which romantic interactions play out, stating that "considering that the love story is so integral to everything that happens—not to mention forming the central image of the box art—it's incomprehensible why no one says 'I love you' to anyone, ever". With Squall, he felt that "FFVIII does break one cardinal rule: when your story is character centered, you'd better center it on a character the audience can care about. Squall, unfortunately, just doesn't fit the bill". However, GameSpot felt that Final Fantasy VIII shifts the story from the "epic" concepts of VII to the "personal", in that "the characters and their relationships are all extremely believable and complex; moreover, the core romance holds up even under the most pessimistic scrutiny". A later editorial by IGN's Ryan Clements echoed this sentiment, appreciating that Squall and Rinoa's single kiss during the finale serves as "one of the player's main rewards for hours of dedication". Although the reviewer at Official U.S.
PlayStation Magazine acknowledged possible fears over a romantic storyline, he wrote that "it's only later in the game, once you are really attached to all the distinct and complex characters, that the more emotional themes are gradually introduced".
3,199,247
Tropical Storm Vamei
1,172,790,571
Pacific and North Indian tropical cyclone in 2001
[ "2001 North Indian Ocean cyclone season", "2001 Pacific typhoon season", "December 2001 events in Asia", "January 2002 events in Asia", "Retired Pacific typhoons", "Tropical cyclones in 2001", "Tropical cyclones in Indonesia", "Tropical cyclones in Malaysia", "Western Pacific tropical storms" ]
Tropical Storm Vamei (also known as Typhoon Vamei) was a Pacific tropical cyclone that formed about 85 nautical miles (100 mi; 160 km) from the equator—closer than any other tropical cyclone on record. The last storm of the 2001 Pacific typhoon season, Vamei developed on 26 December at 1.4° N in the South China Sea. It strengthened quickly and made landfall along extreme southeastern Peninsular Malaysia. Vamei rapidly weakened into a remnant low over Sumatra on 28 December, and the remnants eventually re-organized in the North Indian Ocean. Afterward, the storm encountered strong wind shear once again, and dissipated on 1 January 2002. Though Vamei was officially designated as a tropical storm, its intensity is disputed; some agencies classify it as a typhoon, based on sustained winds of 120 km/h (75 mph) and the appearance of an eye. The storm brought flooding and landslides to eastern Peninsular Malaysia, causing \$3.58 million in damage (2001 USD) and five deaths. ## Meteorological history On 19 December, a small low-level circulation was located along the northwest coastline of Borneo; at the same time a plume of cold air progressed southward through the South China Sea on the southeastern periphery of a ridge over the Far East. The vortex drifted southwestward, reaching open water by 21 December. The northerly air surge was deflected after interacting with the circulation, and at the same time a portion of the air surge crossed the equator. The southerly flow turned eastward, then northward, and in combination with the northerly flow it wrapped into the vortex, resulting in rapid development of the low-level circulation, just a short distance north of the equator. By 25 December, an area of scattered convection persisted about 370 km (230 mi) east of Singapore within an area of low wind shear, in association with the low-level circulation. Continuing slowly westward, the convection deepened and organized further, and at 12:00 UTC on 26 December the disturbance developed into a tropical depression about 230 km (140 mi) east of Singapore, or 156 km (97 mi) north of the equator. This was the closest recorded formation of a tropical cyclone to the equator. The depression strengthened further and officially attained tropical storm status at 00:00 UTC on 27 December, based on the analysis by the Japan Meteorological Agency (JMA), though the Joint Typhoon Warning Center (JTWC) unofficially classified it as a tropical storm six hours prior. Shortly thereafter, an eye with a 39 km (24 mi) diameter became apparent on satellite imagery, along with rainbands extending southward to the opposite side of the equator. At 06:00 UTC, the JMA first classified the system as Tropical Storm Vamei, about 65 km (40 mi) northeast of Singapore, and the agency estimated the storm attained peak winds of 85 km/h (55 mph) at the same time. However, the JTWC upgraded Vamei to typhoon status with peak winds of 120 km/h (75 mph), based on a United States Navy ship report from within the eye; a second ship reported wind gusts of 195 km/h (120 mph) in the southern portion of the eyewall. The storm was small and compact, with gales extending about 45 km (28 mi) from its center. At about 08:30 UTC on 27 December, Vamei made landfall approximately 60 km (37 mi) northeast of Singapore, in the southeastern portion of the Malaysian state of Johor.
Initially, the Malaysian Meteorological Department (MetMalaysia) classified the cyclone as a tropical storm, though it was later re-assessed as a typhoon at landfall. Vamei weakened quickly as it crossed the extreme southern portion of the Malay Peninsula, and late on 27 December, the JMA downgraded it to tropical depression status before the cyclone emerged into the Straits of Malacca. The JTWC initially maintained it as a minimal tropical storm, though the agency downgraded the storm to depression status as the storm's center again approached land. Early on 28 December, Vamei moved ashore on northeastern Sumatra, and at 06:00 UTC, the JMA classified the storm as dissipated. However, convection persisted near the circulation over land, believed to have been caused by the process known as upper-level diffluence. On 29 December, what was originally believed to be a separate system reached the southeastern Bay of Bengal. In a post-season re-evaluation, the JTWC classified the system as a continuation of Vamei, based on analysis of satellite imagery that indicated the circulation of Vamei crossed Sumatra without dissipating. Convection re-developed, and late on 30 December, the JTWC classified the cyclone as a tropical storm about 390 km (240 mi) west-southwest of the northwestern tip of Sumatra; initially, due to being treated as a separate system, it was classified as Tropical Cyclone 05B. Vamei quickly developed good outflow and organization, though increased wind shear on 31 December rapidly weakened the storm; by late that day, the center was exposed, and Vamei quickly dissipated on 1 January 2002. ### Unusual formation Vamei formed and reached tropical storm strength at 1.4° N, only 84 nautical miles (156 km; 97 mi) from the equator. This broke the previous record of Typhoon Sarah in the 1956 Pacific typhoon season, which reached tropical storm strength at 2.2° N. Due to a lack of Coriolis effect near the equator, the formation of Vamei was previously considered impossible. However, a study by the Naval Postgraduate School indicated that the probability for a similar equatorial development was at least once every four centuries. Vamei developed in a vortex that appears every winter along the northwest coast of Borneo and is maintained by the interaction between monsoonal winds and the local topography. Often, the vortex remains near the coastline, and in an analysis of 51 winters, only six reported the vortex as being over the equatorial waters for four days or more. As the area in the South China Sea between Borneo and Singapore is only 665 km (413 mi) wide, a vortex needs to move slowly to develop. A persistent northerly wind surge for more than five days, which is needed to enhance the vortex, is present, on average, nine days each winter. The probability for a pre-existing tropical disturbance to develop into a tropical cyclone is between 10 and 30 percent. Thus, the conditions which resulted in the formation of Vamei are believed to occur once every 100–400 years. ## Preparations and impact Four days prior to Vamei moving ashore, the Malaysian Meteorological Department (MetMalaysia) issued storm advisories for potentially affected areas. Subsequently, the agency issued warnings for heavy rainfall, high winds, and rough seas. However, few citizens knew of the passage of the rare storm. Offshore of Malaysia, two U.S. Navy ships in Vamei's eyewall were damaged by strong winds. Upon moving ashore, the storm brought storm surge damage to portions of southeastern Peninsular Malaysia. 
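The recurrence figures quoted in the "Unusual formation" subsection above can be combined in a rough back-of-envelope way. The sketch below is only an illustration, not the method of the Naval Postgraduate School study; in particular, the one-in-three surge-overlap factor is an assumption introduced here for the sake of the example.

```python
# Rough, illustrative combination of the figures quoted in the
# "Unusual formation" subsection. NOT the study's actual method;
# the surge-overlap factor is an assumption.

p_slow_vortex = 6 / 51   # winters (of 51 analysed) with the Borneo vortex
                         # over equatorial waters for four days or more
p_surge_overlap = 1 / 3  # assumed chance a persistent (>5-day) northerly
                         # surge coincides with such a vortex

for p_develop in (0.10, 0.30):  # quoted 10-30% chance a disturbance develops
    p_per_winter = p_slow_vortex * p_surge_overlap * p_develop
    print(f"development probability {p_develop:.0%}: "
          f"about one event every {1 / p_per_winter:.0f} winters")
```

With those inputs the estimate comes out at roughly one event every 85–255 winters, broadly in line with the once-per-100–400-years figure quoted above.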
Vamei brought strong winds and heavy rainfall to portions of Melaka, Negeri Sembilan, and Selangor as well as to Johor, where rainfall reached over 200 mm (7.9 in) in Senai. Additionally, monsoonal moisture, influenced by the storm, produced moderate to heavy precipitation across various regions of peninsular Malaysia. The passage of the cyclone resulted in flooding and mudslides, which forced the evacuation of more than 13,195 people in Johor and Pahang states into 69 shelters. Along Gunung Pulai, the rainfall caused a landslide which destroyed four houses and killed five people. River flooding was also reported, as a result of the precipitation from Vamei as well as previous rainfall. Damage from the flooding was estimated at RM13.7 million (2001 MYR). About 40 percent of the damage occurred to crops at a farm in Kota Tinggi. Moderate damage to transportation, education, and health-care facilities was also reported. The Malaysian government provided affected families with up to RM5,000 (2001 MYR) in assistance for food, clothing, and repairs. Vamei also brought heavy rainfall to Singapore, which caused air traffic disruptions at the Singapore Changi Airport. The passage of the cyclone resulted in many downed trees. ## Retirement In 2004, the name "Vamei" was retired and replaced with "Peipah", becoming the first retired name since the Japan Meteorological Agency began naming Pacific typhoons in 2000. ## See also - Cyclone Agni - List of tropical cyclones near the Equator - List of retired Pacific typhoon names
6,799,620
Diamonds Are Forever (novel)
1,165,501,493
1956 novel by Ian Fleming
[ "1956 British novels", "British novels adapted into films", "Gang rape in fiction", "James Bond books", "Jonathan Cape books", "Novels adapted into comics", "Novels adapted into radio programs", "Novels by Ian Fleming" ]
Diamonds Are Forever is the fourth novel by the British author Ian Fleming to feature his fictional British Secret Service agent James Bond. Fleming wrote the story at his Goldeneye estate in Jamaica, inspired by a Sunday Times article on diamond smuggling. The book was first published by Jonathan Cape in the United Kingdom on 26 March 1956. The story centres on Bond's investigation of a diamond-smuggling operation that originates in the mines of Sierra Leone and runs to Las Vegas. Along the way Bond meets and falls in love with one of the members of the smuggling gang, Tiffany Case. Much of Fleming's background research formed the basis for his non-fiction 1957 book The Diamond Smugglers. Diamonds Are Forever deals with international travel, marriage and the transitory nature of life. As with Fleming's previous novels, Diamonds Are Forever received broadly positive reviews at the time of publication. The story was serialised in the Daily Express newspaper, first in an abridged, multi-part form and then as a comic strip. In 1971 it was adapted into the seventh Bond film in the series and was the last Eon Productions film to star Sean Connery as Bond. ## Plot The British Secret Service agent James Bond is sent on an assignment by his superior, M. Acting on information received from Special Branch, M tasks Bond with infiltrating a smuggling ring transporting diamonds from mines in the Crown colony of Sierra Leone to the United States. Bond must infiltrate the smugglers' pipeline to uncover those responsible. Using the identity of "Peter Franks", a country house burglar turned diamond smuggler, he meets Tiffany Case, an attractive gang member who has developed an antipathy towards men after being gang-raped as a teenager. Bond discovers that the ring is operated by the Spangled Mob, a ruthless American gang run by the brothers Jack and Seraffimo Spang. He follows the trail from London to New York. To earn his fee for carrying the diamonds he is instructed by a gang member, Shady Tree, to bet on a rigged horse race in nearby Saratoga. There Bond meets his old friend Felix Leiter, a former CIA agent working at Pinkertons as a private detective investigating crooked horse racing. Leiter bribes the jockey to ensure the failure of the plot to rig the race, and asks Bond to make the pay-off. When he goes to make the payment, he witnesses two homosexual thugs, Wint and Kidd, attack the jockey. Bond calls Tree to enquire further about the payment of his fee and is told to go to the Tiara Hotel in Las Vegas. The Tiara is owned by Seraffimo Spang and operates as the headquarters of the Spangled Mob. Spang also owns an old Western ghost town, named Spectreville, restored to be his own private holiday retreat. At the hotel Bond finally receives payment through a rigged blackjack game where the dealer is Tiffany. After winning the money he is owed he disobeys his orders from Tree by continuing to gamble in the casino and wins heavily. Spang suspects that Bond may be a 'plant' and has him captured and tortured at Spectreville. With Tiffany's help he escapes from Spectreville aboard a railway push-car with Seraffimo Spang in pursuit aboard an old Western train. Bond changes the railway points and re-routes the train onto a dead-end, and shoots Spang before the resulting crash. Assisted by Leiter, Bond and Tiffany go via California to New York, where they board the RMS Queen Elizabeth to travel to London, a relationship developing between them as they go. 
Wint and Kidd observe their embarkation and follow them on board. They kidnap Tiffany, planning to kill her and throw her overboard. Bond rescues her and kills both gangsters; he makes it look like a murder-suicide. Tiffany subsequently informs Bond of the details of the pipeline. The pipeline begins in Africa, where a dentist bribes miners to smuggle diamonds in their mouths; he extracts the gems during routine appointments. From there, the dentist takes the diamonds to a rendezvous with a German helicopter pilot. Eventually the diamonds go to Paris and then on to London. There, after telephone instructions from a contact known as ABC, Tiffany meets a person who explains how the diamonds will be smuggled to New York City. After returning to London—where Tiffany moves into Bond's flat—Bond flies to Freetown in Sierra Leone, and then to the next diamond rendezvous. With the collapse of the rest of the pipeline, Jack Spang (who turns out to be ABC) shuts down his diamond-smuggling pipeline by killing its participants. Spang himself is killed when Bond shoots down his helicopter. ## Background and writing history By mid-1954 the author Ian Fleming had published two novels—Casino Royale (1953) and Live and Let Die (1954)—and had a third, Moonraker, being edited and prepared for production. That year he read a story in The Sunday Times about diamond smuggling from Sierra Leone. He considered this story a possible basis for a new novel and, through an old school friend, he engineered a meeting with Sir Percy Sillitoe, the ex-head of MI5, then working in a security capacity for the diamond-trading company De Beers. The material Fleming gathered was used in both Diamonds Are Forever and The Diamond Smugglers, a non-fiction book published in 1957. After Fleming's friend, Sir William Stephenson, sent him a magazine article about the spa town of Saratoga Springs, Fleming flew to the US in August 1954, where he met his friends Ivar Bryce and Ernest Cuneo; the three travelled to the town in New York State. There, Fleming and Cuneo visited a mud-bath: en route to an up-market establishment they took a wrong turning and ended up at a run-down outlet, which became the inspiration for the Acme Mud and Sulphur Baths scene in the book. Fleming met the rich socialite William Woodward Jr., who drove a Studillac—a Studebaker with a powerful Cadillac engine. According to Henry Chancellor, "the speed and comfort of it impressed Ian, and he shamelessly appropriated this car" for the book. Woodward was killed by his wife shortly afterwards—she claimed she mistook him for a prowler—and when Diamonds Are Forever was published, it was dedicated to Bryce, Cuneo and "the memory of W. W. Jr., at Saratoga, 1954 and 55". Fleming also travelled to Los Angeles with Cuneo, visiting the Los Angeles Police Intelligence headquarters, where they met Captain James Hamilton, who provided Fleming with information on the Mafia organisation in the US. From Los Angeles Fleming travelled to Las Vegas, where he stayed at the Sands Hotel; he interviewed the hotel owner, Jack Entratter, from whom he learnt the background to the security systems and methods of cheating that he used in the novel. Fleming wrote Diamonds Are Forever at his Goldeneye estate in Jamaica in January and February 1955. He followed his usual practice, which he later outlined in Books and Bookmen magazine: "I write for about three hours in the morning ... and I do another hour's work between six and seven in the evening.
I never correct anything and I never go back to see what I have written ... By following my formula, you write 2,000 words a day." On completion Fleming wrote to his friend Hilary Bray: > I baked a fresh cake in Jamaica this year which I think has finally exhausted my inventiveness as it contains every single method of escape and every variety of suspenseful action that I had omitted from my previous books—in fact everything except the kitchen sink, and if you can think up a good plot involving kitchen sinks, please send it along speedily. He returned to London with the completed 183-page typescript in March that year; he had earlier settled on a title, which he based on the advertising slogan "A Diamond is Forever" from the American edition of Vogue. Although Fleming provides no dates within his novels, John Griswold and Henry Chancellor—both of whom have written books on behalf of Ian Fleming Publications—have identified different timelines based on events and situations within the novel series as a whole. Chancellor put the events of Diamonds Are Forever in 1954; Griswold is more precise, and considers the story to have taken place in July and August 1953. ## Development ### Plot inspirations Fleming had previously travelled to the US on the RMS Queen Elizabeth; the experience provided background information for the final four chapters of the novel. His trip had included a railway journey on the Super Chief, during which he and Cuneo had visited the cab to meet the driver and engineer, and an excursion on the 20th Century Limited, both of which gave information Fleming used for Spang's train, the Cannonball. Fleming had a long-standing interest in trains and, following his involvement in a near-fatal crash, associated them with danger. In addition to Diamonds Are Forever, he used them in Live and Let Die, From Russia, with Love and The Man with the Golden Gun. As with several of his other works, Fleming appropriated the names of people he knew for the story's characters. The name of one of Fleming's two travelling companions from the US, Ernest Cuneo, was used as Ernie Cureo, Bond's taxi-driving ally in Las Vegas, and one of the homosexual villains, "Boofy" Kidd, was named after one of Fleming's close friends—and a relative of his wife—Arthur Gore, 8th Earl of Arran, known to his friends as "Boofy". Arran, an advocate of the relaxation of the British laws relating to homosexuality, heard about the use of his name before publication and complained to Fleming about it, but was ignored and the name was retained for the novel. During his trip to America Fleming had come across the name Spang—old German for "maker of shoe buckles"—which he appropriated for the villainous brothers. ### Characters The writer Jonathan Kellerman's introduction to the 2006 edition of Diamonds Are Forever describes Bond as a "surprisingly ... complex" character who, in contrast with the cinematic representation, is "nothing other than human. ... Fleming's Bond makes mistakes and pays for them. He feels pain and regret." The novelist Raymond Benson—who later wrote a series of Bond novels—writes that the character develops in Diamonds Are Forever, building on Fleming's characterisation in his previous three novels. This growth arises through Bond's burgeoning relationship with the book's main female character, Tiffany Case. He falls in love for the first time since Vesper Lynd in Casino Royale.
According to Benson, Tiffany is portrayed as tough, but lonely and insecure, and "is Fleming's first fully developed female character." The cultural historians Janet Woollacott and Tony Bennett write that many of the main female characters in Fleming's novels are uncommon, and that Tiffany—along with Pussy Galore from Goldfinger and Honeychile Rider from Dr. No—has been "damaged ... sexually", having previously been raped. The effect of the trauma has led to Tiffany working for the villain, which allows Bond to complete his mission and align her with a more honest lifestyle. The literary analyst LeRoy L. Panek observes that Diamonds Are Forever, along with Goldfinger and The Man with the Golden Gun, has gangsters, rather than spies, as antagonists; the novel is the only one in the Bond canon without a connection to the Cold War. Panek, comparing the gangsters to Bond's normal adversaries, identifies them as "merely incompetent gunsels" when compared with the British agent, who can eliminate them with relative ease. The essayist Umberto Eco sees the Spangs as being a forerunner of the SPECTRE organisation Fleming uses in his later novels. Kingsley Amis, who later wrote a Bond novel, considered that there was "no decent villain", while Eco judges three of the villains—the two Spang brothers and Winter—as physically abnormal, as many of Bond's adversaries are. Anthony Synnott, in his examination of aesthetics in the Bond novels, also considers that the gangster Michael "Shady" Tree fits into the abnormal category, as he is a red-haired hunchback with "a pair of china eyes that were so empty and motionless that they might have been hired by a taxidermist". ## Style Diamonds Are Forever opens with a passage in which a scorpion hunts and eats its prey, and is subsequently killed by one of the diamond couriers. Eco sees this "cleverly presented" beginning as similar to the opening of a film, remarking that "Fleming abounds in such passages of high technical skill". When the writer William Plomer was proof-reading the manuscript he saw literary merit, and wrote to Fleming that the passages relating to the racing stables at Saratoga were "the work of a serious writer". Kellerman considers that "Fleming's depiction of Las Vegas in the '50s is wickedly spot on and one of the finest renditions of time and place in contemporary crime fiction. The story is robust and complex." Fleming used well-known brand names and everyday details to produce a sense of realism, which Amis called "the Fleming effect". Amis describes "the imaginative use of information, whereby the pervading fantastic nature of Bond's world ... [is] bolted down to some sort of reality, or at least counter-balanced." Benson considers that in Diamonds Are Forever the use of detail is "rich and flamboyant", which allows an "interesting and amusing" description of the US. Benson considers a weakness of the book to be a lack of structural development, although this is compensated for by character development; Kellerman also believes the novel to be "rich in characterization". Benson analyses Fleming's writing style and identifies what he describes as the "Fleming Sweep": a stylistic point that sweeps the reader from one chapter to another using 'hooks' at the end of chapters to heighten tension and pull the reader into the next. Benson feels that the sweep was "at full force" in Diamonds Are Forever, which "maintain[s] a constant level of excitement" as a result.
## Themes According to Benson the main theme of Diamonds Are Forever is expressed in the title, with the permanency of the gemstones held in contrast to other aspects of the story, particularly love and life. Towards the end of the novel Fleming uses the lines "Death is forever. But so are diamonds", and Benson sees the gems as a metaphor for death and Bond as the "messenger of death". The journalist and author Christopher Hitchens observes that "the central paradox of the classic Bond stories is that, although superficially devoted to the Anglo-American war against communism, they are full of contempt and resentment for America and Americans"; Benson sees that Diamonds Are Forever contains examples of Fleming's feelings of superiority towards American culture, including his description of the sleaziness of Las Vegas. Amis, in his exploration of Bond in The James Bond Dossier, pointed out that Leiter is > ... such a nonentity as a piece of characterization ... he, the American, takes orders from Bond, the Britisher, and that Bond is constantly doing better than he, showing himself, not braver or more devoted, but smarter, wittier, tougher, more resourceful, the incarnation of little old England. The cultural historian Jeremy Black points to the theme of international travel in Diamonds Are Forever, which was still a novelty to most people in Britain at the time. This travel between a number of locations exacerbates one of the problems identified by Black: that there was no centre to the story. In contrast to the other novels in the Bond canon, where Casino Royale had Royale, From Russia, with Love had Istanbul and Dr. No had Jamaica, Diamonds Are Forever had multiple locations and two villains, and there was "no megalomaniac fervour, no weird self-obsession, at the dark centre of the plot". According to Fleming's biographer, Andrew Lycett, after the novel was completed, Fleming added four extra chapters "almost as an afterthought", detailing the events on the Queen Elizabeth. This introduced the question of marriage, and allowed Fleming to discuss matrimony through his characters, with Bond telling Case, "Most marriages don't add two people together. They subtract one from the other." Lycett opines that the addition was because of the state of Fleming's own marriage, which was going through a bad time. ## Publication and reception ### Publication history Diamonds Are Forever was published on 26 March 1956 by Jonathan Cape with a cover designed by Pat Marriott. As with the three previous Bond books, the first edition of 12,500 copies sold out quickly; the US edition was published in October 1956 by Macmillan. The novel was serialised in The Daily Express newspaper from 12 April 1956 onwards—the first of Fleming's novels he had sold to the newspaper—which led to an overall rise in the sales of the novels. From November 1956 sales of Diamonds Are Forever, and Fleming's other novels, all rose following the visit of the Prime Minister, Sir Anthony Eden, to Fleming's Goldeneye estate to recuperate following the Suez Crisis; Eden's stay was much reported in the British press. The book received boosts in sales in 1962 when Eon Productions adapted Dr. No for the cinema, and in 1971 when Diamonds Are Forever was produced for the big screen. In February 1958 Pan Books published a paperback version of the novel in the UK, which sold 68,000 copies before the end of the year.
Since its initial publication the book has been issued in numerous hardback and paperback editions, translated into several languages and has never been out of print. In 2023 Ian Fleming Publications—the company that administers all Fleming's literary works—had the Bond series edited as part of a sensitivity review to remove or reword some racial or ethnic descriptors. The rerelease of the series was for the 70th anniversary of Casino Royale, the first Bond novel. ### Reception Julian Symons, reviewing Diamonds Are Forever in The Times Literary Supplement, thought that Fleming had some enviable qualities as a writer, including "a fine eye for places ... an ability to convey his own interest in the mechanics of gambling and an air of knowledgeableness". Symons also saw defects in Fleming's style, including "his inability to write convincing dialogue". For Symons, the novel was Fleming's "weakest book, a heavily padded story about diamond smuggling", where "the exciting passages are few". Milward Kennedy of The Manchester Guardian, thought that Fleming was "determined to be as tough as Chandler, if a little less lifelike", while Maurice Richardson, in The Observer, considered Bond "one of the most cunningly synthesised heroes in crime-fiction". Richardson wrote how "Fleming's method is worth noting, and recommending: he does not start indulging in his wilder fantasies until he has laid down a foundation of factual description." Elements of a review by Raymond Chandler for The Sunday Times were used as advertising for the novel; Chandler wrote that it was "about the nicest piece of book-making in this type of literature which I have seen for a long time ... Mr. Fleming writes a journalistic style, neat, clean, spare and never pretentious". Writing in The New York Times, Anthony Boucher—described by Fleming's biographer John Pearson as "throughout an avid anti-Bond and an anti-Fleming man"—was mixed in his review, thinking that "Mr. Fleming's handling of American and Americans is well above the British average", although he felt that "the narrative is loose-jointed and weakly resolved", while Bond resolves his assignments "more by muscles and luck than by any sign of operative intelligence". ## Adaptations Diamonds Are Forever was adapted as a daily comic strip for the Daily Express newspaper, and syndicated around the world. The original adaptation ran from 10 August 1959 to 30 January 1960. The strip was written by Henry Gammidge and illustrated by John McLusky. The novel was loosely adapted in a 1971 film starring Sean Connery and directed by Guy Hamilton. Diamonds Are Forever was the final Bond film undertaken by Sean Connery with Eon Productions, although he returned to the role of Bond twelve years later for Kevin McClory and Jack Schwartzman's Never Say Never Again. In July 2015 Diamonds Are Forever was broadcast on BBC Radio 4, starring Toby Stephens as Bond; it was directed by Martin Jarvis.
588,579
Ostend Manifesto
1,167,702,536
1854 document on US-Spain relations
[ "1854 documents", "1854 in Cuba", "1854 in the United States", "American political manifestos", "Cuba–United States relations", "Expansion of slavery in the United States", "History of United States expansionism", "Origins of the American Civil War", "Proposed states and territories of the United States", "Slavery in the United States" ]
The Ostend Manifesto, also known as the Ostend Circular, was a document written in 1854 that described the rationale for the United States to purchase Cuba from Spain while implying that the U.S. should declare war if Spain refused. Cuba's annexation had long been a goal of U.S. slaveholding expansionists. At the national level, American leaders had been satisfied to have the island remain in weak Spanish hands so long as it did not pass to a stronger power such as Britain or France. The Ostend Manifesto proposed a shift in foreign policy, justifying the use of force to seize Cuba in the name of national security. It resulted from debates over slavery in the United States, manifest destiny, and the Monroe Doctrine, as slaveholders sought new territory for the expansion of slavery. During the administration of President Franklin Pierce, a pro-Southern Democrat, Southern expansionists called for acquiring Cuba as a slave state, but the outbreak of violence following the Kansas–Nebraska Act left the administration unsure of how to proceed. At the suggestion of Secretary of State William L. Marcy, American ministers in Europe—Pierre Soulé for Spain, James Buchanan for Britain, and John Y. Mason for France—met to discuss strategy related to an acquisition of Cuba. They met secretly at Ostend, Belgium, and drafted a dispatch at Aachen, Prussia. The document was sent to Washington in October 1854, outlining why a purchase of Cuba would be beneficial to each of the nations and declaring that the U.S. would be "justified in wresting" the island from Spanish hands if Spain refused to sell. To Marcy's chagrin, Soulé made no secret of the meetings, causing unwanted publicity in both Europe and the U.S. The administration was finally forced to publish the contents of the dispatch, as demanded by the House of Representatives, and the publication caused it irreparable damage. Dubbed the "Ostend Manifesto", it was immediately denounced in both the Northern states and Europe. The Pierce administration suffered a significant setback, and the manifesto became a rallying cry for anti-slavery Northerners. The question of Cuba's annexation was effectively set aside until the late 19th century, when support grew for Cuban independence from Spain. ## Historical context Located 90 miles (140 km) off the coast of Florida, Cuba had been discussed as a subject for annexation in several presidential administrations. Presidents John Quincy Adams and Thomas Jefferson expressed great interest in Cuba, with Adams observing during his tenure as Secretary of State that it had "become an object of transcendent importance to the commercial and political interests of our Union". He later described Cuba and Puerto Rico as "natural appendages to the North American continent"—the former's annexation was "indispensable to the continuance and integrity of the Union itself." As the Spanish Empire had lost much of its power, a no-transfer policy began with Jefferson whereby the U.S. respected Spanish sovereignty, considering the island's eventual absorption inevitable. The U.S. simply wanted to ensure that control did not pass to a stronger power such as Britain or France. Cuba was of special importance to Southern Democrats, who believed their economic and political interests would be best served by the admission of another slave state to the Union.
The existence of slavery in Cuba, the island's plantation economy based on sugar, and its geographical location predisposed it to Southern influence; its admission would greatly strengthen the position of Southern slaveholders, whose economic position was under threat from abolitionists. Whereas immigration to Northern industrial centers had resulted in Northern control of the population-based House of Representatives, Southern politicians sought to maintain the balance of power in the Senate, where each state received equal representation. As slavery-free Western states were admitted, Southern politicians increasingly looked to Cuba as the next slave state. If Cuba were admitted to the Union as a single state, the island would have sent two senators and up to nine representatives to Washington. In the Democratic Party, the debate over the continued expansion of the United States centered on how quickly, rather than whether, to expand. Radical expansionists and the Young America movement were quickly gaining traction by 1848, and a debate about whether to annex the Yucatán portion of Mexico that year included significant discussion of Cuba. Even John C. Calhoun, described as a reluctant expansionist who strongly disagreed with intervention on the basis of the Monroe Doctrine, concurred that "it is indispensable to the safety of the United States that this island should not be in certain hands", likely referring to Britain. In light of a Cuban uprising, President James K. Polk refused solicitations from filibuster backer John L. O'Sullivan and stated his belief that any acquisition of the island must be an "amicable purchase." Under orders from Polk, Secretary of State James Buchanan prepared an offer of \$100 million, but "sooner than see [Cuba] transferred to any power, [Spanish officials] would prefer seeing it sunk into the ocean." The Whig administrations of presidents Zachary Taylor and Millard Fillmore did not pursue the matter and took a harsher stand against filibusters such as Venezuelan Narciso Lopez, with federal troops intercepting several expeditions bound for Cuba. When Franklin Pierce took office in 1853, however, he was committed to Cuba's annexation. ## The Pierce administration At Pierce's presidential inauguration, he stated, "The policy of my Administration will not be controlled by any timid forebodings of evil from expansion." While slavery was not the stated goal nor Cuba mentioned by name, the antebellum makeup of his party required the Northerner to appeal to Southern interests, so he favored the annexation of Cuba as a slave state. To this end, he appointed expansionists to diplomatic posts throughout Europe, notably sending Pierre Soulé, an outspoken proponent of Cuban annexation, as United States Minister to Spain. The Northerners in his cabinet were fellow doughfaces (Northerners with Southern sympathies) such as Buchanan, who was made Minister to Great Britain after a failed bid for the presidency at the Democratic National Convention, and Secretary of State William L. Marcy, whose appointment was also an attempt to placate the "Old Fogies." This was the term for the wing of the party that favored slow, cautious expansion. In March 1854, the steamer Black Warrior stopped at the Cuban port of Havana on a regular trading route from New York City to Mobile, Alabama. When it failed to provide a cargo manifest, Cuban officials seized the ship, its cargo, and its crew. 
The so-called Black Warrior Affair was viewed by Congress as a violation of American rights; a hollow ultimatum issued by Soulé to the Spanish to return the ship served only to strain relations, and he was barred from discussing Cuba's acquisition for nearly a year. While the matter was resolved peacefully, it fueled the flames of Southern expansionism. Meanwhile, the doctrine of manifest destiny had become increasingly sectionalized as the decade progressed. While there were still Northerners who believed the United States should dominate the continent, most were opposed to Cuba's annexation, particularly as a slave state. Southern-backed filibusters, including Narciso López, had failed repeatedly since 1849 to 1851 to overthrow the colonial government despite considerable support among the Cuban people for independence, and a series of reforms on the island made Southerners apprehensive that slavery would be abolished. They believed that Cuba would be "Africanized," as the majority of the population were slaves, and they had seen the Republic of Haiti established by former slaves. The notion of a pro-slavery invasion by the U.S. was rejected in light of the controversy over the Kansas–Nebraska Act. During internal discussions, supporters of gaining Cuba decided that a purchase or intervention in the name of national security was the most acceptable method of acquisition. ## Writing the Manifesto Marcy suggested Soulé confer with Buchanan and John Y. Mason, Minister to France, on U.S. policy toward Cuba. He had previously written to Soulé that, if Cuba's purchase could not be negotiated, "you will then direct your effort to the next desirable object, which is to detach that island from the Spanish dominion and from all dependence on any European power"—words Soulé may have adapted to fit his own agenda. Authors David Potter and Lars Schoultz both note the considerable ambiguity in Marcy's cryptic words, and Samuel Bemis suggests he may have referred to Cuban independence, but acknowledges it is impossible to know Marcy's true intent. In any case, Marcy had also written in June that the administration had abandoned thoughts of declaring war over Cuba. But Robert May writes, "the instructions for the conference had been so vague, and so many of Marcy's letters to Soulé since the Black Warrior incident had been bellicose, that the ministers misread the administration's intent." After a minor disagreement about their meeting site, the three American diplomats met in Ostend, Belgium from October 9–11, 1854, then adjourned to Aachen, Prussia, for a week to prepare a report of the proceedings. The resulting dispatch, which would come to be known as the Ostend Manifesto, declared that "Cuba is as necessary to the North American republic as any of its present members, and that it belongs naturally to that great family of states of which the Union is the Providential Nursery". Prominent among the reasons for annexation outlined in the manifesto were fears of a possible slave revolt in Cuba parallel to the Haitian Revolution (1791–1804) in the absence of U.S. intervention. The Manifesto urged against inaction on the Cuban question, warning, > We should, however, be recreant to our duty, be unworthy of our gallant forefathers, and commit base treason against our posterity, should we permit Cuba to be Africanized and become a second St. 
Domingo (Haiti), with all its attendant horrors to the white race, and suffer the flames to extend to our own neighboring shores, seriously to endanger or actually to consume the fair fabric of our Union. Racial fears, largely spread by Spain, raised tension and anxiety in the U.S. over a potential black uprising on the island that could "spread like wildfire" to the southern U.S. The Manifesto stated that the U.S. would be "justified in wresting" Cuba from Spain if the colonial power refused to sell it. Soulé was a former U.S. Senator from Louisiana and member of the Young America movement, who sought a realization of American influence in the Caribbean and Central America. He is credited as the primary architect of the policy expressed in the Ostend Manifesto. The experienced and cautious Buchanan is believed to have written the document and moderated Soulé's aggressive tone. Soulé highly favored expansion of Southern influence outside the current Union of States. His belief in Manifest Destiny led him to prophesy "absorption of the entire continent and its island appendages" by the U.S. Mason's Virginian roots predisposed him to the sentiments expressed in the document, but he later regretted his actions. Buchanan's exact motivations remain unclear despite his expansionist tendencies, but it has been suggested that he was seduced by visions of the presidency, which he would go on to win in 1856. One historian concluded in 1893, "When we take into account the characteristics of the three men we can hardly resist the conclusion that Soulé, as he afterwards intimated, twisted his colleagues round his finger." To Marcy's chagrin, the flamboyant Soulé made no secret of the meetings. The press in both Europe and the U.S. were well aware of the proceedings if not their outcome, but were preoccupied with wars and midterm elections. In the latter case, the Democratic Party became a minority in the United States Congress, and editorials continued to chide the Pierce administration for its secrecy. At least one newspaper, the New York Herald, published what Brown calls "reports that came so close to the truth of the decisions at Ostend that the President feared they were based on leaks, as indeed they may have been". Pierce feared the political repercussions of confirming such rumors, and he did not acknowledge them in his State of the Union address at the end of 1854. The administration's opponents in the House of Representatives called for the document's release, and it was published in full four months after being written. ## Fallout When the document was published, Northerners were outraged by what they considered a Southern attempt to extend slavery. American free-soilers, recently angered by the strengthened Fugitive Slave Law (passed as part of the Compromise of 1850 and requiring officials of free states to cooperate in the return of slaves), decried as unconstitutional what Horace Greeley of the New York Tribune labeled "The Manifesto of the Brigands." During the period of Bleeding Kansas, as anti- and pro-slavery supporters fought for control of the state, the Ostend Manifesto served as a rallying cry for the opponents of the Slave Power. The incident was one of many factors that gave rise to the Republican Party, and the manifesto was criticized in the Party's first platform in 1856 as following a "highwayman's" philosophy of "might makes right." But, the movement to annex Cuba did not fully end until after the American Civil War. 
The Pierce Administration was irreparably damaged by the incident. Pierce had been highly sympathetic to the Southern cause, and the controversy over the Ostend Manifesto contributed to the splintering of the Democratic Party. Internationally, it was seen as a threat to Spain and to imperial power across Europe. It was quickly denounced by national governments in Madrid, London, and Paris. To preserve what favorable relations the administration had left, Soulé was ordered to cease discussion of Cuba; he promptly resigned. The backlash from the Ostend Manifesto caused Pierce to abandon expansionist plans. It has been described as part of a series of "gratuitous conflicts ... that cost more than they were worth" for Southern interests intent on maintaining the institution of slavery. James Buchanan was easily elected President in 1856. Although he remained committed to Cuban annexation, he was hindered by popular opposition and the growing sectional conflict. It was not until thirty years after the Civil War that the so-called Cuban Question again came to national prominence. ## See also - Annexation of Santo Domingo - Cuba–United States relations - Golden Circle (proposed country) - Spain–United States relations
37,856,032
Three-cent nickel
1,151,819,718
US copper-nickel three-cent coin (1865–1889)
[ "1865 introductions", "Goddess of Liberty on coins", "Three-cent coins of the United States" ]
The copper-nickel three-cent piece, often called a three-cent nickel piece or three-cent nickel, was designed by US Mint Chief Engraver James B. Longacre and struck by the United States Bureau of the Mint from 1865 to 1889. It was initially popular, but its place in commerce was supplanted by the five-cent piece, or nickel. With precious metal federal coinage hoarded during the economic turmoil of the American Civil War, including the silver three-cent piece, and even the copper-nickel cent commanding a premium, Congress issued paper money in denominations as small as three cents to replace the hoarded coins in commerce. These small slips of paper became ragged and dirty, and the public came to hate "shinplasters". After the issuance in 1864 of a lighter bronze cent and a two-cent piece of that metal, both of which circulated freely, there were proposals for a three-cent piece in copper-nickel to replace the three-cent note. The advocates were led by Pennsylvania industrialist Joseph Wharton, who then controlled the domestic supply of nickel ore. On the last legislative day of the congressional session, March 3, 1865, a bill for a three-cent piece in copper-nickel alloy was introduced in Congress, passed both houses without debate, and was signed by President Abraham Lincoln. The three-cent nickel piece initially circulated well, but became less popular when the five-cent nickel was introduced in 1866, a larger, more convenient coin, with a value of five cents better fitting the decimal system. After 1870, most years saw low annual mintages for the three-cent nickel, and in 1890 Congress abolished it. The last were struck in 1889; many were melted down to coin more five-cent pieces. The issue is not widely collected, and prices for rare dates remain low by the standards of American collectible coinage. ## Background The great influx of bullion from the California Gold Rush and other finds caused the price of silver relative to gold to increase starting in 1848, and silver coins were hoarded or exported for melting. In 1851, a bill for a three-cent piece in 75% silver and 25% copper was introduced in Congress by New York Senator Daniel S. Dickinson, who wanted to lower postage rates from five to three cents. This percentage of silver was less than the normal 90% so that the coins would circulate at a time of hoarding. The copper large cent did not circulate in the Pacific Coast region or South due to prejudice against coins that did not contain precious metal, and some means of allowing the purchase of a postage stamp without the use of copper cents was necessary. Dickinson's bill passed on March 3, 1851, and in addition to authorizing the new three-cent silver, lowered rates for most domestic mails. By 1854, the imbalance had abated, and Congress increased the silver content of the three-cent piece to the standard 90% for silver coins, though its weight was reduced. The large cent was replaced by a smaller version made of 88% copper and 12% nickel in 1857. In 1861, the Civil War began, and when efforts to finance the war via borrowing failed, the Treasury stopped paying out gold in December 1861. The United States shifted to a paper money-based economy with little disruption. By June 1862, the price of silver had risen to the point where coins of that metal vanished from circulation, many exported to Canada, where they were both acceptable in circulation, and could be exchanged for gold. 
This departure of low-value coins was far more disruptive to commerce than the loss of the high-denomination gold coins, and change in transactions was made by a variety of makeshifts. These included currency issues by cities and businesses, encased postage stamps, and federally issued fractional currency—paper notes in denominations as small as three cents. The low-value paper currency, whether issued by government or business, was called shinplasters by the public, which disliked them. On the Pacific Coast, where paper money was not favored, silver and gold continued to circulate. Since fractional currency in three-cent denominations did not appear until late 1864, the cent was the only circulating means of making change from the five-cent note, and in 1862 and 1863 came to command a premium of about 4% when sold in lots. The Philadelphia Mint tried to keep up with demand, limiting public purchases of cents to five dollars, and sending shipments to major cities. Despite these attempts, Mint Director James Pollock noted in his annual reports that cents were almost unobtainable, hoarded despite the fact that their metallic value remained less than one cent each. Numismatist Neil Carothers theorized that they were put aside by the public as the only circulating federal coinage, made of metal at a time when the public was forced to accept flimsy pieces of paper instead of silver and gold. With cents from the Philadelphia Mint selling at a premium, many private tokens were issued in 1863 and passed as cents in commerce. Mint officials took notice that the tokens, often made of bronze rather than the copper-nickel alloy then being used in the cent, were not hoarded, and began to consider issuing bronze coins. When Pollock proposed legislation for bronze one-, two-, and three-cent pieces, it was opposed by industrialist Joseph Wharton, owner of the major source of nickel in the United States at the time, a mine at Gap, Pennsylvania. Pollock's bill, as introduced, provided for one- and two-cent pieces of bronze, and the Wharton interests fought it. According to Carothers, > Congress declined to compromise with the nickel interests ... In the House, its opponents managed to delay its passage for a month. Thaddeus Stevens, one of the most influential men in the House, fought it bitterly, admitting, however, that he objected to it because it adversely affected Wharton's interests. The Coinage Act of 1864 passed into law on April 22 of that year. After entering circulation several months later, the bronze cent and two-cent piece were used in trade without being hoarded. The bronze alloy was easier to strike than the copper-nickel one, allowing details to be brought forth sharply and extending the life of coinage dies. ## Legislation Nickel, formerly used in the cent, now had no place in American coinage. This was unsatisfactory to Wharton, who sought its return. Although Pollock made no mention of further nickel coinage in his 1864 annual report, Wharton in April of that year published a pamphlet proposing that all non-precious metal coinage be composed of 75% copper and 25% nickel. The copper-nickel cents had contained only 12% nickel, and even so had been difficult for the Mint to strike due to the hardness of the metal, the use of which damaged equipment and quickly broke dies. An alloy of 25% nickel would be even more difficult to coin. Wharton argued that the tough alloy would be difficult to counterfeit.
Congress had, by the Act of March 3, 1863, authorized fractional currency in the denomination of 3 cents; when these notes reached circulation the following year, they proved wildly unpopular. The 1864 law, which had substituted bronze for copper-nickel, had also outlawed "copperheads", or private token issues. Even though these could now only be issued anonymously, and so could not be redeemed, the copperheads were preferred to the 3-cent shinplasters. Some copperhead tokens even read "Substitute for shinplasters". The notes soon became filthy and ragged, making them even more disliked. They were more difficult to value in quantity than notes with denominations divisible by five. According to Walter Breen, "This was the moment Wharton's supporters had been waiting for." Wharton and his advocates argued that the three-cent notes should be redeemed with equivalent coins. They contended that were Congress to order a three-cent bronze coin, such a piece would be as big as an obsolete large cent, and might be used to deceive the blind into accepting the pre-1857 cent rather than the more valuable coin. Pollock, previously an opponent of nickel coinage, had a change of heart and became a supporter. There are several slightly varying accounts of why the bill for the three-cent nickel passed. Breen told of the pressure advocates for nickel put on House Coinage Committee chairman John Adam Kasson, finally winning him over to the position that even 25% nickel coins (which would be hard on the Mint's equipment) would be better than the continued use of shinplasters, and presenting him with a draft of a bill for a three-cent piece of that alloy. The bill made the new coin legal tender to sixty cents. The 1864 act had made the cent legal tender to ten cents, and the two-cent piece to twenty; both limits were reduced to four cents. The bill did not abolish the three-cent silver piece, which was still being struck in small quantities. The new copper-nickel coins would be issued in exchange for three-cent shinplasters—the Currency Bureau was instructed to print no more three-cent notes. The bill passed the House of Representatives on the evening of March 3, 1865. At the time, it was usual to extend the final day of the congressional session in odd-numbered years to noon on March 4, and this occurred. The Senate took up the bill late on the morning of March 4. Action was repeatedly interrupted, first by Ohio Senator John Sherman reporting progress on an appropriations bill, then by Iowa's James Grimes stating that ticket holders for the inaugural festivities at noon were being soaked by rain outside the Capitol, causing some debate as to whether they should be admitted early. Once female guests were admitted (males were left outside), the Senate passed the three-cent nickel bill without debate, and it was shortly thereafter signed by President Abraham Lincoln. Q. David Bowers said of the sudden passage of the legislation, "We can only guess what happened behind the scenes". Carothers wrote that Kasson had opposed nickel coinage, but nevertheless introduced the bill for it during the rush of the final day of the congressional session: "There was no report and no explanation ... The influences that brought about the passage of the measure in this fashion were never revealed." Numismatic historian Don Taxay suggested that by March 3, 1865, "the wide circulation of the bronze cent and two-cent piece had made a three-cent coin superfluous." ## Design Mint Chief Engraver James B. 
Longacre had, since 1849, designed coins with various visages of the goddess Liberty, based on a bust, Venus Accroupie, he had seen in a Philadelphia museum on loan from the Vatican. Although the Liberty as used on the three-cent nickel piece is closest to Longacre's experimental cents of 1857 and quarter eagles of 1860, she resembles most of the Chief Engraver's other depictions of Liberty. On the three-cent piece, she wears a coronet with her name on it, and a ribbon binds her hair. For the reverse, Longacre combined the Roman numeral III as rendered on the silver three-cent piece with the laurel wreath used on the 1859 Indian Head cent reverse. Breen suggested that the similarity of design to other Longacre coins has contributed to the low level of collector interest in the three-cent nickel. According to Lange, "resourceful as always, J.B. Longacre simply revised an existing image of Liberty for the obverse of the nickel three-cent piece. The same classical profile that appears on the Indian Head cent, the gold dollar, and the \$3 piece is seen fitted with a new hairstyle and a studded coronet inscribed Liberty." The act that authorized the three-cent nickel contained a provision requiring the use of the motto "In God We Trust" on all pieces large enough to bear it, but the new coin was deemed too small. No change was made to the design of the three-cent piece in nickel during its lifetime. ## Production ### Early years (1865–73) The three-cent nickel piece was very popular when it entered circulation in mid-1865. More convenient than the larger two-cent bronze piece, it largely replaced that coin, starting the two-cent on its way to decreased popularity and abolition in 1873. The hard alloy, though, caused high levels of die breakage. Between 1865 and 1876, some 17 million three-cent pieces were used by the government to redeem the three-cent fractional currency notes. The Wharton nickel interests were not satisfied by the issuance of the three-cent piece, and soon began to agitate for the passage of a five-cent coin, to be made of the same alloy as the three-cent piece. The Act of May 16, 1866 introduced the five-cent nickel piece, or "nickel", as it has come to be known. According to David Lange in his history of the Mint, the five-cent piece has "become one of the mainstays of the country's coinage". The new five-cent coin was legal tender up to a dollar. The introduction of the five-cent copper-nickel piece greatly decreased the popularity and use of the three-cent piece. The three-cent piece had debuted in 1865 with a mintage of over eleven million and nearly five million in 1866; thereafter strikings declined, falling to under a million by 1871, a figure the coin would thereafter exceed only twice. The public had preferred small bronze coins to paper money, then the three-cent nickel piece rather than the bronze; they now preferred the five-cent nickel to the three. One reason for this was that the base metal five-cent piece would be redeemed by the government if presented in \$100 lots pursuant to a provision in the authorizing legislation. There was no such provision for the three-cent nickel piece; neither was there any for the other base metal coins. 
Following Pollock's resignation in 1866 over his objections to President Johnson's Reconstruction policies, the new Mint Director was Henry Linderman, who, in his first annual report in 1867, described the redemption clause in the nickel's authorizing legislation as "a most wise and just provision", urging its extension to the cent, two-cent piece, and three-cent piece. Postmasters were compelled to take three-cent nickel pieces in exchange for stamps, but had difficulty in depositing them in the Treasury in payment of their obligations, as the government would take no more than sixty cents' worth of them in a single transaction. Private individuals and firms similarly refused them beyond the legal tender limit; those with a surplus of base metal coins often sold them at a discount. Congress took no action on a redemption bill, and in 1868 Linderman wrote again in his annual report, urging that the public be allowed to redeem small-denomination coins, as commerce was flooded with them. He disclosed that he had been redeeming the old copper-nickel cents with three-cent pieces and nickels. Carothers pointed out that exchanging the copper-nickel pieces for cents violated the 1865 and 1866 acts, which stated that the three-cent piece and nickel could not be purchased with cents, but only with greenbacks or specie. Linderman strongly advocated a redemption law to relieve the glut of small coins: 
> But the government that sold these tokens at par for their face value, or paid them as money to its creditors, now turns round and refuses to receive them back in payment from its own officers ... Was there ever an act of the government of a respectable people that, for meanness, can compare with this? An individual that practiced such a confidence game would be branded as a two-penny thief, and would soon be consigned to a house of correction. A government that practices such frauds upon the people cannot hope long to receive the respect of anybody. In 1866, Treasury Department official John Jay Knox was sent to examine the San Francisco Mint. After his return to Washington, he submitted a report that recommended many changes to how the Mint did business, including reform of the base-metal coinage. Knox complained that the various enactments for non-specie coinage were "entirely disconnected and incongruous". Linderman submitted legislation to discontinue fractional currency of less than 25 cents and to authorize copper-nickel coins of one, three and five cents, legal tender and redeemable, and, in the case of the three-cent piece, larger and heavier than the existing coin. Linderman's bill was introduced by Pennsylvania Representative William D. Kelley in February 1868. It passed the House in amended form, but was not voted on in the Senate. Kelley tried again in the following term of Congress, and the bill met the same fate as its predecessor. Pollock returned to office as Mint Director in 1869. Although Pollock opposed redemption, Treasury Secretary George S. Boutwell did not, and a bill allowing for redemption of base-metal coins in lots of at least \$20 was signed into law by President Ulysses S. Grant on March 3, 1871. By then, early versions of what became the Coinage Act of 1873 were being considered by Congress. This was a major piece of legislation that reformed the laws relating to the Mint. As introduced by Ohio Senator John Sherman on April 28, 1870, it included Linderman's proposal for the use of copper-nickel in the minor coins. The debate over the bill stretched over the next three years. 
The use of nickel was a sticking point for the legislation; some congressmen alleged that the whole point of the bill was to benefit Wharton. Between 1870 and 1872, different versions of the bill, with a larger three-cent piece, twice passed the House and once the Senate, but differences between the houses could not be reconciled. After the second House passage, in May 1872, the Senate Finance Committee struck the provisions for copper-nickel coinage. After a conference committee met, both houses passed a version that left the cent, three-cent nickel piece, and nickel unaltered, and it was signed by President Grant on February 12, 1873. The act eliminated the two-cent piece, silver three-cent piece, silver half dime and the standard silver dollar (the last denomination was reinstated in 1878). The three-cent piece was made legal tender to twenty-five cents, as were the other two base-metal coins, the cent and nickel (the surviving silver coins were legal tender to five dollars). Numismatic writer Breen deemed the decision to eliminate the silver three-cent piece and the half dime, which might directly compete with the two copper-nickel coins, a favor to Wharton. Carothers called the abolition of the silver three- and five-cent pieces "a necessity if the 3 cent and 5 cent nickel pieces were to be continued after the revival of silver coinage". ### Decline and end (1873–90) On January 18, 1873, Philadelphia Mint Chief Coiner A. Loudon Snowden formally complained to Pollock that on the new year's coins, the digit "3" too closely resembled an "8". Pollock ordered Chief Engraver William Barber (Longacre had died in office on January 1, 1869) to redo the logotypes for the date. Thus, most denominations of American coinage dated 1873, including the three-cent nickel piece, have varieties: the Close (or Closed) 3 from early in the year, and the Open 3 from after Barber made his modifications. A total of 390,000 Closed 3 and 783,000 Open 3 of the three-cent nickel were minted. Numismatist Bruce C. Goldstein indicated that several factors combined to keep the nickel three-cent piece in decline after the passage of the 1873 act. Less and less fractional currency was being redeemed, as almost a decade had passed since the issuance of three-cent notes. Rich silver strikes in the West lowered the price of that metal to the point where old silver coins emerged from hoarding and circulated again. These factors, combined with ample stocks of cents and nickels, made the three-cent nickel, a non-silver coin of odd denomination, less desirable. By 1876, the mintage for circulation had declined to 162,000. None were struck for circulation in 1877 and 1878, though some proof coins were minted for sale to the public. Although more than a million were minted in 1881, another blow to the three-cent piece occurred on October 1, 1883, when first-class mail rates were lowered from three to two cents for the first 0.5 ounces (14 g). Although the rate for pieces weighing up to 1 ounce (28 g) initially remained at three cents, the two-cent rate was extended to one ounce effective July 1, 1885. Deprived of the original reason for the denomination's existence, no three-cent pieces were struck for circulation in 1886 (though several thousand proof coins were produced), and in the three remaining years of the piece's life, a total of less than 60,000 circulation strikes were minted. 
As the production of three-cent pieces dwindled, the other non-specie coins prospered, with record numbers of cents being struck in the 1880s to address the need to make change, and for penny arcade machines. The nickel proved popular in slot machines and street railways, which often set fares at five cents. With silver again circulating, the three-cent piece became more unpopular because it was almost the same diameter as the dime, leading to confusion and small frauds. Beginning in 1880, in their annual reports, the Mint Director and Treasury Secretary appealed to Congress to discontinue the three-cent piece. The last three-cent pieces were struck in 1889, and the denomination was discontinued, along with the gold dollar and the three-dollar piece, by the Act of September 26, 1890. Many of the coins from 1888 and 1889 were still held at the Treasury Department and were melted after passage of the act, the fate of millions more as they flowed back from banks. The resultant metal contributed to large mintages of the Liberty Head nickel between 1890 and 1893. One proposal to revive the three-cent piece was made in 1911, when Mayors Brand Whitlock of Toledo, Ohio, and Newton D. Baker of Cleveland sent a joint memorial to Congress urging its return. The following year, a subcommittee of the House Committee on Coinage, Weights and Measures held a hearing on bills to authorize a copper-nickel three-cent piece and to change the composition of the cent to copper-nickel. Mint Director George E. Roberts testified and indicated he had no objection to a three-cent piece, as there was at least limited demand for a coin larger than the cent and smaller than the nickel. In 1936, a bill for a three-cent nickel was among various coin legislation considered by the Senate Banking Committee. In 1942, Congress granted the Treasury Secretary the temporary authority to change the composition of the nickel because of wartime metal shortages, and if public demand for the five-cent piece required it, the Mint could strike three-cent pieces. Nothing came of any of the proposals. The three-cent piece was confirmed as fully legal tender by the Coinage Act of 1965, which proclaimed all coin and currency of the United States good to any amount for payment of public and private debt. By then, that coin had long since passed from the scene. ## Collecting According to the 2018 edition of R.S. Yeoman's A Guide Book of United States Coins, only the pieces from 1882 to 1887 catalog for more than \$100 in worn Good-4 condition; common dates list for \$15 to \$20 in that condition. The highest value listed is for the 1877, struck only in proof with a mintage of 900, at \$2,000. David F. Fanning, in his 2001 article on the three-cent pieces, suggested that rarer specimens of the nickel three-cent piece are relatively inexpensive compared with coins of similar mintage in more popular series, such as the Morgan dollar. The design of the three-cent nickel piece remained stable throughout its run, and there are few varieties. An overdate is known, 1887/6. The die that struck those coins was originally dated 1886, when no circulation strikes were made. So the die would not be wasted, the Mint altered the last digit from a 6 to a 7; evidence of both numbers is visible. Some 1865 pieces in proof condition display a wreath on the reverse that comes much closer to touching the rim than on later issues. These were most likely pattern coins but they are accepted as issued because the Mint placed them in some 1865 proof sets. 
Many of the three-cent nickel pieces were not fully struck, and are missing details of the design; this is because the head of Liberty is directly opposite the III, and the Mint had trouble getting the hard metal alloy to adequately flow to the high points of both sides.
20,287,616
Microsoft Security Essentials
1,168,048,552
Free antivirus product produced by Microsoft for the Windows operating system
[ "2009 software", "Antivirus software", "Microsoft software", "Windows security software", "Windows-only freeware" ]
Microsoft Security Essentials (MSE) is an antivirus software (AV) product that provides protection against different types of malicious software, such as computer viruses, spyware, rootkits, and Trojan horses. Prior to version 4.5, MSE ran on Windows XP, Windows Vista, and Windows 7, but not on Windows 8 and later versions, which have built-in AV components known as Windows Defender. MSE 4.5 and later versions do not run on Windows XP. The license agreement allows home users and small businesses to install and use the product free of charge. It replaces Windows Live OneCare, a discontinued commercial subscription-based AV service, and the free Windows Defender, which only protected users from spyware until Windows 8. Built upon the same scanning engine and virus definitions as other Microsoft antivirus products, it provides real-time protection, constantly monitoring activities on the computer, scanning new files as they are created or downloaded, and disabling detected threats. It lacks the OneCare personal firewall and the Forefront Endpoint Protection centralized management features. Microsoft's announcement of its own AV software on 18 November 2008 was met with mixed reactions from the AV industry. Symantec, McAfee, and Kaspersky Lab—three competing independent software vendors—dismissed it as an unworthy competitor, but AVG Technologies and Avast Software appreciated its potential to expand consumers' choices of AV software. AVG, McAfee, Sophos, and Trend Micro claimed that the integration of the product into Microsoft Windows would be a violation of competition law. The product received generally positive reviews, praising its user interface, low resource usage, and freeware license. It secured AV-TEST certification in October 2009, having demonstrated its ability to eliminate all widely encountered malware. It lost that certification in October 2012; in June 2013, MSE achieved the lowest possible protection score, zero. However, Microsoft significantly improved this product during the couple of years preceding February 2018, when MSE achieved AV-TEST's "Top Product" award after detecting 80% of the samples used during its test. According to a March 2012 report by anti-malware specialist OPSWAT, MSE was the most popular AV product in North America and the second most popular in the world, which has resulted in the appearance of several rogue antivirus programs that try to impersonate it. ## Features Microsoft Security Essentials automatically checks for and downloads the latest virus definitions, which are updated three times a day, from Microsoft Update. Users may alternatively download the updates manually from the Microsoft Security Portal website. On 30 September 2011, a faulty definition update caused the product to incorrectly tag Google Chrome as malware. The issue was resolved within three hours. MSE originally ran on Windows XP, Windows Vista and Windows 7, although versions 4.5 and later do not run on Windows XP and Microsoft stopped producing automatic definition updates for Windows XP on 14 July 2015 (however, manual definition updates are still available for Windows XP users who run older versions of MSE). MSE is built upon the same foundation as other Microsoft security products; they all use the same anti-malware engine known as Microsoft Malware Protection Engine (MSMPENG) and virus definitions. It does not have the personal firewall component of OneCare and the centralized management features of Forefront Endpoint Protection. 
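As a rough illustration of the manual definition-update path described above, the sketch below shows how a script might trigger a definition refresh and a quick scan from the command line. This is a minimal sketch under stated assumptions, not behaviour documented in this article: it assumes the product's install directory contains the Malware Protection Engine command-line utility MpCmdRun.exe and that the utility accepts the -SignatureUpdate and -Scan switches used elsewhere in Microsoft's anti-malware family; the install path shown is likewise an assumption.

```python
# Minimal sketch: manually refresh definitions and run a quick scan.
# Assumptions (not stated in the article): MpCmdRun.exe ships in the MSE
# install directory and supports the -SignatureUpdate and -Scan switches.
import subprocess
from pathlib import Path

# Assumed default install location; adjust if the product lives elsewhere.
MPCMDRUN = Path(r"C:\Program Files\Microsoft Security Essentials\MpCmdRun.exe")

def run_mpcmdrun(args):
    """Invoke the command-line utility, echo its output, and return its exit code."""
    result = subprocess.run([str(MPCMDRUN), *args], capture_output=True, text=True)
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    if not MPCMDRUN.exists():
        raise SystemExit("MpCmdRun.exe not found at the assumed install path.")
    run_mpcmdrun(["-SignatureUpdate"])          # pull the latest virus definitions
    run_mpcmdrun(["-Scan", "-ScanType", "1"])   # assumed: 1 = quick scan
```

In normal use the graphical client performs both steps automatically on the schedule described above; a script along these lines would matter only for automation, for example when run from Task Scheduler.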
MSE provides real-time protection, constantly monitoring activities on the computer, scanning new files as they are created or downloaded from the Internet. It quarantines detected threats and prompts for user input on how to deal with them. If no response is received within ten minutes, suspected threats are handled according to the default actions defined in the application's settings. Depending on those settings, it may also create System Restore checkpoints before removing the detected malware. As a part of real-time protection, MSE reports all suspicious behaviors of monitored programs to Microsoft Active Protection Service (MAPS, formerly Microsoft SpyNet) by default. If the report matches a newly discovered malware threat with an unreleased virus definition, the new definition will be downloaded to remove the threat. Hardware requirements for the product depend on the operating system; on a computer running Windows Vista or Windows 7, it requires a 1 GHz processor, 1 GB of RAM, a computer monitor with a display resolution of at least 800 × 600 pixels, 200 MB of free hard disk space and a stable Internet connection. ## Development On 18 November 2008, Microsoft announced plans for a free consumer security product, codenamed Morro. This development marked a change in Microsoft's consumer AV marketing strategy: instead of offering a subscription-based security product with a host of other tools, such as backup and a personal firewall, Morro would offer free AV protection with a smaller impact on system resources. Amy Barzdukas, senior director of product management for the Online Services and Windows Division at Microsoft, announced that Morro would not directly compete with other commercial AV software; rather it was focused on the 50 to 60 percent of PC users who did not have or would not pay for AV protection. By 17 June 2009, the official name of Morro was revealed: Microsoft Security Essentials. On 23 June 2009, Microsoft released a public beta to 75,000 users in the United States, Israel, China and Brazil. Anticipated to be available in 20 markets and 10 languages, the product was scheduled for release before the end of 2009; the final build was released on 29 September 2009. ### Version 2.0 Almost a year after the initial release, Microsoft quietly released the second version. It entered the technical preview stage on 19 July 2010, and the final build was released on 16 December 2010. It includes Network Inspection System (NIS), a network intrusion detection system that works on Windows Vista and Windows 7, as well as a new anti-malware engine that employs heuristics in malware detection. Version 2.0 integrates with Internet Explorer to protect users against web-based threats. NIS requires a separate set of definition updates. ### Version 4.0 Sixteen months after the release of version 2.0, Microsoft skipped version 3.0 and released Microsoft Security Essentials 4.0. A public beta program started on 18 November 2011, when Microsoft sent out invitations to potential participants without announcing a version number. The first beta version was released on 29 November 2011, and the final build on 24 April 2012. Microsoft subsequently initiated a pre-release program that provides volunteers with the latest beta version and accepts feedback. ### Version 4.5 On 21 February 2014, version 4.5 entered beta stage. On the same day, Microsoft announced that starting with this version, Windows XP would not be supported. 
Older versions would continue to receive automatic virus definition updates until 14 July 2015 (afterwards, users of older versions could still update definitions manually using Microsoft's site). ### Version 4.10 The latest version, 4.10, was released on 29 November 2016. It was version 4.10.209.0 for Windows Vista and Windows 7. This update fixes a bug that was introduced earlier in version 4.10.205.0, which removed the "Scan with Microsoft Security Essentials" entry from the right-click context menu on files and folders. ### Discontinuation Support for MSE has officially ended for Windows Vista and Windows XP. Older versions still function on those systems; however, the latest definition updates are no longer compatible. Although support for Windows 7 ended on 14 January 2020, Microsoft will continue to update virus definitions for existing users until 2023. Microsoft Security Essentials does not run on Windows 8 and later, which has its own security subsystem, Windows Defender. On 13 September 2011, at the Build conference in Anaheim, California, Microsoft unveiled the developer preview of Windows 8, which had a security component capable of preventing an infected USB flash drive from compromising the system during the boot process. On 15 September, the Windows 8 developer's blog confirmed that Windows Defender in Windows 8 would take over the role of virus protection. In an included video, Jason Garms of Microsoft showed how Windows Defender is registered with Action Center as an AV and spyware protection tool, and how it blocks drive-by malware. On 3 March 2012, Softpedia reviewed the consumer preview of Windows 8 and noted the similarity in appearance of Windows Defender and Microsoft Security Essentials 4.0 Beta. According to Softpedia, Windows 8 Setup requires Microsoft Security Essentials to be uninstalled before upgrading from Windows 7. ## Licensing The product's license agreement allows home users to download, install and use it on an unlimited number of computers in their households free of charge, as long as each computer has a legitimately licensed copy of Microsoft Windows. Since October 2010, small businesses have also been allowed to install the product on up to 10 devices, but use in academic institutions and governmental locations is forbidden, as is reverse-engineering, decompiling or disassembling the product or working around its designed limitations. MSE requires no registration or personal information to be submitted during installation; however, the validity of the operating system's license is verified during and after installation using the Windows Genuine Advantage system. If said license is found to be invalid, the software will notify the user and will cease to operate after a period of time. ## Reception ### Industry response The announcement and debut of Microsoft Security Essentials was met with mixed responses from the AV industry. Symantec, McAfee and Kaspersky Lab, three competing vendors, claimed it to be inferior to their own software. Jens Meggers, Symantec's vice president of engineering for Norton products, dismissed it as "very average – nothing outstanding". Tom Powledge of Symantec urged his customers to be mindful of what protection they chose, bearing in mind that OneCare offered "substandard protection" and an "inferior user experience". Joris Evers, director of worldwide public relations for McAfee, stated "with OneCare's market share of less than 2%, we understand Microsoft's decision to shift attention to their core business." 
Justin Priestley of Kaspersky stated that Microsoft "continued to hold a very low market share in the consumer market, and we don't expect the exit of OneCare to change the playing field drastically." Avast Software said that it had an ambivalent view towards the product. Vincent Steckler, Avast Software CEO wrote in a blog post "MSE is not the silver bullet but it is also not the bad sequel to One Care [sic] that some claim." A representative of AVG Technologies stated, "We view this as a positive step for the AV landscape. AVG has believed in the right to free antivirus software for the past eight years." However, AVG raised the issue of distributing the software product and said, "Microsoft will have to do more than simply make the product available," adding that integration of Microsoft Security Essentials with Microsoft Windows would be a violation of competition law. McAfee, Sophos and later Trend Micro affirmed that an antitrust lawsuit would surely have followed if Microsoft had bundled the product with Windows. The announcement of Microsoft Security Essentials affected the stocks of AV vendors. On 19 November 2008, after Microsoft announced codename Morro, Symantec and McAfee shares fell 9.44 and 6.62 percent respectively. On 10 June 2009, after announcing an upcoming beta version, Microsoft shares rose 2.1 percent, while Symantec and McAfee fell 0.5 and 1.3 percent respectively. Daniel Ives, an analyst with FBR Capital Markets, said that Microsoft Security Essentials would be a "long-term competitive threat", although near-term impact would be negligible. ### Reviews and awards The public beta version received several reviews, citing its low resource usage, straightforward user interface and price point. Brian Krebs of The Washington Post reported that a quick scan on a Windows 7 computer took about 10 minutes and a full scan about 45 minutes. Ars Technica reviewed it positively, citing its organized interface, low resource usage, and its status as freeware. Nick Mediati of PCWorld noted MSE's "clear-cut" and "cleanly designed" tabbed user interface. He did, however, find some of the settings to be cryptic and confusing, defaulting to "recommended action", with the only explanation of what that action is to be found in the help file. He was also initially confused because the user interface failed to mention that Microsoft Security Essentials automatically updates itself, rather than having to be manually updated via the Update tab; an explanation of this feature was included in the final release. Neil Rubenking of PC Magazine successfully installed the beta version on 12 malware-infected systems and commented on its small installation package (about 7 MB, depending on the operating system) and speedy installation. But the initial virus definition update took between 5 and 15 minutes, and the full installation occupied about 110 MB of disk space. Rubenking noted that the beta version sets Windows Update into fully automatic mode, although it can be turned off again through Windows Control Panel. Some full scans took more than an hour on infected systems; a scan on a clean system took 35 minutes. An on-demand scan test Rubenking conducted in June 2009 with the beta version found 89 percent of all malware samples: 30 percent of the commercial keyloggers, 67 percent of rootkits, but only half of the scareware samples. 
The product's real-time protection found 83 percent of all malware and blocked the majority of it: 40 percent of the commercial keyloggers and 78 percent of the rootkits were found. On 7 January 2010, Microsoft Security Essentials won the Best Free Software award from PC Advisor. In December of the same year, it secured the Bronze award from AV-Comparatives for proactive detection of 55 percent of new or unknown malware, the Silver award for low false-positives (six occurrences) and the Bronze award for overall performance. In October 2009, AV-TEST conducted a series of trials on the final build of the product in which it detected and caught 98.44 percent of 545,034 computer viruses, computer worms and software Trojan horses as well as 90.95 percent of 14,222 spyware and adware samples. It also detected and eliminated all 25 tested rootkits, generating no false-positives. Between June 2010 and January 2013, AV-TEST tested Microsoft Security Essentials 14 times; in 11 out of 14 cases, MSE secured AV-TEST certification by outperforming AV industry average ratings. Microsoft Security Essentials 2.0 was tested and certified in March 2011. The product achieved a protection score of 2.5 out of 6, a repair score of 3.5 out of 6 and a usability score of 5.5 out of 6. Report details show that although version 2.0 was able to find all malware samples of the WildList (widespread malware), it was not able to stop all Internet-based attacks because it lacks personal firewall and anti-spam capabilities. In an April 2012 test, version 2.1 achieved scores of 3.0, 5.5 and 5.0 for protection, repair and usability. Version 4.0 for Windows 7 SP1 (x64) was tested in June 2012 and achieved scores of 2.5, 5.5 and 5.5 for protection, repair and usability. In October 2012, the product lost its AV-TEST certification when Microsoft Security Essentials 4.1 achieved scores of 1.5, 3.5 and 5.5 for its protection, repair and usability. In AV-TEST's 2011 annual review, Microsoft Security Essentials came last in protection, seventh in repair and fifth in usability. In the 2012 review, it came last in protection and best in usability; however, having lost its certificate, it was not qualified for the usability award. In June 2013, MSE achieved the lowest possible protection score, zero. ## Market share On 29 September 2010, a year after its initial release, Microsoft announced that MSE had more than 30 million users. The Security Industry Market Share Analysis report of June 2011, published by OPSWAT, describes it as one of the most popular AV products in the world, with 10.66 percent of the global market and 15.68 percent of the North American market. The same report shows Microsoft as the number one AV vendor in North America with 17.07 percent market share, and the number four AV vendor worldwide. John Dunn of PCWorld, who analyzed the report, noted that the tendency to use free AV software is something new: "After all, free antivirus suites have been around for years but have tended to be seen as the poor relations to paid software." He named Microsoft Security Essentials as an influence on PC users to adopt free AV software. A September 2011 OPSWAT report found that MSE had further increased its market share to become the second most popular AV product in the world, and remained the most popular in North America. OPSWAT reported in March 2012 that the product had maintained its position, and that Microsoft's market share had improved by 2 percent worldwide and 3 percent in North America. 
Seth Rosenblatt of CNET News commented on how the product's share rose from 7.27 percent in 2010 to 10.08 percent in 2012, stating that "use of the lightweight security suite exploded last year". ## Impersonation by malware The popularity of Microsoft Security Essentials has led to the appearance of malware abusing its name. In February 2010, a rogue security package calling itself "Security Essentials 2010" appeared on the internet, carrying the Alureon virus. Designated TrojanDownloader:Win32/Fakeinit by Microsoft, it bears no visual resemblance to the Microsoft product. It reappeared in November 2010, this time calling itself "Security Essentials 2011". A more dangerous piece of rogue software appeared in August 2010. Designated Rogue:Win32/FakePAV or Unknown Win32/Trojan, it closely resembles Microsoft Security Essentials and uses sophisticated social engineering to deceive users and infect their systems, under the guise of five different fictional anti-malware products. It also terminates, and prevents the launch of, 156 different programs, including Registry Editor, Windows Command Prompt, Internet Explorer, Mozilla Firefox, Opera, Safari, and Google Chrome. ## See also - Comparison of antivirus software - Comparison of firewalls - Internet security - Microsoft Defender - Windows Security Center
58,885,745
Quine–Putnam indispensability argument
1,172,178,666
Argument in the philosophy of mathematics
[ "Philosophical arguments", "Philosophy of mathematics", "Willard Van Orman Quine" ]
The Quine–Putnam indispensability argument is an argument in the philosophy of mathematics for the existence of abstract mathematical objects such as numbers and sets, a position known as mathematical platonism. It was named after the philosophers Willard Quine and Hilary Putnam, and is one of the most important arguments in the philosophy of mathematics. Although elements of the indispensability argument may have originated with thinkers such as Gottlob Frege and Kurt Gödel, Quine's development of the argument was unique for introducing to it a number of his philosophical positions such as naturalism, confirmational holism, and the criterion of ontological commitment. Putnam gave Quine's argument its first detailed formulation in his 1971 book Philosophy of Logic. He later came to disagree with various aspects of Quine's thinking, however, and formulated his own indispensability argument based on the no miracles argument in the philosophy of science. A standard form of the argument in contemporary philosophy is credited to Mark Colyvan; whilst being influenced by both Quine and Putnam, it differs in important ways from their formulations. It is presented in the Stanford Encyclopedia of Philosophy: - We ought to have ontological commitment to all and only the entities that are indispensable to our best scientific theories. - Mathematical entities are indispensable to our best scientific theories. - Therefore, we ought to have ontological commitment to mathematical entities. Nominalists, philosophers who reject the existence of abstract objects, have argued against both premises of this argument. An influential argument by Hartry Field claims that mathematical entities are dispensable to science. This argument has been supported by attempts to demonstrate that scientific and mathematical theories can be reformulated to remove all references to mathematical entities. Other philosophers, including Penelope Maddy, Mary Leng, Elliott Sober, and Joseph Melia, have argued that we do not need to believe in all of the entities that are indispensable to science. The arguments of these writers inspired a new explanatory version of the argument, which Alan Baker and Mark Colyvan support, that argues mathematics is indispensable to specific scientific explanations as well as whole theories. ## Background In his 1973 paper "Mathematical Truth", Paul Benacerraf raised a problem for the philosophy of mathematics. According to Benacerraf, mathematical sentences such as "two is a prime number" seem to imply the existence of mathematical objects. He supported this claim with the idea that mathematics should not have its own special semantics, or in other words, the meaning of mathematical sentences should follow the same rules as non-mathematical sentences. For example, according to this reasoning, if the sentence "Mars is a planet" implies the existence of the planet Mars, then the sentence "two is a prime number" should also imply the existence of the number two. But according to Benacerraf, if mathematical objects existed, they would be unknowable to us. This is because mathematical objects, if they exist, are abstract objects; objects that cannot cause things to happen and that have no spatio-temporal location. Benacerraf argued, on the basis of the causal theory of knowledge, that we would not be able to know about such objects because they cannot come into causal contact with us. 
This is called Benacerraf's epistemological problem because it concerns the epistemology of mathematics, that is, how we come to know what we do about mathematics. The philosophy of mathematics is split into two main strands; platonism and nominalism. Platonism holds that there exist abstract mathematical objects such as numbers and sets whilst nominalism denies their existence. Each of these views faces issues due to the problem raised by Benacerraf. Because nominalism rejects the existence of mathematical objects, it faces no epistemological problem but it does face problems concerning the idea that mathematics should not have its own special semantics. Platonism does not face problems concerning the semantic half of the dilemma but it has difficulty explaining how we can have any knowledge about mathematical objects. The indispensability argument aims to overcome the epistemological problem posed against platonism by providing a justification for belief in abstract mathematical objects. It is part of a broad class of indispensability arguments most commonly applied in the philosophy of mathematics, but which also includes arguments in the philosophy of language and ethics. In the most general sense, indispensability arguments aim to support their conclusion based on the claim that the truth of the conclusion is indispensable or necessary for a certain purpose. When applied in the field of ontology—the study of what exists—they exemplify a Quinean strategy for establishing the existence of controversial entities that cannot be directly investigated. According to this strategy, the indispensability of these entities for formulating a theory of other less controversial entities counts as evidence for their existence. In the case of philosophy of mathematics, the indispensability of mathematical entities for formulating scientific theories is taken as evidence for the existence of those mathematical entities. ## Overview of the argument Mark Colyvan presents the argument in the Stanford Encyclopedia of Philosophy in the following form: - We ought to have ontological commitment to all and only the entities that are indispensable to our best scientific theories. - Mathematical entities are indispensable to our best scientific theories. - Therefore, we ought to have ontological commitment to mathematical entities. Here, an ontological commitment to an entity is a commitment to believing that that entity exists. The first premise is based on two fundamental assumptions; naturalism and confirmational holism. According to naturalism, we should look to our best scientific theories to determine what we have best reason to believe exists. Quine summarized naturalism as "the recognition that it is within science itself, and not in some prior philosophy, that reality is to be identified and described". Confirmational holism is the view that scientific theories cannot be confirmed in isolation and must be confirmed as wholes. Therefore, according to confirmational holism, if we should believe in science, then we should believe in all of science, including any of the mathematics that is assumed by our best scientific theories. The argument is mainly aimed at nominalists that are scientific realists as it attempts to justify belief in mathematical entities in a manner similar to the justification for belief in theoretical entities such as electrons or quarks; Quine held that such nominalists have a "double standard" with regards to ontology. 
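For readers who want the logical skeleton spelled out, the argument can be rendered schematically as a single application of modus ponens over the "all" direction of the first premise; the predicate letters below are illustrative shorthand introduced here, not notation used by Quine, Putnam, or Colyvan.

```latex
% Schematic rendering of the Colyvan formulation (illustrative notation only).
% P1 uses the "all" direction of "all and only"; the "only" direction serves to
% withhold commitment from entities that are dispensable to our best theories.
\begin{align*}
\text{P1:}\quad & \forall x\,\bigl(\mathrm{Indispensable}(x) \rightarrow \mathrm{OughtCommit}(x)\bigr) \\
\text{P2:}\quad & \mathrm{Indispensable}(m) \quad \text{for any mathematical entity } m \\
\text{C:}\quad  & \mathrm{OughtCommit}(m)
\end{align*}
```

Written this way, the two standard lines of nominalist resistance noted earlier correspond to denying P2 (Field's claim that mathematics is dispensable to science) or P1 (the arguments of Maddy, Sober, and Melia that we need not believe in everything indispensable to science).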
The indispensability argument differs from other arguments for platonism because it only argues for belief in the parts of mathematics that are indispensable to science. It does not necessarily justify belief in the most abstract parts of set theory, which Quine called "mathematical recreation ... without ontological rights". Some philosophers infer from the argument that mathematical knowledge is a posteriori because it implies mathematical truths can only be established via the empirical confirmation of scientific theories to which they are indispensable. This also indicates mathematical truths are contingent since empirically known truths are generally contingent. Such a position is controversial because it contradicts the traditional view of mathematical knowledge as a priori knowledge of necessary truths. Whilst Quine's original argument is an argument for platonism, indispensability arguments can also be constructed to argue for the weaker claim of sentence realism—the claim that mathematical theory is objectively true. This is a weaker claim because it does not necessarily imply there are abstract mathematical objects. ## Major concepts ### Indispensability The second premise of the indispensability argument states mathematical objects are indispensable to our best scientific theories. In this context, indispensability is not the same as ineliminability because any entity can be eliminated from a theoretical system given appropriate adjustments to the other parts of the system. Therefore, dispensability requires that an entity be eliminable without sacrificing the attractiveness of the theory. The attractiveness of the theory can be evaluated in terms of theoretical virtues such as explanatory power, empirical adequacy and simplicity. Furthermore, if an entity is dispensable to a theory, an equivalent theory can be formulated without it. This is the case, for example, if each sentence in one theory is a paraphrase of a sentence in another or if the two theories predict the same empirical observations. According to the Stanford Encyclopedia of Philosophy, one of the most influential arguments against the indispensability argument comes from Hartry Field. It rejects the claim that mathematical objects are indispensable to science; Field has supported this argument by reformulating or "nominalizing" scientific theories so they do not refer to mathematical objects. As part of this project, Field has offered a reformulation of Newtonian physics in terms of the relationships between space-time points. Instead of referring to numerical distances, Field's reformulation uses relationships such as "between" and "congruent" to recover the theory without implying the existence of numbers. John Burgess and Mark Balaguer have taken steps to extend this nominalizing project to areas of modern physics, including quantum mechanics. Philosophers such as David Malament and Otávio Bueno dispute whether such reformulations are successful or even possible, particularly in the case of quantum mechanics. Field's alternative to platonism is mathematical fictionalism, according to which mathematical theories are false because they make claims about abstract mathematical objects even though abstract objects do not exist. As part of his argument against the indispensability argument, Field has tried to explain how it is possible for false mathematical statements to be used by science without making scientific predictions false. His argument is based on the idea that mathematics is conservative. 
A mathematical theory is conservative if, when combined with a scientific theory, it does not imply anything about the physical world that the scientific theory alone would not have already. This explains how it is possible for mathematics to be used by scientific theories without making the predictions of science false. In addition, Field has attempted to specify how exactly mathematics is useful in application. Field thinks mathematics is useful for science because mathematical language provides a useful shorthand for talking about complex physical systems. Another approach to denying that mathematical entities are indispensable to science is to reformulate mathematical theories themselves so they do not imply the existence of mathematical objects. Charles Chihara, Geoffrey Hellman, and Putnam have offered modal reformulations of mathematics that replace all references to mathematical objects with claims about possibilities. ### Naturalism The naturalism underlying the indispensability argument is a form of methodological naturalism, as opposed to metaphysical naturalism, that asserts the primacy of the scientific method for determining the truth. In other words, according to Quine's naturalism, our best scientific theories are the best guide to what exists. This form of naturalism rejects the idea that philosophy precedes and ultimately justifies belief in science, instead holding that science and philosophy are continuous with one another as part of a single, unified investigation into the world. As such, this form of naturalism precludes the idea of a prior philosophy that can overturn the ontological commitments of science. This is in contrast to alternative forms of naturalism, such as a form supported by David Armstrong that holds a principle called the Eleatic principle. According to this principle there are only causal entities and no non-causal entities. Quine's naturalism claims such a principle cannot be used to overturn our best scientific theories' ontological commitment to mathematical entities because philosophical principles cannot overrule science. Quine held his naturalism as a fundamental assumption but later philosophers have provided arguments to support it. The most common arguments in support of Quinean naturalism are track-record arguments. These are arguments that appeal to science's successful track record compared to philosophy and other disciplines. David Lewis famously made such an argument in a passage from his 1991 book Parts of Classes, deriding the track record of philosophy compared to mathematics and arguing that the idea of philosophy overriding science is absurd. Critics of the track record argument have argued that it goes too far, discrediting philosophical arguments and methods entirely, and contest the idea that philosophy can be uniformly judged to have had a bad track record. Quine's naturalism has also been criticized by Penelope Maddy for contradicting mathematical practice. According to the indispensability argument, mathematics is subordinated to the natural sciences in the sense that its legitimacy depends on them. But Maddy argues mathematicians do not seem to believe their practice is restricted in any way by the activity of the natural sciences. For example, mathematicians' arguments over the axioms of Zermelo–Fraenkel set theory do not appeal to their applications to the natural sciences. 
Similarly, Charles Parsons has argued that mathematical truths seem immediately obvious in a way that suggests they do not depend on the results of our best theories. ### Confirmational holism Confirmational holism is the view that scientific theories and hypotheses cannot be confirmed in isolation and must be confirmed together as part of a larger cluster of theories. An example of this idea provided by Michael Resnik is the hypothesis that an observer will see oil and water separate out if they are added together because they do not mix. This hypothesis cannot be confirmed in isolation because it relies on assumptions such as the absence of any chemical that will interfere with their separation and that the eyes of the observer are functioning well enough to observe the separation. Because mathematical theories are likewise assumed by scientific theories, confirmational holism implies the empirical confirmations of scientific theories also support these mathematical theories. According to a counterargument by Maddy, the theses of naturalism and confirmational holism that make up the first premise of the indispensability argument are in tension with one another. Maddy said naturalism tells us that we should respect the methods used by scientists as the best method for uncovering the truth, but scientists do not seem to act as though we should believe in all of the entities that are indispensable to science. To illustrate this point, Maddy uses the example of atomic theory; she said that despite atoms being indispensable to scientists' best theories by 1860, their reality was not universally accepted until 1913, when they were put to a direct experimental test. Maddy also appeals to the fact that scientists use mathematical idealizations, such as assuming bodies of water to be infinitely deep without regard for the truth of such applications of mathematics. According to Maddy, this indicates that scientists do not view the indispensable use of mathematics for science as justification for the belief in mathematics or mathematical entities. Overall, Maddy said we should side with naturalism and reject confirmational holism, meaning we do not need to believe in all of the entities that are indispensable to science. Another counterargument due to Elliott Sober claims that mathematical theories are not tested in the same way as scientific theories. Whilst scientific theories compete with alternatives to find which theory has the most empirical support, there are no alternatives for mathematical theory to compete with because all scientific theories share the same mathematical core. As a result, according to Sober, mathematical theories do not share the empirical support of our best scientific theories so we should reject confirmational holism. Since these counterarguments have been raised, a number of philosophers—including Resnik, Alan Baker, Patrick Dieveney, David Liggins, Jacob Busch, and Andrea Sereni—have argued that confirmational holism can be eliminated from the argument. For example, Resnik has offered a pragmatic indispensability argument that "claims that the justification for doing science ... also justifies our accepting as true such mathematics as science uses". 
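Before the criterion itself is described in the next paragraph, a purely illustrative example of the kind of regimentation it relies on may help; the sentence and the predicate symbols are ours, not Quine's.

```latex
% Illustrative regimentation (our example, not Quine's): the ordinary-language
% sentence "there is a prime number greater than one hundred" becomes
\exists x\,\bigl(\mathrm{Prime}(x) \wedge x > 100\bigr)
% A theory that affirms this regimented sentence quantifies over numbers, so on
% the criterion described below it is committed to at least one such object.
```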
Quine believed that we should have ontological commitment to all the entities to which our best scientific theories are themselves committed. According to Quine's "criterion of ontological commitment", the commitments of a theory can be found by translating or "regimenting" the theory from ordinary language into first-order logic. This criterion says that the ontological commitments of the theory are all of the objects over which the regimented theory quantifies; the existential quantifier for Quine was the natural equivalent of the ordinary language term "there is", which he believed obviously carries ontological commitment. Quine thought it is important to translate our best scientific theories into first-order logic because ordinary language is ambiguous, whereas logic can make the commitments of a theory more precise. Translating theories to first-order logic also has advantages over translating them to higher-order logics such as second-order logic. Whilst second-order logic has greater expressive power than first-order logic, it lacks some of the technical strengths of first-order logic, such as completeness and compactness. Second-order logic also allows quantification over properties like "redness", but whether we have ontological commitment to properties is controversial. According to Quine, such quantification is simply ungrammatical. Jody Azzouni has objected to Quine's criterion of ontological commitment, saying that the existential quantifier in first-order logic need not be interpreted as always carrying ontological commitment. According to Azzouni, the ordinary language equivalent of existential quantification "there is" is often used in sentences without implying ontological commitment. In particular, Azzouni points to the use of "there is" when referring to fictional objects in sentences such as "there are fictional detectives who are admired by some real detectives". According to Azzouni, for us to have ontological commitment to an entity, we must have the right level of epistemic access to it. This means, for example, that it must overcome some epistemic burdens for us to be able to postulate it. But according to Azzouni, mathematical entities are "mere posits" that can be postulated by anyone at any time by "simply writing down a set of axioms", so we do not need to treat them as real. More modern presentations of the argument do not necessarily accept Quine's criterion of ontological commitment and may allow for ontological commitments to be directly determined from ordinary language. ### Mathematical explanation In his counterargument, Joseph Melia argues that the role of mathematics in science is not genuinely explanatory and is solely used to "make more things sayable about concrete objects". He appeals to a practice he calls weaseling, which occurs when a person makes a statement and then later withdraws something implied by that statement. An example of weaseling is the statement: "Everyone who came to the seminar had a handout. But the person who came in late didn't get one." Whilst this statement can be interpreted as being self-contradictory, it is more charitable to interpret it as coherently making the claim: "Except for the person who came in late, everyone who came to the seminar had a handout." Melia said a similar situation occurs in scientists' use of statements that imply the existence of mathematical objects. According to Melia, whilst scientists use statements that imply the existence of mathematics in their theories, "almost all scientists ... 
deny that there are such things as mathematical objects". As in the seminar-handout example, Melia said it is most charitable to interpret scientists not as contradicting themselves, but rather as weaseling away their commitment to mathematical objects. According to Melia, because this weaseling is not a genuinely explanatory use of mathematical language, it is acceptable to not believe in the mathematical objects that scientists weasel away.

Inspired by Maddy's and Sober's arguments against confirmational holism, as well as Melia's argument that we can suspend belief in mathematics if it does not play a genuinely explanatory role in science, Colyvan and Baker have defended an explanatory version of the argument. This version of the argument attempts to remove the reliance on confirmational holism by replacing it with an inference to the best explanation. It states we are justified in believing in mathematical objects because they appear in our best scientific explanations, not because they inherit the empirical support of our best theories. It is presented by the Internet Encyclopedia of Philosophy in the following form:

- There are genuinely mathematical explanations of empirical phenomena.
- We ought to be committed to the theoretical posits in such explanations.
- Therefore, we ought to be committed to the entities postulated by the mathematics in question.

An example of mathematics' explanatory indispensability presented by Baker is the periodic cicada, a type of insect that has life cycles of 13 or 17 years. It is hypothesized that this is an evolutionary advantage because 13 and 17 are prime numbers. Because prime numbers have no non-trivial factors, a prime-length life cycle coincides with the shorter cycles of predators as rarely as possible, making it less likely that predators can synchronize with the cicadas' life cycles. Baker said that this is an explanation in which mathematics, specifically number theory, plays a key role in explaining an empirical phenomenon. Other important examples are explanations of the hexagonal structure of bee honeycombs, the existence of antipodes on the Earth's surface that have identical temperature and pressure, the connection between Minkowski space and Lorentz contraction, and the impossibility of crossing each of the seven bridges of Königsberg exactly once in a single walk across the city. The main response to this form of the argument, which philosophers such as Melia, Chris Daly, Simon Langford, and Juha Saatsi adopted, is to deny there are genuinely mathematical explanations of empirical phenomena, instead framing the role of mathematics as representational or indexical.

## Historical development

### Precursors and influences on Quine

The argument is historically associated with Willard Quine and Hilary Putnam, but it can be traced to earlier thinkers such as Gottlob Frege and Kurt Gödel. In his arguments against mathematical formalism (the view that mathematics is akin to a game like chess, with rules about how mathematical symbols such as "2" can be manipulated), Frege said in 1903 that "it is applicability alone which elevates arithmetic from a game to the rank of a science". Gödel, concerned about the axioms of set theory, said in a 1947 paper that if a new axiom were to have enough verifiable consequences, it "would have to be accepted at least in the same sense as any well-established physical theory".
Frege's and Gödel's arguments differ from the later Quinean indispensability argument because they lack features such as naturalism and subordination of practice, leading some philosophers, including Pieranna Garavaso, to say that they are not genuine examples of the indispensability argument. Whilst developing his philosophical view of confirmational holism, Quine was influenced by Pierre Duhem. At the beginning of the twentieth century, Duhem defended the law of inertia from critics who said that it is devoid of empirical content and unfalsifiable. These critics based this claim on the fact that the law does not make any observable predictions without positing some observational frame of reference and that falsifying instances can always be avoided by changing the choice of reference frame. Duhem responded by saying that the law produces predictions when paired with auxiliary hypotheses fixing the frame of reference and is therefore no different from any other physical theory. Duhem said that although individual hypotheses may make no observable predictions alone, they can be confirmed as parts of systems of hypotheses. Quine extended this idea to mathematical hypotheses, claiming that although mathematical hypotheses hold no empirical content on their own, they can share in the empirical confirmations of the systems of hypotheses in which they are contained. This thesis later came to be known as the Duhem–Quine thesis. Quine described his naturalism as the "abandonment of the goal of a first philosophy. It sees natural science as an inquiry into reality, fallible and corrigible but not answerable to any supra-scientific tribunal, and not in need of any justification beyond observation and the hypothetico-deductive method." The term "first philosophy" is used in reference to Descartes' Meditations on First Philosophy, in which Descartes used his method of doubt in an attempt to secure the foundations of science. Quine said that Descartes' attempts to provide the foundations for science had failed and that the project of finding a foundational justification for science should be rejected because he believed philosophy could never provide a method of justification more convincing than the scientific method. Quine was also influenced by the logical positivists, such as his teacher Rudolf Carnap; his naturalism was formulated in response to many of their ideas. For the logical positivists, all justified beliefs were reducible to sense data, including our knowledge of ordinary objects such as trees. Quine criticized sense data as self-defeating, saying that we must believe in ordinary objects to organize our experiences of the world. He also said that because science is our best theory of how sense-experience gives us beliefs about ordinary objects, we should believe in it as well. Whilst the logical positivists said that individual claims must be supported by sense data, Quine's confirmational holism means scientific theory is inherently tied up with mathematical theory and so evidence for scientific theories can justify belief in mathematical objects despite them not being directly perceived. ### Quine and Putnam Whilst he eventually became a platonist due to his formulation of the indispensability argument, Quine was sympathetic to nominalism from the early stages of his career. In a 1946 lecture, he said: "I will put my cards on the table now and avow my prejudices: I should like to be able to accept nominalism." 
In 1947, he released a paper with Nelson Goodman titled "Steps toward a Constructive Nominalism" as part of a joint project to "set up a nominalistic language in which all of natural science can be expressed". In a letter to Joseph Henry Woodger the following year, however, Quine said that he was becoming more convinced "the assumption of abstract entities and the assumptions of the external world are assumptions of the same sort". He subsequently released the 1948 paper "On What There Is", in which he said that "[t]he analogy between the myth of mathematics and the myth of physics is ... strikingly close", marking a shift towards his eventual acceptance of a "reluctant platonism". Throughout the 1950s, Quine regularly mentioned platonism, nominalism, and constructivism as plausible views, and he had not yet reached a definitive conclusion about which was correct. It is unclear exactly when Quine accepted platonism; in 1953, he distanced himself from the claims of nominalism in his 1947 paper with Goodman, but by 1956, Goodman was still describing Quine's "defection" from nominalism as "still somewhat tentative". According to Lieven Decock, Quine had accepted the need for abstract mathematical entities by the publication of his 1960 book Word and Object, in which he wrote "a thoroughgoing nominalist doctrine is too much to live up to". However, whilst Quine suggested versions of the indispensability argument in a number of papers, he never gave it a detailed formulation.

Putnam gave the argument its first explicit presentation in his 1971 book Philosophy of Logic, in which he attributed it to Quine. He stated the argument as "quantification over mathematical entities is indispensable for science, both formal and physical; therefore we should accept such quantification; but this commits us to accepting the existence of the mathematical entities in question." He also wrote that Quine had "for years stressed both the indispensability of quantification over mathematical entities and the intellectual dishonesty of denying the existence of what one daily presupposes".

Putnam's endorsement of Quine's version of the argument is disputed. The Internet Encyclopedia of Philosophy states: "In his early work, Hilary Putnam accepted Quine's version of the indispensability argument." Liggins also states that the argument has been attributed to Putnam by many philosophers of mathematics. Liggins and Bueno, however, said that Putnam never endorsed the argument and only presented it as an argument from Quine. Putnam has said he differed with Quine in his attitude to the argument from at least 1975. Features of the argument that Putnam came to disagree with include its reliance on a single, regimented, best theory.

In 1975, Putnam formulated his own indispensability argument based on the no miracles argument in the philosophy of science, which argues that only scientific realism can explain the success of science without rendering it miraculous. He wrote that year: "I believe that the positive argument for realism [in science] has an analogue in the case of mathematical realism. Here too, I believe, realism is the only philosophy that doesn't make the success of the science a miracle." The Internet Encyclopedia of Philosophy terms this version of the argument "Putnam's success argument" and presents it in the following form:

- Mathematics succeeds as the language of science.
- There must be a reason for the success of mathematics as the language of science.
- No positions other than realism in mathematics provide a reason.
- Therefore, realism in mathematics must be correct.

According to the Internet Encyclopedia of Philosophy, the first and second premises of the argument have been seen as uncontroversial, so discussion of this argument has been focused on the third premise. Other positions that have attempted to provide a reason for the success of mathematics include Field's reformulations of science, which explain the usefulness of mathematics as a conservative shorthand. Putnam has criticized Field's reformulations for only applying to classical physics and for being unlikely to be extendable to future fundamental physics.

### Continued development of the argument

Chihara, in his 1973 nominalist book Ontology and the Vicious Circle Principle, was one of the earliest philosophers to attempt to reformulate mathematics in response to Quine's arguments. Field's Science Without Numbers followed in 1980 and dominated discussion about the indispensability argument throughout the 1980s and 1990s. With the introduction of arguments against the first premise of the argument, initially by Maddy in the 1990s and continued by Melia and others in the 2000s, Field's approach has come to be known as "Hard Road Nominalism" because of the demanding technical reconstructions of science that it requires. Approaches attacking the first premise, in contrast, have come to be known as "Easy Road Nominalism".

Colyvan's formulation in his 1998 paper "In Defence of Indispensability" and his 2001 book The Indispensability of Mathematics is often seen as the standard or "canonical" formulation of the argument within more recent philosophical work. Colyvan's version of the argument has been influential in debates in contemporary philosophy of mathematics. It differs in key ways from the arguments presented by Quine and Putnam. Quine's version of the argument relies on translating scientific theories from ordinary language into first-order logic to determine their ontological commitments, whereas the modern version allows ontological commitments to be directly determined from ordinary language. Putnam's arguments were for the objectivity of mathematics but not necessarily for mathematical objects. Putnam has explicitly distanced himself from this version of the argument, saying, "from my point of view, Colyvan's description of my argument(s) is far from right", and has contrasted his indispensability argument with "the fictitious 'Quine–Putnam indispensability argument'". Colyvan has said "the attribution to Quine and Putnam [is] an acknowledgement of intellectual debts rather than an indication that the argument, as presented, would be endorsed in every detail by either Quine or Putnam".

## Influence

According to James Franklin, the indispensability argument is widely considered to be the best argument for platonism in the philosophy of mathematics. The Stanford Encyclopedia of Philosophy identifies it as one of the major arguments in the debate between mathematical realism and mathematical anti-realism, and notes that some within the field see it as the only good argument for platonism. Quine's and Putnam's arguments have also been influential outside philosophy of mathematics, inspiring indispensability arguments in other areas of philosophy. For example, David Lewis, who was a student of Quine, used an indispensability argument to argue for modal realism in his 1986 book On the Plurality of Worlds.
According to his argument, quantification over possible worlds is indispensable to our best philosophical theories, so we should believe in their concrete existence. Other indispensability arguments in metaphysics are defended by philosophers such as David Armstrong, Graeme Forbes, and Alvin Plantinga, who have argued for the existence of states of affairs due to the indispensable theoretical role they play in our best philosophical theories of truthmakers, modality, and possible worlds. In the field of ethics, David Enoch has expanded the criterion of ontological commitment used in the Quine–Putnam indispensability argument to argue for moral realism. According to Enoch's "deliberative indispensability argument", indispensability to deliberation is just as ontologically committing as indispensability to science, and moral facts are indispensable to deliberation. Therefore, according to Enoch, we should believe in moral facts.
5,335,922
Blakeney Point
1,163,332,096
National nature reserve on the north coast of Norfolk, England
[ "Beaches of Norfolk", "Blakeney, Norfolk", "Coastal features of Norfolk", "Landforms of Norfolk", "National Trust properties in Norfolk", "National nature reserves in England", "Nature reserves in Norfolk", "North Norfolk", "Protected areas established in 1912", "Special Protection Areas in England", "Spits of England" ]
Blakeney Point (designated as Blakeney National Nature Reserve) is a national nature reserve situated near to the villages of Blakeney, Morston and Cley next the Sea on the north coast of Norfolk, England. Its main feature is a 6.4 km (4.0 mi) spit of shingle and sand dunes, but the reserve also includes salt marshes, tidal mudflats and reclaimed farmland. It has been managed by the National Trust since 1912, and lies within the North Norfolk Coast Site of Special Scientific Interest, which is additionally protected through Natura 2000, Special Protection Area (SPA), International Union for Conservation of Nature (IUCN) and Ramsar listings. The reserve is part of both an Area of Outstanding Natural Beauty (AONB), and a World Biosphere Reserve. The Point has been studied for more than a century, following pioneering ecological studies by botanist Francis Wall Oliver and a bird ringing programme initiated by ornithologist Emma Turner. The area has a long history of human occupation; ruins of a medieval monastery and "Blakeney Chapel" (probably a domestic dwelling) are buried in the marshes. The towns sheltered by the shingle spit were once important harbours, but land reclamation schemes starting in the 17th century resulted in the silting up of the river channels. The reserve is important for breeding birds, especially terns, and its location makes it a major site for migrating birds in autumn. Up to 500 seals may gather at the end of the spit, and its sand and shingle hold a number of specialised invertebrates and plants, including the edible samphire, or "sea asparagus". The many visitors who come to birdwatch, sail or for other outdoor recreations are important to the local economy, but the land-based activities jeopardize nesting birds and fragile habitats, especially the dunes. Some access restrictions on humans and dogs help to reduce the adverse effects, and trips to see the seals are usually undertaken by boat. The spit is a dynamic structure, gradually moving towards the coast and extending to the west. Land is lost to the sea as the spit rolls forward. The River Glaven can become blocked by the advancing shingle and cause flooding of Cley village, Cley Marshes nature reserve, and the environmentally important reclaimed grazing pastures, so the river has to be realigned every few decades. ## Description Blakeney Point, like most of the northern part of the marshes in this area, is part of the parish of Cley next the Sea. The main spit runs roughly west to east, and joins the mainland at Cley Beach before continuing onwards as a coastal ridge to Weybourne. It is approximately 6.4 km (4.0 mi) long, and is composed of a shingle bank which in places is 20 m (66 ft) in width and up to 10 m (33 ft) high. It has been estimated that there are 2.3 million m<sup>3</sup> (82 million ft<sup>3</sup>) of shingle in the spit, 97 per cent of which is derived from flint. The Point was formed by longshore drift and this movement continues westward; the spit lengthened by 132.1 m (433 ft) between 1886 and 1925. At the western end, the shingle curves south towards the mainland. This feature has developed several times over the years, giving the impression from the air of a series of hooks along the south side of the spit. Salt marshes have formed between the shingle curves and in front of the coasts sheltered by the spit, and sand dunes have accumulated at the Point's western end. Some of the shorter side ridges meet the main ridge at a steep angle due to the southward movement of the latter. 
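As a purely illustrative aside, the dimensions quoted above can be related with some simple arithmetic. The sketch below assumes the 1886–1925 lengthening was roughly steady, which the text does not claim, and the derived growth rate is an illustration rather than a figure from the cited surveys.

```python
# Illustrative arithmetic on the spit dimensions quoted above (not survey data).
METRES_TO_FEET = 3.28084

extension_m = 132.1              # westward lengthening measured between 1886 and 1925
years = 1925 - 1886              # 39 years between the two surveys

print(f"Extension: {extension_m * METRES_TO_FEET:.0f} ft")        # ~433 ft, matching the quoted figure
print(f"Average growth: {extension_m / years:.1f} m per year")    # ~3.4 m per year, if growth were steady
```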
There is an area of reclaimed farmland, known as Blakeney Freshes, to the west of Cley Beach Road. The Norfolk Coast Path, a long-distance footpath, cuts across the south-eastern corner of the reserve along the sea wall between the farmland and the salt marshes, and further west, at Holme-next-the-Sea, the trail joins Peddars Way. The tip of Blakeney Point can be reached by walking up the shingle spit from the car park at Cley Beach, or by boat from the quay at Morston. The boat trip gives good views of the seal colonies and avoids the long walk over a difficult surface. The National Trust has an information centre and tea room at the quay, and a visitor centre on the Point. The visitor centre was formerly a lifeboat station and is open in the summer months. Halfway House, or the Watch House, is a building 2.4 km (1.5 mi) from Cley Beach car park. Originally built in the 19th century as a look-out for smugglers, it was used in succession as a coast guard station, by the Girl Guides, and as a holiday let.

## History

### To 1912

Norfolk has a long history of human occupation dating back to the Palaeolithic, and has produced many significant archaeological finds. Both modern and Neanderthal people were present in the area between 100,000 and 10,000 years ago, before the last glaciation, and humans returned as the ice retreated northwards. The archaeological record is poor until about 20,000 years ago, partly because of the very cold conditions that existed then, but also because the coastline was much further north than at present. As the ice retreated during the Mesolithic (10,000–5,000 BCE), the sea level rose, filling what is now the North Sea. This brought the Norfolk coastline much closer to its present line, so that many ancient sites are now under the sea in an area now known as Doggerland. Early Mesolithic flint tools with characteristic long blades, up to 15 cm (5.9 in) in length, found on the present-day coast at Titchwell Marsh date from a time when the site was 60–70 km (37–43 mi) from the sea. Other flint tools have been found dating from the Upper Paleolithic (50,000–10,000 BCE) to the Neolithic (5,000–2,500 BCE).

An "eye" is an area of higher ground in the marshes, dry enough to support buildings. Blakeney's former Carmelite friary, founded in 1296 and dissolved in 1538, was built in such a location, and several fragments of plain roof tile and pantiles dating back to the 13th century have been found near the site of its ruins. Originally on the south side of the Glaven, Blakeney Eye had a ditched enclosure during the 11th and 12th centuries, and a building known as "Blakeney Chapel", which was occupied from the 14th century to around 1600, and again in the late 17th century. Despite its name, it is unlikely that it had a religious function. Nearly a third of the mostly 14th- to 16th-century pottery found within the larger and earlier of the two rooms was imported from the continent, suggesting significant international trade at this time.

The spit sheltered the Glaven ports, Blakeney, Cley next the Sea and Wiveton, which were important medieval harbours. Blakeney sent ships to help Edward I's war efforts in 1301, and between the 14th and 16th centuries it was the only Norfolk port between King's Lynn and Great Yarmouth to have customs officials. Blakeney Church has a second tower at its east end, an unusual feature in a rural parish church.
It has been suggested that it acted as a beacon for mariners, perhaps by aligning it with the taller west tower to guide ships into the navigable channel between the inlet's sandbanks; that this was not always successful is demonstrated by a number of wrecks in the haven, including a carvel-built wooden ship. Land reclamation schemes, especially those by Henry Calthorpe in 1640 just to the west of Cley, led to the silting up of the Glaven shipping channel and relocation of Cley's wharf. Further enclosure in the mid-1820s aggravated the problem, and also allowed the shingle ridge at the beach to block the former tidal channel to the Salthouse marshes to the east of Cley. In an attempt to halt the decline, Thomas Telford was consulted in 1822, but his recommendations for reducing the silting were not implemented, and by 1840 almost all of Cley's trade had been lost. The population stagnated, and the value of all property decreased sharply. Blakeney's shipping trade benefited from the silting up of its nearby rival, and in 1817 the channel to the Haven was deepened to improve access. Packet ships ran to Hull and London from 1840, but this trade declined as ships became too large for the harbour. ### National Trust era In the decades preceding World War I, this stretch of coast became famous for its wildfowling; locals were looking for food, but some more affluent visitors hunted to collect rare birds; Norfolk's first barred warbler was shot on the point in 1884. In 1901, the Blakeney and Cley Wild Bird Protection Society created a bird sanctuary and appointed as its "watcher", Bob Pinchen, the first of only six men, up to 2012, to hold that post. In 1910, the owner of the Point, Augustus Cholmondeley Gough-Calthorpe, 6th Baron Calthorpe, leased the land to University College London (UCL), who also purchased the Old Lifeboat House at the end of the spit. When the baron died later that year, his heirs put Blakeney Point up for sale, raising the possibility of development. In 1912, a public appeal initiated by Charles Rothschild and organised by UCL Professor Francis Wall Oliver and Dr Sidney Long enabled the purchase of Blakeney Point from the Calthorpe estate, and the land was then donated to the National Trust. UCL established a research centre at the Old Lifeboat House in 1913, where Oliver and his college pioneered the scientific study of Blakeney Point. The building is still used by students, and also acts as an information centre. Despite formal protection, the tern colony was not fenced off until the 1960s. In 1930, the Point's first "watcher", Bob Pinchen, retired and was replaced by Billy Eales, who had assisted Pinchen the previous summer to "learn the job". His son, Ted Eales, succeeded him as warden when he died in early 1939. Ted Eales went on to work as a wildlife cameraman for Anglia Television during the winter and served as the Point's warden until retiring in March 1980. Subsequent wardens have included Joe Reed and wildlife presenter Ajay Tegala. Bob Pinchen, Ted Eales and Ajay Tegala have all written books about their experiences on Blakeney Point as watcher, warden and ranger respectively. The Point was designated as a Site of Special Scientific Interest (SSSI) in 1954, along with the adjacent Cley Marshes reserve, and subsumed into the newly created 7,700-hectare (19,000-acre) North Norfolk Coast SSSI in 1986. 
The larger area is now additionally protected through Natura 2000, Special Protection Area (SPA) and Ramsar listings, IUCN category IV (habitat/species management area) and is part of the Norfolk Coast Area of Outstanding Natural Beauty. The Point became a National Nature Reserve (NNR) in 1994, and the coast from Holkham NNR to Salthouse, together with Scolt Head Island, became a Biosphere Reserve in 1976. ## Fauna and flora ### Birds Blakeney Point has been designated as one of the most important sites in Europe for nesting terns by the government's Joint Nature Conservation Committee. In the early 1900s, the small colonies of common and little terns were badly affected by egg-taking, disturbance and shooting, but as protection improved the common terns population rose to 2,000 pairs by mid-century, although it subsequently declined to no more than 165 pairs by 2000, perhaps due to predation. Sandwich terns were a scarce breeder until the 1970s, but there were 4,000 pairs by 1992. Blakeney is the most important site in Britain for both Sandwich and little terns, the roughly 200 pairs of the latter species amounting to eight per cent of the British population. The 2,000 pairs of black-headed gulls sharing the breeding area with the terns are believed to protect the colony as a whole from predators like red foxes. Other nesting birds include about 20 pairs of Arctic terns and a few Mediterranean gulls in the tern colony, ringed plovers and oystercatchers on the shingle and common redshanks on the salt marsh. The waders' breeding success has been compromised by human disturbance and predation by gulls, weasels and stoats, with ringed plovers particularly affected, declining to 12 pairs in 2012 compared to 100 pairs twenty years previously. The pastures contain breeding northern lapwings, and species such as sedge and reed warblers and bearded tits are found in patches of common reed. The Point juts into the sea on a north-facing coast, which means that migrant birds may be found in spring and autumn, sometimes in huge numbers when the weather conditions force them towards land. Numbers are relatively low in spring, but autumn can produce large "falls", such as the hundreds of European robins on 1 October 1951 or more than 400 common redstarts, on 18 September 1995. The common birds are regularly accompanied by scarcer species like greenish warblers, great grey shrikes or Richard's pipits. Seabirds may be sighted passing the Point, and migrating waders feed on the marshes at this time of year. Vagrant rarities have turned up when the weather is appropriate, including a Fea's or Zino's petrel in 1997, a trumpeter finch in 2008, and an alder flycatcher in 2010. Ornithologist and pioneering bird photographer Emma Turner started ringing common terns on the Point in 1909, and the use of this technique for migration studies has continued since. A notable recovery was a Sandwich tern killed for food in Angola, and a Radde's warbler trapped for ringing in 1961 was only the second British record of this species at that time. In the winter, the marshes hold golden plovers and wildfowl including common shelduck, Eurasian wigeon, brent geese and common teal, while common scoters, common eiders, common goldeneyes and red-breasted mergansers swim offshore. ### Other animals Blakeney Point has a mixed colony of about 500 harbour and grey seals. The harbour seals have their young between June and August, and the pups, which can swim almost immediately, may be seen on the mud flats. 
Grey seals breed in winter, between November and January; their young cannot swim until they have lost their first white coat, so they are restricted to dry land for their first three or four weeks, and can be viewed on the beach during this period. Grey seals colonised a site in east Norfolk in 1993, and started breeding regularly at Blakeney in 2001. It is possible that they now outnumber harbour seals off the Norfolk coast. Seal-watching boat trips run from Blakeney and Morston harbours, giving good views without disturbing the seals. The corpses of 24 female or juvenile harbour seals were found in the Blakeney area between March 2009 and August 2010, each with spirally cut wounds consistent with the animal having been drawn through a ducted propeller. The rabbit population can grow to a level at which their grazing and burrowing adversely affects the fragile dune vegetation. When rabbit numbers are reduced by myxomatosis, the plants recover, although those that are toxic to rabbits, like ragwort, then become less common due to increased competition from the edible species. The rabbits may be killed by carnivores such as red foxes, weasels and stoats. Records of mammals that are rare in the NNR area include red deer swimming in the haven, a hedgehog and a beached Sowerby's beaked whale. An insect survey in September 2009 recorded 187 beetle species, including two new to Norfolk, the rove beetle Phytosus nigriventris and the fungus beetle Leiodes ciliaris, and two very rarely seen in the county, the sap beetle Nitidula carnaria and the clown beetle Gnathoncus nanus. There were also 24 types of spider, and the five ant species included the nationally rare Myrmica specioides. The rare millipede Thalassisobates littoralis, a specialist of coastal shingle habitats, was found here in 1972, and a red-veined darter appeared in 2012. Tens of thousands of migrant turnip sawflies were recorded for a few days in late summer 2006, along with red-eyed damselflies. The silver Y moth also appears in large numbers in some years. The many inhabitants of the tidal flats include lugworms, polychaete worms, sand hoppers and other amphipod crustaceans, and gastropod molluscs. These molluscs feed on the algae growing on the surface of the mud, and include the tiny Hydrobia, an important food for waders because of its abundance at densities of more than 130,000 m<sup>−2</sup>. Bivalve molluscs include the edible common cockle, although it is not harvested here. ### Plants Grasses such as sea couch grass and sea poa grass have an important function in the driest areas of the marshes, and on the coastal dunes, where marram grass, sand couch-grass, lyme-grass and grey hair-grass help to bind the sand. Sea holly, sand sedge, bird's-foot trefoil and pyramidal orchid are other specialists of this arid habitat. Some specialised mosses and lichens are found on the dunes, and help to consolidate the sand; a survey in September 2009 found 41 lichen species. The plant distribution is influenced by the dunes' age as well as their moisture content, the deposits becoming less alkaline as calcium carbonate from animal shells is leached out of the sand to be replaced by more acidic humus from plant decomposition products. Marram grass is particularly discouraged by the change in acidity. A similar pattern is seen with mosses and lichens, with the various areas of the dunes containing different species according to the acidity of the sand. 
At least four moss species have been identified as important in dune stabilization, since they help to consolidate the sand, add nutrients as they decompose, and pave the way for more exacting plant species. The moss and lichen flora of Blakeney Point differs markedly from that of lime-rich dunes on the western coasts of the UK. Non-native tree lupins have become established near the Lifeboat House, where they now grow wild. The shingle ridge attracts biting stonecrop, sea campion, yellow horned poppy, sea thrift, bird's foot trefoil and sea beet. In the damper areas, where the shingle adjoins salt marsh, rock sea lavender, matted sea lavender and scrubby sea-blite also thrive, although they are scarce in Britain away from the Norfolk coast. The saltmarsh contains European glasswort and common cord grass in the most exposed regions, with a succession of plants following on as the marsh becomes more established: first sea aster, then mainly sea lavender, with sea purslane in the creeks, and smaller areas of sea plantain and other common marsh plants. Six previously unknown diatom species were found in the waters around the point in 1952, along with six others not previously recorded in Britain. European glasswort is picked between May and September and sold locally as "samphire". It is a fleshy plant which when blanched or steamed has a taste which leads to its alternative name of "sea asparagus", and it is often eaten with fish. It can also be eaten raw when young. Glasswort is also a favourite food for the rabbits, which will venture onto the saltmarsh in search of this succulent plant. ## Recreation The 7.7 million day visitors and 5.5 million who made overnight stays on the Norfolk coast in 1999 are estimated to have spent £122 million, and secured the equivalent of 2,325 full-time jobs in that area. A 2005 survey at six North Norfolk coastal sites, including Blakeney, Cley and Morston found that 39 per cent of visitors gave birdwatching as the main purpose of their visit. The villages nearest to the Point, Blakeney and Cley, had the highest per capita spend per visitor of those surveyed, and Cley was one of the two sites with the highest proportion of pre-planned visits. The equivalent of 52 full-time jobs in the Cley and Blakeney area are estimated to result from the £2.45 million spent locally by the visiting public. In addition to birdwatching and boat trips to see the seals, sailing and walking are the other significant tourist activities in the area. The large number of visitors at coastal sites sometimes has negative effects. Wildlife may be disturbed, a frequent difficulty for species that breed in exposed areas such as ringed plovers and little terns, and also for wintering geese. During the breeding season, the main breeding areas for terns and seals are fenced off and signposted. Plants can be trampled, which is a particular problem in sensitive habitats such as sand dunes and vegetated shingle. A boardwalk made from recycled plastic crosses the large sand dunes near the end of the Point, which helps to reduce erosion. It was installed in 2009 at a cost of £35,000 to replace its much less durable wooden predecessor. Dogs are not allowed from April to mid-August because of the risk to ground-nesting birds, and must be on a lead or closely controlled at other times. The Norfolk Coast Partnership, a grouping of conservation and environmental bodies, divide the coast and its hinterland into three zones for tourism development purposes. 
Blakeney Point, along with Holme Dunes and Holkham dunes, is considered to be a sensitive habitat already suffering from visitor pressure, and is therefore designated as a red-zone area in which no development or parking improvements are recommended. The rest of the reserve is placed in the orange zone, for locations with fragile habitats but less tourism pressure.

## Coastal changes

The spit is a relatively young feature in geological terms, and in recent centuries it has been extending westwards and landwards through tidal and storm action. This growth is thought to have been enhanced by the reclamation of the salt marshes along this coast in recent centuries, which removed a natural barrier to the movement of shingle. The amount of shingle moved by a single storm can be "spectacular"; the spit has sometimes been breached, becoming an island for a time, and this may happen again. The northernmost part of Snitterley (now Blakeney) village was lost to the sea in the early Middle Ages, probably due to a storm.

In the last two hundred years, maps have been accurate enough for the distance from the Blakeney Chapel ruins to the sea to be measured. The 400 m (440 yd) in 1817 had become 320 m (350 yd) by 1835, 275 m (301 yd) in 1907, and 195 m (213 yd) by the end of the 20th century. The spit is moving towards the mainland at about 1 m (1.1 yd) per year, and several former raised islands or "eyes" have already disappeared, first covered by the advancing shingle, and then lost to the sea. The massive 1953 flood overran the main beach, and only the highest dune tops remained above water. Sand was washed into the salt marshes, and the extreme tip of the point was breached, but as with other purely natural parts of the coast, like Scolt Head Island, little lasting damage was done.

Landward movement of the shingle meant that the channel of the Glaven was becoming blocked increasingly often by 2004. This led to flooding of Cley village and the environmentally important Blakeney freshwater marshes. The Environment Agency considered several remedial options. It concluded that attempting to hold back the shingle or breaching the spit to create a new outlet for the Glaven would be expensive and probably ineffective, and that doing nothing would be environmentally damaging. The Agency decided to create a new route for the river to the south of its original line, and work to realign a 550 m (600 yd) stretch of river 200 m (220 yd) further south was completed in 2007 at a cost of about £1.5 million. The Glaven had previously been realigned from an earlier, more northerly, course in 1922. The ruins of Blakeney Chapel are now to the north of the river embankment, and essentially unprotected from coastal erosion, since the advancing shingle will no longer be swept away by the stream. The chapel will be buried by a ridge of shingle as the spit continues to move south, and then lost to the sea, perhaps within 20–30 years.
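The stated rate of landward movement can be cross-checked against the dated distance measurements above. This is a minimal sketch, assuming simple straight-line change between surveys and taking 2000 to stand for "the end of the 20th century"; neither assumption comes from the cited sources.

```python
# Distance from the Blakeney Chapel ruins to the sea, as quoted above (year, metres).
measurements = [(1817, 400), (1835, 320), (1907, 275), (2000, 195)]

# Rate of landward movement between successive measurements.
for (y0, d0), (y1, d1) in zip(measurements, measurements[1:]):
    print(f"{y0}-{y1}: {(d0 - d1) / (y1 - y0):.2f} m per year")

# Long-term average, consistent with the quoted rate of about 1 m per year.
(first_year, first_d), (last_year, last_d) = measurements[0], measurements[-1]
print(f"Overall: {(first_d - last_d) / (last_year - first_year):.2f} m per year")
```

The interval rates vary considerably, but the long-run average of roughly 1.1 m per year is in line with the figure given in the text.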
1,115,083
Chew Stoke
1,157,233,404
Village in Somerset, England
[ "Civil parishes in Somerset", "Villages in Bath and North East Somerset" ]
Chew Stoke is a small village and civil parish in the affluent Chew Valley, in Somerset, England, about 8 miles (13 km) south of Bristol and 10 miles north of Wells. It is at the northern edge of the Mendip Hills, a region designated by the United Kingdom as an Area of Outstanding Natural Beauty, and is within the Bristol/Bath green belt. The parish includes the hamlet of Breach Hill, which is approximately 2 miles (3.2 km) southwest of Chew Stoke itself. Chew Stoke has a long history, as shown by the number and range of its heritage-listed buildings. The village is at the northern end of Chew Valley Lake, which was created in the 1950s, close to a dam, pumping station, sailing club, and fishing lodge. A tributary of the River Chew, which rises in Strode, runs through the village. The population of 991 is served by one shop, one working public house, a primary school and a bowling club. Together with Chew Magna, it forms the ward of Chew Valley North in the unitary authority of Bath and North East Somerset. Chew Valley School and its associated leisure centre are less than a mile (1.6 km) from Chew Stoke. The village has some areas of light industry but is largely agricultural; many residents commute to nearby cities for employment. ## History ### Prehistory Archaeological excavations carried out between 1953 and 1955 by Philip Rahtz and Ernest Greenfield from the Ministry of Works found evidence of extensive human occupation of the area. Consecutive habitation, spanning thousands of years from the Upper Palaeolithic, Mesolithic, and Neolithic periods (Old, Middle, and New Stone Age), to the Bronze and Iron Ages had left numerous artefacts behind. Discoveries have included stone knives, flint blades, and the head of a mace, along with buildings and graves. ### Romano-Celtic temple Chew Stoke is the site of a Romano-Celtic double-octagonal temple, possibly dedicated to the god Mercury. The temple, on Pagans Hill, was excavated by Philip Rahtz between 1949 and 1951. It consisted of an inner wall, which formed the sanctuary, surrounded by an outer wall forming an ambulatory, or covered walkway 56.5 feet (17.2 m) across. It was first built in the late 3rd century but was twice rebuilt, finally collapsing in the 5th century. The positioning of the temple on what is now known as Pagans Hill may seem apt, but there is no evidence for any link between the existence of the temple and the naming of the road. ### Middle Ages During the Middle Ages, farming was the most important activity in the area, and farming, both arable and dairy, continues today. There were also orchards producing fruits such as apples, pears, and plums. Evidence exists of lime kilns, used in the production of mortar for the construction of local churches. In the Domesday Book of 1086, Chew Stoke was listed as Chiwestoche, and was recorded as belonging to Gilbert fitz Turold. He conspired with Robert Curthose, the Duke of Normandy, against King William Rufus, and subsequently all his lands were seized. The next recorded owner was Lord Beauchamp of Hache. He became "lord of the manor" when the earls of Gloucester, with hereditary rights to Chew Stoke, surrendered them to him. According to Stephen Robinson, the author of Somerset Place Names, the village was then known as Chew Millitus, suggesting that it may have had some military potential. The name "Stoke", from the old English stoc, meaning a stockade, may support that idea. The parish was part of the hundred of Chew. 
### Bilbie family of bell and clockmakers The Bilbie family of bell founders and clockmakers lived and worked in Chew Stoke for more than 200 years, from the late 17th century until the 19th century. They produced more than 1,350 church bells, which were hung in churches all over the West Country. Their oldest surviving bell, cast in 1698, is still giving good service in the local St Andrew's Church. The earliest Bilbie clocks date from 1724 and are highly prized. They are mostly longcase clocks, the cheapest with 30-hour movements in modest oak cases, but some have high quality eight-day movements with additional features, such as showing the high tide at Bristol docks. These latter clocks were fitted into quality cabinet maker cases and command high prices. ### Recent history In the 20th century, Chew Stoke expanded slightly with the influx of residents from the Chew Valley Lake area. These new residents were moved to Chew Stoke when the lake was created in the 1950s. In World War II, 42 children and three teachers, who had been evacuated from Avenmore school in London, were accommodated in the village. On 10 July 1968, torrential rainfall, with 175 millimetres (7 in) falling in 18 hours on Chew Stoke, double the area's average rainfall for the whole of July, led to widespread flooding in the Chew Valley, and water reached the first floor of many buildings. The damage in Chew Stoke was not as severe as in some of the surrounding villages, such as Pensford; however, fears that the Chew Valley Lake dam would be breached caused considerable anxiety. On 4 February 2001, Princess Anne opened the Rural Housing Trust development at Salway Close. Each year, over a weekend in September (usually the first), a "Harvest Home" is held with horse and pet shows, bands, a funfair, and other entertainments. The Harvest Home was cancelled in 1997 as a mark of respect following the death of Princess Diana in the previous week. The Radford's factory site, where refrigeration equipment was formerly manufactured, was identified as a brownfield site suitable for residential development in the 2002 Draft Local Plan of Bath and North East Somerset. That plan has generated controversy about balancing land use to meet residential, social, and employment needs. During November 2012 a series of floods affected many parts of Britain. On 22 November a man died after his car was washed down a flooded brook in Chew Stoke and trapped against a small bridge. ## Governance Chew Stoke has its own nine-member parish council with responsibility for local issues, including setting an annual precept (local rate) to cover the council's operating costs and producing annual accounts for public scrutiny. The parish council evaluates local planning applications and works with the local police, district council officers, and neighbourhood watch groups on matters of crime, security, and traffic. The council's role also includes initiating projects for the maintenance and repair of parish facilities, as well as consulting with the district council on the maintenance, repair, and improvement of highways, drainage, footpaths, public transport, and street cleaning. Conservation matters (including trees and listed buildings) and environmental issues are also the responsibility of the council. The village is part of the ward of Chew Valley North in the unitary authority of Bath and North East Somerset, which has the wider responsibility for providing services such as education, refuse collection, and tourism. 
The ward is currently represented by Councillor Malcolm Hannay, a member of the Conservative Party. It is also part of the North East Somerset parliamentary constituency, and was part of the South West England constituency of the European Parliament prior to Britain leaving the European Union in January 2020. The police service is provided by Avon and Somerset Constabulary, with two community support officers and one police officer covering the wider Chew Valley area. The Avon Fire and Rescue Service has a fire station at Chew Magna.

## Geography

The area of Chew Stoke is surrounded by arable land and dairy farms on the floor of the Chew Valley. It is located along the Strode Brook tributary of the River Chew, on the northwest side of the Chew Valley Lake. While much of the area has been cleared for farming, trees line the tributary and many of the roads. The village is built along the main thoroughfare, Bristol Road, which runs northeast to southwest. An older centre is located along Pilgrims Way, which loops onto Bristol Road and features an old stone packhorse bridge, now pedestrianised, and a 1950s Irish bridge, used as a ford in winter. The bridge is 7 feet 6 inches (2.29 m) wide and has 36-inch (910 mm) parapets. Houses line both of these roads, with residential cul-de-sacs and lanes extending from them.

Chew Stoke is approximately 8 miles (13 km) south of Bristol, 10 miles (16 km) north of Wells, 15 miles (24 km) west of Bath, 17 miles (27 km) east of Weston-super-Mare, and 9 miles (14 km) southwest of Keynsham. It is 1.3 miles (2.1 km) south of Chew Magna on the B3130 road that joins the A37 and A38. The A368 crosses the valley west of the lake. The "Chew Valley Explorer" bus route 672/674, running from Bristol Bus Station to Cheddar, provides public transport access. This service is operated by CT coaches and Eurotaxis and subsidised by Bath and North East Somerset Council. In 2002, a 1.9-mile (3.1 km) cycle route, the Chew Lake West Green Route, was opened around the western part of the lake from Chew Stoke. It forms part of the Padstow to Bristol West Country Way, National Cycle Network Route 3. It has all-weather surfacing, providing a smooth off-road facility for ramblers, mobility-challenged visitors, and cyclists of all abilities. Funding was provided by Bath and North East Somerset Council, with the support of Sustrans and the Chew Valley Recreational Trail Association. The minor roads around the lake are also frequently used by cyclists. Bristol Airport is approximately 10 miles (16 km) away, and the nearest train stations are Keynsham, Bath Spa, and Bristol Temple Meads.

## Demography

The population of Chew Stoke, according to the census of 1801, was 517. This number increased slowly during the 19th century to a maximum of 819 but fell to around 600 by the end of the century. The population remained fairly stable until World War II. During the latter half of the 20th century, the population of the village rose to 905 people. Data for 1801–1971 is available at Britain Through Time; data for 1971–2001 is available from BANES. The 2001 Census gives detailed information about the Chew Valley North ward, which includes both Chew Magna and Chew Stoke. The ward had 2,307 residents, living in 911 households, with an average age of 42.3 years. Of those, 77% of residents described their health as 'good', 21% of 16- to 74-year-olds had no work qualifications, and the area had an unemployment rate of 1.3%.
In the Index of Multiple Deprivation 2004, the ward was ranked at 26,243 out of 32,482 wards in England, where 1 was the most deprived and 32,482 the least deprived. A small number of light industrial/craft premises exist at "Fairseat Workshops", formerly the site of a dairy. However, they provide little employment, and many residents commute to jobs in nearby cities.

## Landmarks

### St Andrew's Church

St Andrew's Church, a Grade II\* listed building on the outskirts of Chew Stoke, was constructed in the 15th century and underwent extensive renovation in 1862. The inside of the church is decorated with 156 angels in wood and stone, and the church includes a tower with an unusual spirelet on the staircase turret. In the tower hang bells cast by the Bilbie family. The reconstructed Moreton Cross in the churchyard was moved there when Chew Valley Lake was flooded, and the base of the cross shaft, about 80 feet (24 m) southwest of the tower, is thought to date from the 14th century and is itself a Grade II\* listed building, as is the Webb monument in the churchyard. The churchyard gate, at the southeast entrance, bears a lamp provided by public subscription to commemorate Queen Victoria's Jubilee of 1897 and is a Grade II listed structure.

In the church are bronze plaques commemorating the eleven local people who died in World War I and the six who were killed in World War II. There is also a stained glass window showing a saint with a sword standing on a snake, and crossed flags commemorating those from World War II. A memorial plaque to the local Bilbie family of bell founders and clockmakers is also inside the church, and just inside the porch, on the left of the church door, is a stone figure holding an anchor, which was moved to the church from Walley Court with the flooding of the lake. There is an unconfirmed story that this was given to the Gilbert family, then living at the court, by Queen Elizabeth I.

### Rectory

The Rectory, at the end of Church Lane, opposite the church hall, is believed to have been built in 1529 by Sir John Barry, rector 1524–46. It has since undergone substantial renovations, including the addition of a clock tower for the Rev. W.P. Wait and further alterations c.1876 for Rev. J. Ellershaw. The clock tower has since been removed. The building has an ornate south front with carvings of shields bearing the coat of arms of the St Loe family, who were once chief landowners in the area, alone or impaled with the arms of Fitzpane, Ancell, de la Rivere, and Malet. It is Grade II\* listed.

### New rectory

The Reverend John Ellershaw built the new rectory in the 1870s. The last rector to occupy it was Lionel St Clair Waldy, from 1907 to 1945. It was then bought by Douglas Wills, who donated it and the rectory field to Winford Hospital as a convalescent home for 16 children. It was later used as a nurses' home before being sold for private use. It is now split into several residential units.

### Grade II listed buildings

As with many cities and towns in the United Kingdom, the age of a number of the buildings in Chew Stoke, including the church, school, and several houses, reflects the long history of the village. For example, the present school buildings were built in 1858 on the site of a former charity school founded in 1718. The architect was S.B. Gabriel of Bristol.
Additional classrooms were built in 1926, and further alterations and extensions were carried out in 1970. An obelisk on Breach Hill Lane, dating from the early-to-mid-19th century, is said to have been built as a waterworks marker. It has a square limestone plinth about 3 feet (1 m) high. The obelisk is about 32 feet (10 m) high with a pyramidal top and a small opening at the top on two sides.

The importance of farming is reflected in the age of many of the farmhouses. Manor Farm, on Scot Lane (not to be confused with at least two other Manor Farms in the locality), is thought to date from 1495 and, as such, is probably the oldest building in the village. As of 2007 it was occupied by Mr and Mrs Slater, and in 2002 it underwent a sympathetic extension that incorporated an old semi-derelict barn into the main house for use as a garage and workshop. Mr Slater, a chartered engineer, is interested in bringing the art of clock making back to the village. Rookery Farmhouse, in Breach Hill Lane, is dated at 1720, with later 18th century additions to either side of the central rear wing. An attached stable, 20 feet (6 m) northeast of the farmhouse, is also a Grade II listed building. School Farmhouse, in School Lane, dates from the late 17th century and has a studded oak door in the side of the house. Wallis Farmhouse, farther along School Lane, is dated at 1782. Yew Tree Farmhouse, one of the oldest buildings in the area, is a cruck-built farmhouse, of which there are very few in North Somerset. It was included in the dendrochronology project carried out by the Somerset Vernacular Building Research Group in 1996–1998, and the crucks gave a felling date of 1386; the house has been extensively altered and added to over later centuries. North Hill Farmhouse also has 15th century origins. Paganshill Farmhouse dates from the 17th century. Fairseat Farmhouse is from the 18th century and includes a plaque recording that John Wesley preached at the house on 10 September 1790. In August of that year, Fairseat Farmhouse was "registered among the records of this County as a House set apart for the worship of God and religious exercise for Protestant Dissenters." At that time the house belonged to Anna Maria Griffon. In the garden is a large evergreen oak (Ilex) which measured 98 feet (30 m) across until half of it broke away in a gale in 1976.

The Methodist Chapel was built in 1815/16 after religious services had been established at Fairseat Farm; it was rebuilt in the late 19th century with limestone walls and stone dressings, and a slate hipped roof with brick eaves stacks and crestings. In the hamlet of Stoke Villice, which is south of the main village, there is a 19th-century milestone inscribed "8 miles to Bristol" that also has listed status.

## Education

Chew Stoke Church of England Voluntary Aided Primary School serves the village itself and surrounding villages in the Chew Valley. It is a Church of England school linked with St Andrew's parish church. It has about 170 pupils between 4 and 11 years old. After the age of 11, most pupils attend Chew Valley School. The school was founded as a charity in 1718, making it one of the oldest schools in Somerset. Its original buildings were demolished in 1858 and replaced with new ones to designs by S.B. Gabriel that are now Grade II listed. The school bell was donated by the Bilbie family of bell founders based in the village.
Additional classrooms were built in 1926, and further alterations and extensions were carried out in 1970, 1995 and 2001. In July 2018, the school celebrated its 300th anniversary, making it one of the oldest state schools in England. A service led by the Bishop of Taunton, the Right Reverend Ruth Worsley, was held at St Andrew's Church and was followed by a tea party at the school and the planting of a time capsule.
40,131,108
Not My Life
1,173,750,471
2011 film by Robert Bilheimer
[ "2010s American films", "2010s English-language films", "2010s crime films", "2011 documentary films", "2011 films", "American crime films", "American documentary films", "Documentary films about child abuse", "Documentary films about organized crime", "Documentary films about pedophilia", "Documentary films about poverty", "Documentary films about prostitution", "Documentary films about slavery", "Documentary films about violence against women", "Films about child prostitution", "Films about human trafficking", "Films directed by Robert Bilheimer", "Forced prostitution", "Works about sex trafficking" ]
Not My Life is a 2011 American independent documentary film about human trafficking and contemporary slavery. The film was written, produced, and directed by Robert Bilheimer, who had been asked to make the film by Antonio Maria Costa, executive director of the United Nations Office on Drugs and Crime. Bilheimer planned Not My Life as the second installment in a trilogy, the first being A Closer Walk and the third being the unproduced Take Me Home. The title Not My Life came from a June 2009 interview with Molly Melching, founder of Tostan, who said that many people deny the reality of contemporary slavery because it is an uncomfortable truth, saying, "No, this is not my life." Filming of Not My Life took four years to complete, and documented human trafficking in 13 countries: Albania, Brazil, Cambodia, Egypt, Ghana, Guatemala, India, Italy, Nepal, Romania, Senegal, Uganda, and the United States. The first and last scenes of the film take place in Ghana, and show children who are forced to fish in Lake Volta for 14 hours a day. The film also depicts sex trafficking victims, some of whom are only five or six years old. Fifty people are interviewed in the film, including investigative journalist Paul Radu of Bucharest, Katherine Chon of the Polaris Project, and Iana Matei of Reaching Out Romania. Don Brewster of Agape International Missions says that all of the girls they have rescued from child sex tourism in Cambodia identify Americans as the clients who were the most abusive to them. The film was dedicated to Richard Young, its cinematographer and co-director, after he died in December 2010. It had its premiere the following month at the Lincoln Center for the Performing Arts in New York City. The narration was completely rerecorded in 2011, replacing Ashley Judd's voice with that of Glenn Close. The version of the film that was aired by CNN International as part of the CNN Freedom Project was shorter than the version shown at the premiere. In 2014, a re-edited version of the film was released. Not My Life addresses many forms of slavery, including the military use of children in Uganda, involuntary servitude in the United States, forced begging and garbage picking in India, sex trafficking in Europe and Southeast Asia, and other kinds of child abuse. The film also focuses on the people and organizations engaged in working against human trafficking. The film asserts that most victims of human trafficking are children. Actress Lucy Liu said that people who watch Not My Life "will be shocked to find [human trafficking] is happening in America." Lucy Popescu of CineVue criticized the film for focusing on the victims, arguing that the perpetrators of trafficking should have been dealt with more prominently. Not My Life was named Best World Documentary at the Harlem International Film Festival in September 2012. ## Themes Not My Life is a documentary film about human trafficking and contemporary slavery. It addresses many forms of slavery, including the military use of children in Uganda, involuntary servitude in the United States, unfree labor in Ghana, forced begging and garbage picking in India, sex trafficking in Europe and Southeast Asia, and other kinds of child abuse. The focus of the film is on trafficking victims, especially women and children, the latter of whom are often betrayed by adults that they trust. 
The film also focuses on the people and organizations engaged in working against human trafficking, including members of the Federal Bureau of Investigation (FBI), Free the Slaves, Girls Educational and Mentoring Services (GEMS), International Justice Mission (IJM), the Somaly Mam Foundation, Terre des hommes, Tostan, UNICEF, United Nations Office on Drugs and Crime (UNODC), and the United States Department of State (US DoS). Not My Life has been called "a cautionary tale". It depicts the commodification of millions of people and identifies the practices of traffickers as undermining international economics, security, sustainability and health. Not My Life calls attention to the fact that, in the United States, the sentencing for human trafficking is less severe than for drug trafficking. The film indicates a relationship between contemporary slavery and globalization. It asserts that most human trafficking victims are children, although the filmmakers have recognized the fact that millions of adults are also trafficked. The film depicts human trafficking as a matter of good and evil, provides interviews with survivors of human trafficking, and presents analysis from anti-trafficking advocates. Throughout the film, Robert Bilheimer encourages viewers to personally combat human trafficking. Bilheimer was sparing in his use of statistics in the film, feeling that overloading viewers with figures might numb them to the issues. According to Nancy Keefe Rhodes of Stone Canoe, a U.S. literary journal, the film's audiences are likely to have the preconception that human trafficking is not slavery in the same sense that the Atlantic slave trade was, and many people believe that slavery was abolished a long time ago with such instruments as the U.S. Emancipation Proclamation and Thirteenth Amendment. Rhodes writes that society now uses the word "slavery" in modern contexts only as a metaphor, so that references to actual contemporary slavery can be dismissed as hyperbole, and she describes the film's goal as to "reclaim the original term [slavery] and convince us that what is happening now is what happened then: highly organized and pervasive, intentional, highly profitable and ... fully as coercive and wantonly cruel." Rhodes says that the word "slavery" has started to be used in its original sense again in recent years, but that audiences' views on contemporary slavery are nonetheless influenced by the slave-like imagery in such films as Hustle & Flow (2005) and Black Snake Moan (2007). The Academy Award-winning Hustle & Flow portrays a pimp as the hero, while Black Snake Moan features Christina Ricci as a young nymphomaniac; the marketing for Black Snake Moan centered on evocative, sexualized slave imagery, including a scantily-clad Ricci in chains. According to Rhodes, Bilheimer "rescue[s] modern slaves from representation as exotic creatures, to restore their humanity" and allow audiences to relate to them. For this purpose, Bilheimer tells stories of individuals in the context of their communities and families. While Bilheimer had previously done extensive social justice work with religious organizations, he did not proselytize in the film, despite the many opportunities the film afforded him to do so. 
## Contents Fifty people are interviewed in Not My Life, including Katherine Chon of the Polaris Project, investigative journalist Paul Radu of Bucharest, Vincent Tournecuillert of Terre des hommes, Iana Matei of Reaching Out Romania, UNICEF Director of Programmes Nicholas Alipui, Susan Bissell of UNICEF's Child Protection Section, Antonio Maria Costa of UNODC, Somaly Mam of the Somaly Mam Foundation, Molly Melching of Tostan in Senegal, and Suzanne Mubarak, who was First Lady of Egypt at the time. The sex trafficking victims shown in the film include children as young as five and six years old. Not My Life begins with a black screen on which the words "Human trafficking is slavery" appear in white. A sequence filmed in Ghana follows, showing children who are forced to fish in Lake Volta for 14 hours a day. Many of the children die as a result of the working conditions. A 10-year-old boy swims through the murky water towards the camera, looking into it, and holds his breath underwater while trying to unsnarl a fishing net. Next, Senegalese talibes, Muslim boys who attend Quranic schools, appear. There are approximately 50,000 talibes in Senegal who are forced to beg on the streets to make money for their teachers; children who do not meet their quotas are beaten. Many of these children suffer from skin and stomach diseases because of their diet of spoiled food—one demonstrates his diseased hands to the camera, only for an adult to pull him away by the ear. The film then moves to India and depicts children, mostly wearing flip-flops, illegally sorting through hazardous waste in Ghazipur and New Delhi landfills. Romani families are shown in Central and Eastern Europe, and the narration indicates that Romani boys are often trafficked for the purpose of forced child begging, and that Romani girls are regularly trafficked as child prostitutes. The narrator says that the profits of human trafficking "are built on the backs and in the beds of our planet's youth." In Zoha Prison in Romania, there are interviews with traffickers serving prison sentences that the film suggests are too short in light of the severity of the crime of human trafficking. The typical sentence for this crime is six or seven years, while the sentence for trafficking in drugs is normally twenty years. Two Romanian traffickers, Traian and Ovidiu, attest to having starved, punched, and kicked the girls they trafficked. Ovidiu recounts a story, in an interview filmed in February 2007, about kidnapping a prostitute and selling her for sex when he was 14. He expresses no remorse for these actions. The sentences served by Traian and Ovidiu were short enough that, by the time the film was released, they were no longer in prison. Ana, a girl they trafficked, is also interviewed in the film, saying that she lost a tooth in one of her beatings. She describes being pregnant at the time, but not telling this to her captors because of fears for the unborn child's safety. Radu is interviewed in this portion of the film, as is Tournecuillert, who speaks about his experiences in Albania, where girls are often rounded up to be sexually trafficked in Italy. He explains that, normally, before the girls leave Albania, the traffickers kill one of them in front of the others—usually by shooting or burning her to death—as a warning of what will happen to any who try to escape.
Matei adds that, for the sake of amusement, some of the girls would be buried alive with only their heads remaining above ground. Eugenia Bonetti, a nun, speaks about her work helping girls escape from slavery in Italy. Another interview is with a Wichita, Kansas woman named Angie who was prostituted with another girl, Melissa, in the American Midwest when they were teenagers. Angie recounts how they were expected to have sex with truck drivers and steal their money. She describes an incident when, after Melissa found pictures of a man's grandchildren in his wallet, they realized he was old enough to be their own grandfather. "I wanted to die," she says, close to tears. Outside the film, Bilheimer said that Angie's trafficker expected her to engage in forty sex acts a night, and threatened to kill her if she refused. "It's not just truck drivers," FBI agent Mike Beaver says. "We're seeing them purchased and abused by both white collar and blue collar individuals." This statement segues into a Washington, D.C., scene wherein two girls in their early teens are shown by a curb on K Street, changing into prostitutes' attire. Angie was rescued during Operation Stormy Nights, an anti-human-trafficking operation carried out by the FBI, in 2004. Bilheimer said that, while there is no way of being certain how many girls like Angie are being sexually trafficked in the U.S., "diligent people out there have arrived at a bare minimum figure of ... one hundred thousand girls, eight to fifteen [performing] ten sex acts a day" adding up to "a billion unpunished crimes of sexual violence on an annual basis." Another American victim of sexual trafficking, Sheila White, describes an incident in 2003 when she was beaten up next to the Port Authority of New York and New Jersey. She says that nobody even asked her if she needed help. White eventually escaped from being trafficked and went on to work with GEMS to raise awareness on the issue in New York. In 2012, after the film was released, Barack Obama, President of the United States, recognized White's work and told her story during a speech to the Clinton Global Initiative. The next scenes in the film depict child labour in Nepal, and indicate that child workers in the textile industry are commonly targeted by sex traffickers. A brothel raid in India, led by Balkrishna Acharya of the Rescue Foundation in Mumbai, is then shown. Ten young girls are rescued from a four-by-three-foot closet and a crawl space. The madam reacts furiously, perceiving the raid as taking away her livelihood. Then, the trafficking of children into the sex industry is depicted in Cambodia. Some scenes take place in Svay Pak, Phnom Penh, one of the cheapest sex tourism destinations in the Mekong Delta. Women of the Somaly Mam Foundation are depicted working with girls who have been sexually trafficked. A large number of these girls are pictured one by one, each child fading into the next against the backdrop of a doorway. An interview with one of the Somaly Mam Foundation workers, Sophea Chhun, reveals that her daughter, Sokny, was kidnapped in 2008 at age 23. "Most likely Sokny too was sold," Chhun says, claiming that "the police treated it like she wasn't important"—perhaps, she suggests, because Sokny was an adopted child. Don Brewster of Agape International Missions is interviewed, and says that all of the girls they have rescued from child sex tourism in Cambodia identify Americans as the clients who were the most abusive to them. 
Bilheimer agreed with this assertion in an interview outside the film. In Guatemala City, Guatemalan trafficker Efrain Ortiz is shown being arrested, and the film indicates that he was later given a prison sentence of 95 years. Ortiz had two sons whom he had been using for waste collection and five daughters with whom he had been committing incest. Bilheimer accompanies IJM representatives Pablo Villeda, Amy Roth, and Gary Haugen as they and the local police arrest Ortiz; he is charged with exploitation of children and violence against women. Ortiz looks surprised as he is handcuffed. Haugen, President of IJM, went on to be named a Trafficking in Persons (TIP) Hero in the 2012 US DoS TIP Report. Grace Akallo, a Ugandan woman who was once abducted by Joseph Kony to be used as a child soldier in the Lord's Resistance Army (LRA), is interviewed, saying that "this kind of evil must be stopped." She was forced to kill another girl as part of her initiation into the LRA, a very common practice among armies that employ child soldiers. The film states that she was ultimately rehabilitated and became a mother. Bishop Desmond Tutu, whom Bilheimer had previously interviewed for The Cry of Reason, appears towards the end of the film, saying, "Each of us has the capacity to be a saint." Bilheimer included Tutu in Not My Life because he felt that audiences might be in need of pastoral counseling after watching the film. The final scene of Not My Life returns to the boy holding his breath underwater in Ghana. His name is revealed to be Etse, and it is stated that he and six other trafficking victims shown in the film have been rescued. Some of the last words in the film are spoken by Brazilian human rights advocate Leo Sakomoto: "I can't see a good life while there are people living like animals. Not because I'm a good person, not because it's my duty, but because they are human—like me." ## Production ### Background The project that became Not My Life was initiated by the executive director of the UNODC, Antonio Maria Costa, who wanted to commission a film that would "bring the issue of modern slavery to the attention of public opinion, globally—especially the well-meaning, law-abiding and God-fearing people who do not believe something so horrible is happening in their own neighborhood." With this goal in mind, Costa approached Worldwide Documentaries, an East Bloomfield, New York-based organization that had produced two films with which he was familiar: The Cry of Reason, which documents internal resistance to South African apartheid by way of Beyers Naudé's story; and A Closer Walk, which is about the epidemiology of HIV/AIDS. Costa e-mailed Bilheimer, Director of Worldwide Documentaries, asking him to create the film he envisioned. Costa said that he chose Bilheimer because the director had developed a "solid reputation [for] addressing difficult topics... combining artistic talent, a philosophical view about development and a humanitarian sentiment about what to do about the issues." Bilheimer accepted Costa's proposition, and subsequently wrote, produced, and directed Not My Life as an independent film. Bilheimer, who had received an Academy Award nomination for The Cry of Reason, said that "the unrelenting, unpunished, and craven exploitation of millions of human beings for labor, sex, and hundreds of sub-categories thereof is simply the most appalling and damaging expression of so-called human civilization we have ever seen."
Bilheimer's wife, Heidi Ostertag, is Worldwide Documentaries' Senior Producer, and she co-produced Not My Life with him. She said that she found making a film about human trafficking difficult because "people do not want to talk about this issue." Bilheimer found that the connections he had made during the production of A Closer Walk were also useful when producing Not My Life because the poor and the outcast are at the greatest risk of both HIV/AIDS and human trafficking; there is, for this reason, much overlap between the groups victimized by these two afflictions. Bilheimer attempted to fashion the film in such a way that every part of it would illustrate a statement made by Abraham Lincoln: "If slavery is not wrong, nothing is wrong." When making this film, Bilheimer held that a contemporary abolitionist movement did not yet exist. He described his purpose in creating the film as to raise awareness and initiate such a movement. He also wished to communicate to his audiences that not all human trafficking is sexual. Traffickers "commit unspeakable, wanton acts of violence against their fellow human beings," he said, "and are rarely punished for their crimes." Production of Not My Life was supported by the United Nations Global Initiative to Fight Human Trafficking (UN.GIFT), UNICEF, and UNODC, providing Worldwide Documentaries with \$1 million in funding secured by Costa. ### Filming Bilheimer said that the level of cruelty he saw in shooting Not My Life was greater than anything he had seen when documenting apartheid in South Africa for The Cry of Reason. Bilheimer attested to becoming more aware of the global extent of human trafficking as he went about making Not My Life. The film's title came from a June 2009 interview with Molly Melching, founder of Tostan, an organization dedicated to human rights education operating in ten African countries. As Bilheimer and Melching spoke in Thiès, Senegal, discussing how people often deny the reality of contemporary slavery because it is an uncomfortable truth, Melching said, "People can say, 'No, this is not my life.' But my life can change. Let's change together." Filming of Not My Life took place over four years in Africa, Asia, Europe, and North and South America documenting human trafficking in thirteen countries: Albania, Brazil, Cambodia, Egypt, Ghana, Guatemala, India, Italy, Nepal, Romania, Senegal, Uganda, and the United States. Shooting in Ghana took place over four 18-hour days, during which the film crew had to travel over washboard roads in Land Rovers and did not sleep. Filming in Svay Pak took place in March 2010, and shooting in Abusir, Egypt took place the following month. In Guatemala, Bilheimer facilitated the arrest of trafficker Ortiz by renting a car for the police to use, in order to film the arrest as part of Not My Life. Bilheimer said that, during the making of the film, he and his crew were surprised to discover that traffickers employ similar methods of intimidation across the globe, "almost as if there were ... unwritten bylaws and tactics ... The lies are the same." ### Editing Susan Tedeschi, Derek Trucks, and Dave Brubeck performed the theme song for Not My Life, Bob Dylan's "Lord Protect My Child", which was produced by Chris Brubeck. After the initial screenings in early 2011, the film went through a series of revisions, taking into account information gathered from more than thirty screenings for focus groups. 
Later that year, the narration was completely rerecorded; Bilheimer replaced Ashley Judd's voice with that of Glenn Close, who had previously worked with him on A Closer Walk. The version of the film that was aired by CNN International was shorter than the version shown at the premiere. An even shorter version, only 30 minutes long, was created with school audiences in mind. Content relating to the Egyptian mixed-sex schools instituted by Suzanne Mubarak was gathered, but Bilheimer eventually removed much of it from the film because the Arab Spring had made the material outdated, despite the continued existence of most of these schools. Much of the interview with Molly Melching was removed as well. During the editing of Not My Life, Bilheimer cut the interview with Tutu, but later re-added a single quotation. In this interview, Bilheimer told Tutu about meeting normally reasonable, compassionate women who, when speaking about human traffickers, say things like "Hang him up by his balls and then cut 'em off!" Tutu, who had chaired South Africa's Truth and Reconciliation Commission, surprised Bilheimer with his response, saying that "these people have committed monstrous acts," but that, according to Christianity, traffickers can still be redeemed and become saints. As had occurred with Bilheimer's previous film, A Closer Walk, Not My Life had several preview screenings before its official release. The United States Agency for International Development (USAID) hosted a preview screening at the Willard InterContinental Washington in September 2009 as part of a day-long symposium on human trafficking. A preview screening in Egypt, including the material shot in that country that was later removed, took place in December 2010 at the International Forum against Human Trafficking in Luxor. Later that month, on December 15, the film's cinematographer and co-director Richard Young died. Not My Life was subsequently dedicated to him. Bilheimer said that Young had believed in the film far more than he himself had. ## Release The film had its official premiere in Alice Tully Hall at the Lincoln Center for the Performing Arts in New York City on January 19, 2011. Melanne Verveer, United States Ambassador-at-Large for Global Women's Issues, gave a speech, saying, "Each and every one of us is called to act. I hope that tonight each of us will make their own commitment." Additional screenings were held in Los Angeles and Chicago later that month. That October, Not My Life had its international premiere in London. CNN International aired the film in two parts a few days later as part of the CNN Freedom Project. The Swedish premiere was attended by Crown Princess Victoria. Bilheimer recognized that people combatting human trafficking are in need of resources, so he considered various options with respect to the intellectual property of Not My Life, ultimately deciding to release the film at no charge in addition to selling licenses for unlimited private screenings. On November 1, an 83-minute version of the film was released on DVD by Worldwide Documentaries, which also began digitally distributing the film and selling the unlimited licenses. LexisNexis, the governments of Arizona and Minnesota, and the U.S. Fund for UNICEF all purchased licenses. The last of these planned to use the film as part of an anti-human-trafficking campaign. Not My Life was screened at the 2012 UNIS-UN Conference in New York City, the theme of which was human exploitation.
Segments from the film were included in "Can You Walk Away?", a 2012 exhibition on contemporary slavery at President Lincoln's Cottage at the Soldiers' Home in Washington, DC. A hotel chain presented the film to its staff in London in preparation for the 2012 Summer Olympics to raise awareness about the types of human trafficking that might take place in conjunction with the events. Bilheimer initiated an Indiegogo crowdfunding campaign in 2012 to allow local organizations opposing human trafficking to screen the film. That same year, he expressed a willingness to release fifteen-minute excerpts from the film to help its message reach more people. In a 2012 interview, Bilheimer said that he considered A Closer Walk and Not My Life to be the first two installments in a trilogy; he intended to make an environmental film called Take Me Home as the third installment. As of 2013, however, the Worldwide Documentaries website stated that Bilheimer was considering different subjects for his next film, including poverty in the United States, the aftermath of the 2010 Haiti earthquake, and posttraumatic stress disorder among U.S. Army veterans of the wars in Iraq and Afghanistan. Bilheimer said in 2013 that Not My Life "is not a perfect film by any means, but it is having an impact." He said that he would like to be moving on to a new film project, but that he would continue promoting Not My Life because he thought it could help combat human trafficking. Throughout 2013, the World Affairs Councils of America hosted Not My Life screenings in a variety of cities across the United States, funded by the Carlson Family Foundation. That same year, the Swedish International Development Cooperation Agency gave Worldwide Documentaries a grant to do anti-human trafficking work over a three-year period. Not My Life was screened at the Mexican film festival Oaxaca FilmFest in November 2012; BORDEReview in Warsaw, Poland, in May 2013; and the Pasadena International Film & New Media Festival in February 2014. In May 2014, the Somaly Mam Foundation released a statement that Somaly had resigned from her leadership of the organization as a result of investigations regarding allegations about her personal history. The following month, Bilheimer released a statement in response, saying that he had re-edited the film in order to remove the scenes depicting Somaly and that the new version would be available shortly. Bilheimer wrote that "the storytelling in the Cambodia segment of Not My Life remains intact and is still very moving, with an even sharper focus, now, on the girls themselves." In this statement, Bilheimer requested that people screening previous versions of the film tell their audiences that the presence of Somaly in the film is understandably a distraction, that the film is not primarily about Somaly but rather about the millions of children in slavery in the world, and that this focus is what is most important about the film. For the 2014 re-release of the film, Bilheimer added new content relating to India, including an interview with Kailash Satyarthi, founder of the non-governmental organization Bachpan Bachao Andolan which opposes child labor. This content emphasizes that there are more human trafficking victims in India than in any other country in the world. The new version of the film, which was co-produced with the Delhi-based Riverbank Studios, is 56 minutes long and premiered at the India International Centre in New Delhi on June 26, 2014. 
Satyarthi was one of the panelists in a panel discussion accompanying the screening, as was Indian filmmaker Mike Pandey, who had managed Riverbank Studios' side of the co-production. The film was scheduled to air on Doordarshan (DD) in Hindi three days later. In July, Bilheimer called his continued work on the film "a labor of love" and said that "far too much silence still surrounds the issue" of human trafficking. ## Reception At the USAID preview screening, actress Lucy Liu, who has worked with MTV EXIT and produced the documentary film Redlight, said that people who watch Not My Life "will be shocked to find [human trafficking] is happening in America"; she said that there were 80,000 women being sexually assaulted daily and she called human trafficking the "cannibalization of the planet's youth." According to UN.GIFT, before Not My Life, there was "no single communication tool that effectively depict[ed] the problem as a whole for a mass audience." Susan Bissell, UNICEF's Child Protection Section chief, agreed with this assertion, and said that the film "takes a close look at the underlying causality that so many other filmmakers have missed [and] it will change the way we see our lives, in some very fundamental ways." She also said that Not My Life is an important documentary because it brings attention to underreported forms of abuse. A reviewer from Medical News Today praised the film for "raising awareness and speaking about taboo subjects," arguing that these activities "are critical to empower families, communities, and governments to speak out honestly and take action against abuses." Lucy Popescu of CineVue called the film "a powerful indictment of the global trade in human beings and the abuse of vulnerable people," but criticized the film for focusing on human trafficking victims, arguing that the perpetrators should have been dealt with more prominently. She commended Bilheimer on the few interviews with traffickers that he did include, but she condemned as inadequate the "only passing reference to the thousands of men who engage in sexual tourism, like those who travel to Cambodia to 'buy' traumatized children who they can then abuse for weeks at a time." Popescu also called the film "simplistic", arguing that it should have more clearly expressed that sex trafficking victims are not able to provide legitimate consent for sexual activity because they are afraid that their lives might be in danger if they do not comply. John Rash of the Star Tribune called the film "a cacophony of concerned voices speaking about a modern-day scourge." Rash praised the film for its global scope, but suggested that this geographical breadth allows American audiences to ignore the fact that the trafficking of children is prevalent in the United States and not just in other countries. Not My Life was named Best World Documentary at the Harlem International Film Festival in September 2012. Nancy Keefe Rhodes of Stone Canoe called it a "highly-distilled ... remarkable film," describing Bilheimer as "committed to strong story-telling and film-as-craft." She commends Bilheimer on alternating between American sequences and scenes in other countries, allowing "the experiences of young women with whom an American audience may more readily identify [to] become one among many woven into the fabric of global trafficking." 
Tripurari Sharan, Director General of DD, said that his organization was pleased to air the film and hoped that doing so would bring about greater awareness across India about human trafficking in the country. He called the film "both an eye-opener and a profoundly moving call to action".
25,937,372
Major urinary proteins
1,171,104,999
Proteins found in the urine and other secretions of many animals
[ "Allergology", "Lipocalins", "Mouse proteins", "Pheromones", "Protein families", "Urine" ]
Major urinary proteins (Mups), also known as α<sub>2</sub>u-globulins, are a subfamily of proteins found in abundance in the urine and other secretions of many animals. Mups provide a small range of identifying information about the donor animal, when detected by the vomeronasal organ of the receiving animal. They belong to a larger family of proteins known as lipocalins. Mups are encoded by a cluster of genes, located adjacent to each other on a single stretch of DNA, that varies greatly in number between species: from at least 21 functional genes in mice to none in humans. Mup proteins form a characteristic glove shape, encompassing a ligand-binding pocket that accommodates specific small organic chemicals. Urinary proteins were first reported in rodents in 1932, during studies by Thomas Addis into the cause of proteinuria. They are potent human allergens and are largely responsible for a number of animal allergies, including to cats, horses and rodents. Their endogenous function within an animal is unknown but may involve regulating energy expenditure. However, as secreted proteins they play multiple roles in chemical communication between animals, functioning as pheromone transporters and stabilizers in rodents and pigs. Mups can also act as protein pheromones themselves. They have been demonstrated to promote aggression in male mice, and one specific Mup protein found in male mouse urine is sexually attractive to female mice. Mups can also function as signals between different species: mice display an instinctive fear response on the detection of Mups derived from predators such as cats and rats. ## Discovery Humans in good health excrete urine that is largely free of protein. Therefore, since 1827 physicians and scientists have been interested in proteinuria, the excess of protein in human urine, as an indicator of kidney disease. To better understand the etiology of proteinuria, some scientists attempted to study the phenomenon in laboratory animals. Between 1932 and 1933 a number of scientists, including Thomas Addis, independently reported the surprising finding that some healthy rodents have protein in their urine. However, it was not until the 1960s that the major urinary proteins of mice and rats were first described in detail. It was found that the proteins are primarily made in the liver of males and secreted through the kidneys into the urine in large quantities (milligrams per day). Since they were named, the proteins have been found to be differentially expressed in other glands that secrete products directly into the external environment. These include lacrimal, parotid, submaxillary, sublingual, preputial and mammary glands. In some species, such as cats and pigs, Mups appear not to be expressed in urine at all and are mainly found in saliva. Sometimes the term urinary Mups (uMups) is used to distinguish those Mups expressed in urine from those in other tissues. ## Mup genes Between 1979 and 1981, it was estimated that Mups are encoded by a gene family of between 15 and 35 genes and pseudogenes in the mouse and by an estimated 20 genes in the rat. In 2008 a more precise number of Mup genes in a range of species was determined by analyzing the DNA sequence of whole genomes. ### Rodents The mouse reference genome has at least 21 distinct Mup genes (with open reading frames) and a further 21 Mup pseudogenes (with reading frames disrupted by a nonsense mutation or an incomplete gene duplication). 
They are all clustered together, arrayed side by side across 1.92 megabases of DNA on chromosome 4. The 21 functional genes have been divided into two sub-classes based on position and sequence similarity: 6 peripheral Class A Mups and 15 central Class B Mups. The central Class B Mup gene cluster formed through a number of sequential duplications from one of the Class A Mups. As all the Class B genes are almost identical to each other, researchers have concluded that these duplications occurred very recently in mouse evolution. Indeed, the repetitive structure of these central Mup genes means they are likely to be unstable and may vary in number among wild mice. The Class A Mups are more different from each other and are therefore likely to be more stable, older genes, but what, if any, functional differences the classes have are unknown. The similarity between the genes makes the region difficult to study using current DNA sequencing technology. Consequently, the Mup gene cluster is one of the few parts of the mouse whole genome sequence with gaps remaining, and further genes may remain undiscovered. Rat urine also contains homologous urinary proteins; although they were originally given a different name, α2<sub>u</sub>-globulins, they have since become known as rat Mups. Rats have 9 distinct Mup genes and a further 13 pseudogenes clustered together across 1.1 megabases of DNA on chromosome 5. Like in mice, the cluster formed by multiple duplications. However, this occurred independently of the duplications in mice, meaning that both rodent species expanded their Mup gene families separately, but in parallel. ### Nonrodents Most other mammals studied, including the pig, cow, cat, dog, bushbaby, macaque, chimpanzee and orangutan, have a single Mup gene. Some, however, have an expanded number: horses have three Mup genes, and gray mouse lemurs have at least two. Insects, fish, amphibia, birds and marsupials appear to have disrupted synteny at the chromosomal position of the Mup gene cluster, suggesting the gene family may be specific to placental mammals. Humans are the only placental mammals found not to have any active Mup genes; instead, they have a single Mup pseudogene containing a mutation that causes missplicing, rendering it dysfunctional. ## Function ### Transport proteins Mups are members of a large family of low-molecular weight (\~19 kDa) proteins known as lipocalins. They have a characteristic structure of eight beta sheets arranged in an anti-parallel beta barrel open on one face, with alpha helices at both ends. Consequently, they form a characteristic glove shape, encompassing a cup-like pocket that binds small organic chemicals with high affinity. A number of these ligands bind to mouse Mups, including 2-sec-butyl-4,5-dihydrothiazole (abbreviated as SBT or DHT), 6-hydroxy-6-methyl-3-heptanone (HMH) and 2,3 dihydro-exo-brevicomin (DHB). These are all urine-specific chemicals that have been shown to act as pheromones—molecular signals excreted by one individual that trigger an innate behavioural response in another member of the same species. Mouse Mups have also been shown to function as pheromone stabilizers, providing a slow release mechanism that extends the potency of volatile pheromones in male urine scent marks. Given the diversity of Mups in rodents, it was originally thought that different Mups may have differently shaped binding pockets and therefore bind different pheromones. 
However, detailed studies found that most variable sites are located on the surface of the proteins and appear to have little effect on ligand binding. Rat Mups bind different small chemicals. The most common ligand is 1-chlorodecane, with 2-methyl-N-phenyl-2-propenamide, hexadecane and 2,6,11-trimethyldodecane found to be less prominent. Rat Mups also bind limonene-1,2-epoxide, resulting in a disease of the host's kidney, hyaline-droplet nephropathy, that progresses to cancer. Other species do not develop this disorder because their Mups do not bind that particular chemical. Accordingly, when transgenic mice were engineered to express the rat Mup, their kidneys developed the disease. The Mup found in pigs, named salivary lipocalin (SAL), is expressed in the salivary gland of males, where it tightly binds androstenone and androstenol, both pheromones that cause female pigs to assume a mating stance. Isothermal titration calorimetry studies performed with Mups and associated ligands (pyrazines, alcohols, thiazolines, 6-hydroxy-6-methyl-3-heptanone, and N-phenylnaphthylamine) revealed an unusual binding phenomenon. The active site has been found to be suboptimally hydrated, resulting in ligand binding being driven by enthalpic dispersion forces. This is contrary to most other proteins, which exhibit entropy-driven binding forces from the reorganisation of water molecules. This unusual process has been termed the nonclassical hydrophobic effect. ### Pheromones Studies have sought to find the precise function of Mups in pheromone communication. Mup proteins have been shown to promote puberty and accelerate the estrus cycle in female mice, inducing the Vandenbergh and Whitten effects. However, in both cases the Mups had to be presented to the female dissolved in male urine, indicating that the protein requires some urinary context to function. In 2007 Mups normally found in male mouse urine were made in transgenic bacteria, and therefore created devoid of the chemicals they normally bind. These Mups were shown to be sufficient to promote aggressive behaviour in males, even in the absence of urine. In addition, Mups made in bacteria were found to activate olfactory sensory neurons in the vomeronasal organ (VNO) of mice and rats, a subsystem of the nose known to detect pheromones via specific sensory receptors. Together, this demonstrated that Mup proteins can act as pheromones themselves, independent of their ligands. Consistent with a role in male-male aggression, adult male mice secrete significantly more Mups into their urine than females, juveniles or castrated male mice. The precise mechanism driving this difference between the sexes is complex, but at least three hormones—testosterone, growth hormone and thyroxine—are known to positively influence the production of Mups in mice. Wild house mouse urine contains variable combinations of four to seven distinct Mup proteins per mouse. Some inbred laboratory mouse strains, such as BALB/c and C57BL/6, also have different proteins expressed in their urine. However, unlike wild mice, different individuals from the same strain express the same protein pattern, an artifact of many generations of inbreeding. One unusual Mup is less variable than the others: it is consistently produced by a high proportion of wild male mice and is almost never found in female urine. When this Mup was made in bacteria and used in behavioural testing, it was found to attract female mice.
Other Mups were tested but did not have the same attractive qualities, suggesting the male-specific Mup acts as a sex pheromone. Scientists named this Mup darcin (Mup20) as a humorous reference to Fitzwilliam Darcy, the romantic hero from Pride and Prejudice. Taken together, the complex patterns of Mups produced have the potential to provide a range of information about the donor animal, such as gender, fertility, social dominance, age, genetic diversity or kinship. Wild mice (unlike laboratory mice that are genetically identical and which therefore also have identical patterns of Mups in the urine) have individual patterns of Mup expression in their urine that act as a "barcode" to uniquely identify the owner of a scent mark. In the house mouse, the major MUP gene cluster provides a highly polymorphic scent signal of genetic identity. Wild mice breeding freely in semi-natural enclosures showed inbreeding avoidance. This avoidance resulted from a strong deficit in successful matings between mice sharing both MUP haplotypes (complete match). In another study, using white-footed mice, it was found that when mice derived from wild populations were inbred, there was reduced survival when such mice were reintroduced into a natural habitat. These findings suggest that inbreeding reduces fitness, and that scent signal recognition has evolved in mice as a means of avoiding inbreeding depression. ### Kairomones In addition to serving as social cues between members of the same species, Mups can act as kairomones—chemical signals that transmit information between species. Mice are instinctively afraid of the smell of their natural predators, including cats and rats. This occurs even in laboratory mice that have been isolated from predators for hundreds of generations. When the chemical cues responsible for the fear response were purified from cat saliva and rat urine, two homologous protein signals were identified: Fel d 4 (Felis domesticus allergen 4), the product of the cat Mup gene, and Rat n 1 (Rattus norvegicus allergen 1), the product of the rat Mup13 gene. Mice are fearful of these Mups even when they are made in bacteria, but mutant animals that were unable to detect the Mups showed no fear of rats, demonstrating their importance in initiating fearful behaviour. It is not known exactly how Mups from different species initiate disparate behaviours, but mouse Mups and predator Mups have been shown to activate unique patterns of sensory neurons in the nose of recipient mice. This implies the mouse perceives them differently, via distinct neural circuits. The pheromone receptors responsible for Mup detection are also unknown, though they are thought to be members of the V2R receptor class. ### Allergens Along with other members of the lipocalin protein family, major urinary proteins can be potent allergens to humans. The reason for this is not known; however, molecular mimicry between Mups and structurally similar human lipocalins has been proposed as a possible explanation. The protein product of the mouse Mup6 and Mup2 genes (previously mistaken for Mup17 due to the similarity among mouse MUPs), known as Mus m 1, Ag1 or MA1, accounts for many of the allergenic properties of mouse urine. The protein is extremely stable in the environment; studies have found that 95% of inner city homes and 82% of all types of homes in the United States have detectable levels in at least one room. Similarly, Rat n 1 is a known human allergen.
A US study found its presence in 33% of inner city homes, and 21% of occupants were sensitized to the allergen. Exposure and sensitization to rodent Mup proteins are considered a risk factor for childhood asthma and are a leading cause of laboratory animal allergy (LAA)—an occupational disease of laboratory animal technicians and scientists. One study found that two-thirds of laboratory workers who had developed asthmatic reactions to animals had antibodies to Rat n 1. Mup genes from other mammals also encode allergenic proteins; for example, Fel d 4 is primarily produced in the submandibular salivary gland and is deposited onto dander as the cat grooms itself. A study found that 63% of cat-allergic people have antibodies against the protein. Most had higher titres of antibodies against Fel d 4 than against Fel d 1, another prominent cat allergen. Likewise, Equ c 1 (Equus caballus allergen 1) is the protein product of a horse Mup gene that is found in the liver, sublingual and submaxillary salivary glands. It is responsible for about 80% of the antibody response in patients who are chronically exposed to horse allergens. ### Metabolism While the detection of Mups excreted by other animals has been well studied, the functional role in the producing animal is less clear. However, in 2009, Mups were shown to be associated with the regulation of energy expenditure in mice. Scientists found that mice with genetically induced obesity and diabetes produce thirty times less Mup RNA than their lean siblings. When they delivered Mup protein directly into the bloodstream of these mice, they observed an increase in energy expenditure, physical activity and body temperature and a corresponding decrease in glucose intolerance and insulin resistance. They propose that Mups' beneficial effects on energy metabolism occur by enhancing mitochondrial function in skeletal muscle. Another study found that Mups were reduced in diet-induced obese mice. In this case, the presence of Mups in the bloodstream of mice restricted glucose production by directly inhibiting the expression of genes in the liver. ## See also - Cis-vaccenyl acetate, an insect aggression pheromone - Major histocompatibility complex, peptides also implicated in individual recognition in mice - Proteins produced and secreted by the liver
1,128,651
Not One Less
1,171,307,533
1999 film by Zhang Yimou
[ "1990s Mandarin-language films", "1999 drama films", "1999 films", "Chinese drama films", "Films about educators", "Films directed by Zhang Yimou", "Films set in Hebei", "Films shot in Zhangjiakou", "Golden Lion winners", "Social realism in film" ]
Not One Less is a 1999 drama film by Chinese director Zhang Yimou, adapted from Shi Xiangsheng's 1997 story A Sun in the Sky (Chinese: 天上有个太阳; pinyin: tiān shàng yǒu ge tàiyáng). It was produced by Guangxi Film Studio and released by China Film Group Corporation in mainland China, and distributed by Sony Pictures Classics in North America and Columbia TriStar Film Distributors internationally. Set in the People's Republic of China during the 1990s, the film centers on a 13-year-old substitute teacher, Wei Minzhi, in the Chinese countryside. Called in to substitute for a village teacher for one month, Wei is told not to lose any students. When one of the boys takes off in search of work in the big city, she goes looking for him. The film addresses education reform in China, the economic gap between urban and rural populations, and the prevalence of bureaucracy and authority figures in everyday life. It is filmed in a neorealist/documentary style with a troupe of non-professional actors who play characters with the same names and occupations as the actors have in real life, blurring the boundaries between drama and reality. The domestic release of Not One Less was accompanied by a Chinese government campaign aimed at promoting the film and cracking down on piracy. Internationally, the film was generally well-received, but it also attracted criticism for its ostensibly political message; foreign critics are divided on whether the film should be read as praising or criticizing the Chinese government. When the film was excluded from the 1999 Cannes Film Festival's competition section, Zhang withdrew it and another film from the festival, and published a letter rebuking Cannes for politicization of and "discrimination" against Chinese cinema. The film went on to win the Venice Film Festival's Golden Lion and several other awards, and Zhang won the award for best director at the Golden Rooster Awards. ## Background In the 1990s, primary education reform had become one of the top priorities in the People's Republic of China. About 160 million Chinese people had missed all or part of their education because of the Cultural Revolution in the late 1960s and early 1970s, and in 1986 the National People's Congress enacted a law calling for nine years of compulsory education. By 1993, it was clear that much of the country was making little progress on implementing nine-year compulsory education, so the 1993–2000 seven-year plan focused on this goal. One of the major challenges educators faced was the large number of rural schoolchildren dropping out to pursue work. Another issue was a large urban–rural divide: funding and teacher quality were far better in urban schools than rural, and urban students stayed in school longer. ## Production and cast Not One Less was Zhang Yimou's ninth film, but only the second not to star long-time collaborator Gong Li (the first was his 1997 Keep Cool). For this film, he cast only amateur actors whose real-life names and occupations resembled those of characters they played in the film—as The Philadelphia Inquirer's Steven Rea described the performances, the actors are just "people playing variations of themselves in front of the camera". For instance, Tian Zhenda, who played the mayor, was the real-life mayor of a small village, and the primary actors Wei Minzhi and Zhang Huike were selected from among thousands of students in rural schools. (The names and occupations of the film's main actors are listed in the table below.) 
The movie was filmed on location at Chicheng County's Shuiquan Primary School, and in the city of Zhangjiakou; both locations are in Hebei province. It was shot in a documentary-like, "neorealist" style involving hidden cameras and natural lighting. There are also, however, elements of heavy editing—for example, Shelly Kraicer noted that many scenes have frequent, rapid cuts, partially as a result of filming with inexperienced actors. Zhang had to work closely with government censors during production of the film. He related how the censors "kept reminding [me] not to show China as too backward and too poor", and said that on the title cards at the end of the movie he had to write that the number of rural children dropping out of school each year was one million, although he believed the number was actually three times that. Not One Less was Zhang's first film to enjoy government support and resources. ### Cast Because the film's non-professional cast members play versions of themselves, the actors and their characters share names: Wei Minzhi as the substitute teacher, Zhang Huike as the student who leaves to find work in the city, Tian Zhenda as the village mayor, Gao Enman as the village teacher, Sun Zhimei as the girl who helps Wei search at the train station, Feng Yuying as the television station receptionist, Wu Wanlu as the station manager, and Li Fanfan as the talk show host. ## Plot Thirteen-year-old Wei Minzhi arrives in Shuiquan village to substitute for the village's only teacher (Gao Enman) while he takes a month's leave to care for his ill mother. When Gao discovers that Wei does not have a high school education and has no special talents, he instructs her to teach by copying his texts onto the board and then making the students copy them into their notebooks; he also tells her not to use more than one piece of chalk per day, because the village is too poor to afford more. Before leaving, he explains to her that many students have recently left school to find work in the cities, and he offers her a 10 yuan bonus if all the students are still there when he returns. When Wei begins teaching, she has little rapport with the students: they shout and run around instead of copying their work, and the class troublemaker, Zhang Huike, insists that "she's not a teacher, she's Wei Chunzhi's big sister!" After putting the lesson on the board, Wei usually sits outside, guarding the door to make sure no students leave until they have finished their work. Early in the month, a sports recruiter comes to take one athletic girl, Ming Xinhong, to a special training school; unwilling to let any students leave, Wei hides Ming, and when the village mayor (Tian Zhenda) finds her, Wei chases after their car in a futile attempt to stop them; it is the sports recruiter and the mayor who first notice and comment on Wei's running ability, endurance, and tenacity. One day, after trying to make the troublemaker Zhang apologize for bothering another student, Wei discovers that Zhang has left to go find work in the nearby city of Zhangjiakou. The village mayor is unwilling to give her money for a bus ticket to the city, so she resolves to earn the money herself, and recruits the remaining students to help. One girl suggests that they can make money by moving bricks in a nearby brickyard, and Wei begins giving the students mathematical exercises centered on finding out how much money they need to earn for the bus tickets, how many bricks they need to move, and how much time it will take. Through these exercises and working to earn money, her rapport with the class improves. After earning the money, she reaches the bus station but learns that the price is higher than she thought, and she cannot afford a ticket. Wei ends up walking most of the way to Zhangjiakou. In the city, Wei finds the people that Zhang was supposed to be working with, only to discover that they had lost him at the train station days before.
She forces another girl her age, Sun Zhimei, to help her look for Zhang at the train station, but they do not find him. Wei has no success finding Zhang through the public address system and "missing person" posters, so she goes to the local television station to broadcast a missing person notice. The receptionist (Feng Yuying) will not let her in without valid identification, though, and says the only way she can enter is with permission from the station manager, whom she describes as "a man with glasses". For the rest of the day, Wei stands by the station's only gate, stopping every man with glasses, but she does not find the station manager, and spends the night asleep on the street. The next day the station manager (Wu Wanlu) sees her at the gate again, through his window, and lets her in, scolding the receptionist for making her wait outside. Although Wei has no money to run an ad on TV, the station manager is interested in her story and decides to feature Wei in a talk show special about rural education. On the talk show, Wei is nervous and hardly says a word when the host (Li Fanfan) addresses her, but Zhang—who has been wandering the streets begging for food—sees the show. After Wei and Zhang are reunited, the station manager arranges to have them driven back to Shuiquan village, along with a truckload of school supplies and donations that viewers had sent in. Upon their return, they are greeted by the whole village. In the final scene, Wei presents the students with several boxes of colored chalk that were donated, and allows each student to write one character on the board. The film ends with a series of title cards that recount what becomes of the characters afterwards and describe the problem of poverty in rural education in China. ## Themes While most of Zhang's early films had been historical epics, Not One Less was one of the first to focus on contemporary China. The film's main theme involves the difficulties faced in providing rural education in China. When Wei Minzhi arrives in Shuiquan village, the teacher Gao has not been paid in six months, the school building is in disrepair, and chalk is in such short supply that Gao gives Wei specific instructions limiting how large her written characters should be. Wei sleeps in the school building, sharing a bed with several female students. The version of the film released overseas ends with a series of title cards in English, the last of which reads, "Each year, poverty forces one million children in China to leave school. Through the help of donations, about 15% of these children return to school." Because the people and locations used in the film are real but are carefully selected and edited, the film creates a "friction" between documentary reality and narrative fiction. This balancing act between the real and the imaginary has drawn comparisons to neorealist works such as those of Iranian directors Abbas Kiarostami and Mohsen Makhmalbaf, and Zhang has openly acknowledged the influence of Kiarostami in this film. Zhang Xiaoling of the University of Nottingham argues that Zhang Yimou used the documentary perspective in order to suggest that the story is an accurate reflection of most rural areas in China, while Shelly Kraicer believes that his "simultaneous presentation of seemingly opposing messages" is a powerful artistic method in and of itself, and that it allows Zhang to circumvent censors by guaranteeing that the movie will include at least one message that they like.
Jean-Michel Frodon of Le Monde maintains that the film was produced "in the shadow of two superpowers" and needed to make compromises with each. The film addresses the prominent place that bureaucracy, and verbal negotiation and struggle, occupy in everyday life in China. Many scenes pit Wei against authority figures such as the village mayor, the announcer in the train station, and the TV station receptionist who also acts as a "gatekeeper". Aside from Wei, many characters in the film show a "blind faith" in authority figures. While she lacks money and power, Wei overcomes her obstacles through sheer obstinacy and ignorant persistence, suggesting that speech and perseverance can overcome barriers. Wei becomes an example of "heroic obstinacy" and a model of using determination to face "overwhelming odds". For this reason, the film has been frequently compared to Zhang's 1993 The Story of Qiu Ju, whose heroine is also a determined, stubborn woman; likewise, Qiu Ju is also filmed in a neo-realistic style, set partially in contemporary rural China and partially in the city, and employs mostly amateur actors. Not One Less portrays the mass media as a locus of power: Wei discovers that only someone with money or connections can gain access to a television station, but once someone is on camera she or he becomes part of an "invisible media hegemony" with the power to "manipulate social behavior", catching people's attention where paper advertisements could not and moving cityfolk to donate money to a country school. The power of television within the film's story, according to Laikwan Pang of the Chinese University of Hong Kong, reflects its prominent place in Chinese society of the late 1990s, when domestic cinema was floundering but television was developing quickly; Pang argues that television-watching forms a "collective consciousness" for Chinese citizens, and that the way television unifies people in Not One Less is an illustration of this. Money is important throughout the film. Concerns about money dominate much of the film—for example, a large portion is devoted to Wei and her students' attempt to earn enough money for bus tickets—as well as motivating them. Most major characters, including Wei, demand payment for their actions, and it is left unclear whether Wei's search for Zhang Huike is motivated by altruism or by the promise of a 10-yuan bonus. Zhu Ying points out the prominence of money in the film creates a conflict between traditional Confucian values (such as the implication that the solutions to Wei's problems can be found through the help of authority figures) and modern, capitalist and individualistic society. Finally, the film illustrates the growing urban–rural divide in China. When Wei reaches Zhangjiakou, the film creates a clear contrast between urban and rural life, and the two locations are physically separated by a dark tunnel. The city is not portrayed as idyllic; rather, Zhang shows that rural people are faced with difficulties and discrimination in the cities. While Wei's first view of the city exposes her to well-dressed people and modern buildings, the living quarters she goes to while searching for Zhang Huike are cramped and squalid. Likewise, the iron gate where Wei waits all day for the TV station director reflects the barriers poor people face to survival in the city, and the necessity of connections to avoid becoming an "outsider" in the city. 
Frequent cuts show Wei and Zhang wandering aimlessly in the streets, Zhang begging for food, and Wei sleeping on the sidewalk; when an enthusiastic TV host later asks Zhang what part of the city left the biggest impression, Zhang replies that the one thing he will never forget is having to beg for food. A.O. Scott of The New York Times compared the "unbearable" despair of the film's second half to that of Vittorio De Sica's 1948 Bicycle Thieves. ## Reception ### Cannes withdrawal Neither Not One Less nor Zhang's other 1999 film The Road Home was selected for the 1999 Cannes Film Festival's Official Selection, the most prestigious competition in the festival, where several of Zhang's earlier films had won awards. The rationale is uncertain: Shelly Kraicer and Zhang Xiaoling claim that Cannes officials viewed Not One Less' happy ending, with the main characters' conflicts resolved by the generosity of city dwellers and higher-up officials, as pro-China propaganda, while Zhu Ying claims that the officials saw it and The Road Home as too anti-government. Rather than have his films shown in a less competitive portion of the festival, Zhang withdrew them both in protest, stating that the movies were apolitical. In an open letter published in the Beijing Youth Daily, Zhang accused the festival of being motivated by concerns other than artistic ones, and criticized the Western perception that all Chinese films must be either "pro-government" or "anti-government", referring to it as a "discrimination against Chinese films". ### Critical response Not One Less has an approval rating of 96% on the review aggregator website Rotten Tomatoes, based on 47 reviews, and an average rating of 7.6/10. Metacritic assigned the film a weighted average score of 73 out of 100, based on 22 critics, indicating "generally favorable reviews". Many critics focused on the film's ending title cards: several compared them to a public service announcement, and Philip Kemp of Sight & Sound wrote, "All that's missing is the address we should send donations to." Zhang Xiaoling, on the other hand, considered the titles to be an implicit criticism of the state of rural education in China, saying, "the news that voluntary contributions have helped 15 percent of the pupils to return to school is aimed to give rise to a question: what about the remaining 85 percent?" The disagreement about the title cards is also reflected in the critical reaction to the rest of the film's resolution. Kemp described the ending as "feelgood" and criticized the film for portraying officials and generous cityfolk as coming to the rescue, The Washington Post's Desson Howe called the ending "flag-waving", and The Independent's Gilbert Adair called it "sugary". Alberto Barbera of the Venice Film Festival, on the other hand, said that while the end of the film may have been like propaganda, the rest was a "strong denunciation of a regime that is unable to assure proper education for the country children". Likewise, Zhang Xiaoling argued that although the film superficially appears to praise the city people and officials, its subtext is harshly critical of them: he pointed out that the apparently benevolent TV station manager seems to be motivated more by audience ratings than by altruism, that the receptionist's callous manner towards Wei is a result of Chinese "bureaucratism and nepotism", and that for all the good things about the city, Zhang Huike's clearest memory of city life is having to beg for food. 
Zhang and Kraicer both argued that critics who see the film as pro-government propaganda are missing the point and, as Kraicer put it, "mistaking [one] layer as the message of the film ... mistaking the part for the whole". David Ansen of Newsweek and Leigh Paatsch of the Herald Sun each pointed out that, while the film is "deceptive[ly]" positive at face value, it has harsh criticism "bubbling under the surface". Chinese critics Liu Xinyi and Xu Su of Movie Review recognized the dispute abroad over whether the film was pro- or anti-government, but made no comment; they praised the film for its realistic portrayal of hardships facing rural people, without speculating about whether Zhang intended to criticize or praise the government's handling of those hardships. Hao Jian of Film Appreciation, on the other hand, was more critical, claiming that the movie was organized around a political message and was intended to be pro-government. Hao said that Not One Less marked the beginning of Zhang's transformation from an outspoken independent director to one of the government's favorites. Overall, critics were impressed with the performances of the amateur actors, and Jean-Michel Frodon of Le Monde called that the film's greatest success. Peter Rainer of New York Magazine praised the scene of Wei's interview on TV as "one of the most improbably satisfying love scenes on film". The film also received praise for its artistic merits and Hou Yong's cinematography, even though its visuals were simplistic compared to Zhang's previous films; for example, A.O. Scott of The New York Times praised the "richness" displayed by the film despite its deliberate scarcity of color. Reviewers also pointed out that Zhang had succeeded in breaking away from the "commercial entertainment wave" of popular film. Noel Vera of BusinessWorld writes that the film concerns itself mainly with emotional impact, at the expense of visual extravagance, making it the opposite of earlier Zhang Yimou films such as Red Sorghum. Other critics noted the strength of the film's storytelling; for instance, Rainer called the film an "uncommon, and uncommonly moving, love story", and Film Journal International's Kevin Lally described it as "a poignant story of poverty and spirit reminiscent of the great Italian neo-realists." Another well-received part of the film was the segment in which Wei teaches math by creating practical examples out of her attempt to raise money for the bus to Zhangjiakou; in the Chinese journal Teacher Doctrines, Mao Wen wrote that teachers should learn from Wei's example and provide students with practical exercises. Wei Minzhi's character received mixed reactions: Scott described her as a "heroic" character who demonstrates how obstinacy can be a virtue, whereas Richard Corliss of Time says she is "no brighter or more resourceful than [her students]". Reactions to the city portion of the movie were also mixed: while Zhang describes the second half of the film as an eloquent commentary on China's urban-rural divide and Kevin Lally calls it "startling", Kemp criticizes it for being a predictable "Victorian cliché". ### Box office and release Rights to distribute the film were purchased by the China Film Group Corporation, a state-sponsored organization, and the government actively promoted the film. It was officially released in mainland China in April 1999, although there were showings as early as mid-February. Sheldon H. 
Lu reports that the film grossed ¥18 million, an average amount, in its first three months of showing; by the end of its run in November, it had grossed ¥40 million at the box office. (In comparison, Zhang's 2002 film Hero would earn ¥270 million three years later.) Nevertheless, Not One Less was the highest-grossing domestic film of 1999, and Laikwan Pang has called it a "box office success". In the United States, the film was released in theaters on 18 February 2000, and grossed $50,256 in its first weekend and $592,586 overall. The release was handled by Sony Pictures Classics, and home video distribution by Columbia TriStar; Not One Less was Columbia's first Chinese film. Lu warns that domestic box office sales are not reliable indicators of a film's popularity in mainland China, because of piracy and because of state or social group sponsorship; many workers were given free tickets to promote the film, and a 1999 report claimed that more tickets were purchased by the government than by individuals. The film was more popular than most government-promoted films touting the party line, and Lu claims that it had "tremendous social support", but Pang points out that its success was "not purely egalitarian, but partly constructed." At the time of Not One Less' release, DVD and VCD piracy was a growing concern in mainland China, and the China Copyright Office issued a notice forbidding unauthorized production or distribution of the film. This was the first time China had enacted special copyright protections for a domestic film. On 21 April 1999, Hubei province's Culture Office issued an "Urgent Notice for Immediate Confiscation of Pirated Not One Less VCDs", and two days later the Culture Office and the movie company joined forces to conduct raids on ten audio-video stores, seizing pirated discs from six of them. ### Awards Although it was withdrawn from Cannes, Not One Less went on to win the Golden Lion, the top award at the Venice Film Festival. Zhang also received a best director award at the Golden Rooster, mainland China's most prestigious award ceremony, and the film was voted one of the top three of the year in the Hundred Flowers Awards. Awards the film won or was nominated for are listed below.
8,848,608
Phosphatodraco
1,170,124,935
Late Cretaceous genus of pterosaur
[ "Azhdarchids", "Fossil taxa described in 2003", "Late Cretaceous pterosaurs of Africa" ]
Phosphatodraco is a genus of azhdarchid pterosaur that lived during the Late Cretaceous of what is now Morocco. In 2000, a pterosaur specimen consisting of five cervical (neck) vertebrae was discovered in the Ouled Abdoun Phosphatic Basin. The specimen was made the holotype of the new genus and species Phosphatodraco mauritanicus in 2003; the genus name means "dragon from the phosphates", and the specific name refers to the region of Mauretania. Phosphatodraco was the first Late Cretaceous pterosaur known from North Africa, and the second pterosaur genus described from Morocco. It is one of the only known azhdarchids preserving a relatively complete neck, and was one of the last known pterosaurs. Additional cervical vertebrae have since been assigned to the genus, and it has been suggested that fossils of the pterosaur Tethydraco represent wing elements of Phosphatodraco. Due to the fragmentary nature of the holotype cervical vertebrae, there has been controversy over their order. The describers considered them as cervicals (abbreviated as C) C5–C9 in the series, the first preserved vertebra (C5) being broken in two, but others consider them C3–C8, C3 and C4 being two different vertebrae. The interpretation followed has consequences for how Phosphatodraco is distinguished from other azhdarchids and how large it is thought to have been; the describers considered it to have had a wingspan of 5 m (16 ft); the alternate interpretation would lead to a 4 m (13 ft) wingspan. The complete neck may have been 865 mm (2 ft 10 in) long. Phosphatodraco is mainly distinguished by its C8 (or C7) vertebra being very elongated, 50% longer than the C5, and in having a prominent neural spine that is almost as tall as the centrum (the main part of the vertebra), truncated in a square shape at the top, and located far back. As an azhdarchid, it would have had a proportionally long neck, small body, and long limbs, compared to other pterosaurs. The closest relatives of Phosphatodraco appear to have been Aralazhdarcho and Eurazhdarcho. Azhdarchids have historically been considered skim-feeders that caught prey from water in coastal settings, but it has since been suggested that the context in which their fossils are found and their morphology – such as their long, stiffened necks (informed by for example the neck of Phosphatodraco) – is more consistent with them having foraged terrestrially like storks or ground hornbills, but this is still debated. Although pterosaurs were thought to have declined in diversity towards the time of their extinction 66 million years ago, the diversity in taxa, including Phosphatodraco, in the Ouled Abdoun Basin, which dates to the late Maastrichtian, right before the Cretaceous-Paleogene extinction event, indicates their extinction happened abruptly. ## Discovery During the late 1990s, remains of pterosaurs began to be discovered in different fossil localities of Morocco, all dating to the Cretaceous period. In 2000, pterosaur fossils were found by the Office Chérifien des Phosphates (OCP, located in Casablanca) during paleontological field work in the eastern part of the Ouled Abdoun Phosphatic Basin near the city of Khouribga in central Morocco. They were collected from "site 1" in the Sidi Daoui mine in the northern part of Grand Daoui, an area in which phosphate is quarried. The fossils were found in the upper part of the stratigraphic unit which miners called "couche III". 
These excavations were part of a collaboration between the OCP, the Ministère de L'Energie et des Mines, Rabat, and the French Centre National de la Recherche Scientifique, which had taken place since 1997. The pterosaur material, catalogued as specimen OCP DEK/GE 111, consists of five disarticulated but closely associated cervical (neck) vertebrae and an indeterminate bone, most likely belonging to a single individual. The vertebrae are crushed and damaged, and the surface of the bone is missing in some areas, with some infilling of phosphate sediments, and the fossils have therefore not been removed from the matrix. The block containing the bones is 98 cm (39 in) long and 34 cm (13 in) wide. During mechanical preparation of the specimen fossil remains of other animals were also found in association, including of several types of fish and mosasaurs. The specimen was made the holotype of the new genus and species Phosphatodraco mauritanicus by paleontologist Xabier Pereda-Suberbiola and colleagues in 2003. The genus name derives from the words phosphate and the Latin draco, meaning "dragon from the phosphates", and the specific name refers to the region of Mauretania where the fossils were found. The describers gave the etymology of Mauretania as Latin for North Africa; other sources specify it as an area stretching from Algeria to Morocco. Phosphatodraco was the first Late Cretaceous pterosaur known from North Africa (and thus the first known member of the family Azhdarchidae of this age from the region), and only the second pterosaur genus described from Morocco (the first being Siroccopteryx). At the time it was described, it was one of the only known azhdarchids preserving a relatively complete neck (the others being Zhejiangopterus and Quetzalcoatlus), and was one of the last known pterosaurs. Complete neck vertebral series are rare for azhdarchids, but such vertebrae are some of the most commonly found and best known remains of the group. In 2018 paleontologist Nicholas R. Longrich and colleagues reported pterosaur fossils collected from "couche III" in cooperation with the Moroccan fossil industry the previous three years; until that point, only the single specimen of Phosphatodraco was known from the assemblage. At the time, the collection was the largest and most diverse collection of pterosaurs from the Maastrichtian age of the Late Cretaceous, and included two cervical vertebrae they assigned to Phosphatodraco, based on similarity with the holotype in size and proportions. One of the vertebrae, specimen FSAC-OB 12, was identified as a C5 (though stated in the description to be similar to C6 of the holotype); the other, FSAC-OB 13, was identified only as a cervical vertebra. The cervical ribs (ribs of the neck vertebrae) of FSAC-OB 12 do not appear to have yet fused to the centrum (the main part of the vertebra), so the animal may not have been completely mature. These specimens are housed at Faculté des Sciences Aïn Chock in Casablanca. In 2020 paleontologists Claudio Labita and David M. Martill described an articulated (where the bones are connected as in life) pterosaur wing from "couche III" (specimen FSAC CP 251, bought from fossil dealers), which they assigned to Tethydraco, a genus also described by Longrich and colleagues in 2018 based on a humerus (upper arm bone). Tethydraco was originally considered a pteranodontid, but Labita and Martill concluded it was an azhdarchid, and that it possibly represented the wing elements of Phosphatodraco. 
They noted that more associated and articulated pterosaur fossils were being collected from these deposits due to improving methods used by fossil diggers, and that azhdarchid fossils were becoming abundant. They also cautioned that the provenance of some of the Moroccan fossils was difficult to establish, due to the commercial nature of their collection. In 2022, paleontologist Alexandra E. Fernandes and colleagues noted similarities between the humerus of Epapatelo and Tethydraco, and recovered Tethydraco as a pteranodontian. ### Interpretations of cervical (neck) vertebra order Pereda-Suberbiola and colleagues originally interpreted the five preserved cervical vertebrae of Phosphatodraco as cervicals C5–C9. The frontmost preserved vertebra they interpreted as C5 consisted of two fragments; they found it unlikely that these belonged to two different vertebrae, since they lay in continuity with no sediment in between, and overlapped each other in some areas. They considered the sideways expansion at the front of this vertebra to be due to crushing, and pointed out that such preservation where fragile, yet well-preserved bones are associated with damaged material of the same individual is known from other vertebrate fossils in the same level. They identified the frontmost vertebra as a C5 because this is usually the longest cervical vertebra in pterosaurs, their length decreasing hindward. In 2007 paleontologist Alexander W. A. Kellner and colleagues noted that Phosphatodraco was one of the most interesting azhdarchids found in Africa, but used cautious language about the original interpretation of the vertebrae. Kellner suggested in 2010 that interpretation of the holotype specimen had been affected by taphonomy (changes during decay and fossilisation); he instead proposed that the element originally described as a fifth cervical vertebra was actually the third and fourth vertebrae which had been compressed together, giving the impression that they were a single vertebra broken in the middle. This would shift the order of the vertebrae to C3–C8, and though this did not change the validity of the taxon (its distinctness being based on its stratigraphy, geography, and morphology), Kellner noted that the diagnosis (the suite of features that distinguish a taxon) and size estimate had to be reevaluated. The frontmost cervical vertebrae which are missing from Phosphatodraco in either scheme are the atlas (C1, which connects with the back of the skull) and the axis (C2). Subsequent articles from 2011 and 2015 with Kellner among the co-authorship have concurred with Kellner's interpretation. Paleontologist Alexander Averianov disagreed with Kellner's reinterpretation of the cervical vertebrae in 2014, and considered the original description accurate. A 2015 article by paleontologist Mátyás Vremir and colleagues called the issue "controversial" and considered the specimen too crushed for proper comparison, and Martill and Markus Moser concurred with this in 2018. Paleontologists Darren Naish and Mark P. Witton (the co-authors of Vremir's article) followed Kellner's interpretation in 2017. Paleontologist Rodrigo V. Pêgas and colleagues also followed Kellner's order in 2021. Though the palaeontologist Alexandru A. Solomon and colleagues noted the suggested change in interpretation of the holotype order in 2019, they stated that even if the reinterpretation was correct, the specimen was too damaged for comparison with the single known cervical vertebra of their new genus Albadraco. 
## Description In their 2003 description, Pereda-Suberbiola and colleagues estimated Phosphatodraco to have had a wingspan close to 5 m (16 ft), based on comparison with other azhdarchids with preserved cervical vertebrae, and referred to it as a "large azhdarchid pterosaur". This is larger than azhdarchids such as Zhejiangopterus and Montanazhdarcho, and comparable to the smaller species of Quetzalcoatlus, Q. lawsoni; the larger Q. northropi is thought to have reached 10–11 metres (33–36 ft), thereby being the largest known flying animal. Witton grouped Phosphatodraco with "midsized" azhdarchids based on this size estimate in 2013. In 2010, Kellner suggested this size estimate was too large, based on his reinterpretation of the neck vertebra order. Naish and Witton, who followed Kellner's interpretation, listed a neck length of 865 mm (2 ft 10 in) for Phosphatodraco in 2017, and a wingspan of 4 m (13 ft). There were two main types of azhdarchid skulls: very long, low skulls that were up to ten times longer than wide, and some that were much shorter than that, closer to those of other pterosaurs. Some had crests and some did not. Azhdarchids had necks that were proportionally longer than those of other pterosaurs, and their vertebral column and much of the rest of the skeleton were pneumatized (filled with air-sacs that lightened them). The body skeleton of azhdarchids was small but robust, and their upper arm bones were solidly built. Their wing-metacarpals (the hand bones that connect with the fingers) were proportionally the largest among pterosaurs, and the longest bones in the wings; their wing-fingers (which supported the wing-membranes) were relatively short. The pelvises were relatively robust, and the hindlimbs long. Combined, the long wing-metacarpals and legs made azhdarchids relatively taller when standing than other pterosaurs, though their feet were narrow and short. As a pterosaur, Phosphatodraco was covered with hair-like pycnofibers. ### Cervical vertebrae The suite of features that distinguish a taxon from other related taxa is called a diagnosis, and in the case of Phosphatodraco, these features are all found in the cervical vertebrae. Since Pereda-Suberbiola and colleagues considered the preserved vertebrae to be C5–C9 of the series in their description, that is the diagnosis and description followed here. Note that if Kellner's suggestion that the series actually represents vertebrae C3–C8 is correct, the diagnostic features listed by Pereda-Suberbiola and colleagues may possibly be inaccurate, and the following description would refer to different vertebrae in the series. Pereda-Suberbiola and colleagues found Phosphatodraco to differ from other azhdarchids in that the hindmost vertebra, C8 in their order, is very elongated, 50% longer than C5, and has a prominent neural spine (the spine that projects upwards from the vertebra) that is almost as tall as the centrum, truncated in a square shape at the top, and located far back. Phosphatodraco is also distinct in that the ratio between the maximum length of the vertebrae and the front width between prezygapophyses (processes at the sides of the centrum which connected with the postzygapophyses of the previous vertebrae) of the middle cervical vertebrae is about 4.3 in C5 and 4.1 in C6. The five preserved cervical vertebrae have hollow centra and their cortical bone (the outer, thick layer) is about 1 mm (0.039 in) thick. 
The vertebrae vary in length, the longest being the frontmost of those preserved, a C5 broken in two according to Pereda-Suberbiola and colleagues (C3–C4 according to Kellner), which they estimated to have been about 300 mm (0.98 ft) long when complete. The first of these fragments is 110 mm (4.3 in), the second is about 190 mm (7.5 in). When viewed from the side, the hind end of this vertebra shows a developed left postzygapophysis, and the convex articular condyle (the condyle that connected with the following vertebra) and left postexapophysial process (which connected with the preexapophys at the front of the preceding vertebra) is in front of it, all lying in the same plane due to crushing. The C6 (Kellner's C5) is the best-preserved of the vertebrae, and is shorter than the preceding C5, about 225 mm (8.9 in) long. It is distorted so that the front part is visible in lower side view, and the hind part is visible in left side view. Its centrum is procoelous (concave at the front surface), its prezygapophyses are horn-like, and nearly concave and parallel when seen from below. The right prezygapophys has a small tubercle (a rounded projection) at the midline, and there is no indication of additional processes at the front end of the vertebra or pneumatic foramina (holes) on the side surface of the centrum. The front cotyle (the concave front end of the centrum) is distorted, but it appears to be twice as wide as high, ovoid (or egg-shaped), and with a slightly concave upper margin. The lower margin has a prominent hypapophysis (a downwards projection), and this keel's height diminishes towards the centrum's mid-length. There is a longitudinal oval sulcus (a groove) on the right side of the lower surface, near the base of the prezygapophysis. The lower surface of the centrum is nearly flat, and the postexapophys is well-developed at the lower side of the condyle, like in the preceding vertebrae. The following C7 vertebra (Kellner's C6) is visible in bottom view, is missing the hind part, the preserved part being 190 mm (7.5 in) long, and the complete length perhaps being the same or shorter as the preceding C6. Its prezygapophyses are similar to those of the preceding vertebra, and the cotyle is oval and compressed from top to bottom. There is a sulcus below the left prezygapophys, apparently without a ridge extending hindward. The centrum bulges slightly hindward, becoming narrower at its mid-length. The postzygapophyses are well-developed, and diverge widely from the longitudinal midline of the centrum. A small protuberance between the postzygapophyses perhaps indicates where the upper margin of the neural canal (through which the spinal cord passed) was located, though its features cannot be accurately determined. The next to last vertebra is C8 (Kellner's C7) which is visible in side view, and despite missing some of the front, the centrum is very elongated, measuring 150 mm (5.9 in). The most notable feature of this vertebra is its tall neural spine, which is placed at its hindmost part. The neural spine is 40 mm (1.6 in) high measured from the upper surface of the postzygapophys to the top, almost the same height as the centrum, which is 45 mm (1.8 in). The front and hind margins of the neural spine are vertically parallel to each other, and its top is truncated in a square shape, and perpendicular to the side edges. The left postzygapophys is located at the base of the hind end of the neural arch (which forms the arch of bone through which the spinal cord passed). 
It is similar to the same vertebra of Quetzalcoatlus in that the neural spine is square on top, but differs in being placed so far back. The left postexapophysial process is well-developed at the back and below, but does not extend past the condyle as it does in the preceding vertebrae. The last vertebra is the C9 according to Pereda-Suberbiola and colleagues (Kellner's C8), which is visible in hind view, and its preserved part is 75 mm (3.0 in) high. Its neural arch has a large neural spine and well-developed transverse processes (which projected from the sides of the centrum and acted as attachment points for muscles and ligaments). The neural spine terminates in a blunt process above, and the hind side of the neural spine has an oval depression, which has thick vertical edges at its sides. The transverse processes are long and slender, and project to the sides and slightly downwards. The neural canal is small and nearly circular, is about 22 mm (0.87 in) in diameter, and there are no pneumatic foramina near it. Its condyle is broad, around five times wider than high, and is crescent-shaped in cross-section. The left postexapophysis is placed at the side of the condyle and almost vertical. Though none of the vertebrae preserve cervical ribs, the development of the transverse processes of the last vertebra indicates that it probably had ribs. The indeterminate bone fragment associated with the two last vertebrae has a similar texture to them, is flat and crescent-shaped, and is about 9 mm (0.35 in) wide and 44 mm (1.7 in) long. Pereda-Suberbiola and colleagues found that the frontmost preserved cervical vertebrae of Phosphatodraco (their C5–C7) were similar in form to the mid-series cervical vertebrae of other long-necked pterodactyloid pterosaurs (the group consisting of short-tailed pterosaurs). The two hindmost preserved vertebrae (their C8–C9) have some features in common with the rest of the vertebrae, including broad, ovoid cotyles and condyles, as well as postexapophyses, but they differ in their neural canal being demarcated from the centrum and in having a prominent neural spine. Pereda-Suberbiola and colleagues suggested that the C8–C9 could be cervicalized dorsal vertebrae, back vertebrae which have been incorporated into the neck. The total number of cervical vertebrae in pterosaurs varies between seven and nine, and the first dorsal vertebra is considered to be the first one that connects with the sternum (breast bone). Early pterosaurs like Rhamphorhynchus had eight cervical vertebrae with cervical ribs on at least C3–C8; early pterodactyloids had seven vertebrae and no ribs. In later pterodactyloid groups, nine cervical vertebrae are present, two of them being cervicalized dorsals, and adults have a notarium (a structure consisting of fused vertebrae in the shoulder region, also seen in birds). ## Classification In their 2003 description, Pereda-Suberbiola and colleagues considered Phosphatodraco a member of Azhdarchidae based on features such as its mid-series cervical vertebrae being elongated, with low vestigial (almost evolutionarily lost) or absent neural spines, the presence of prezygapophyseal tubercles, a pair of lower sulci near the prezygapophyses, and the lack of oval pneumatic foramina on the lower surfaces of the centra. These features are especially similar to those of Quetzalcoatlus and Azhdarcho. Other features distinguishing the group could not be identified in Phosphatodraco due to the preservation of its fossils. 
Longrich and colleagues performed a phylogenetic analysis that included Moroccan pterosaurs in 2018, and found Phosphatodraco to be an azhdarchid, and the sister taxon of Aralazhdarcho from Kazakhstan. A 2021 analysis by Pêgas and colleagues also found these two genera to be sister taxa, joined in a clade by Eurazhdarcho from Romania. This clade was supported by one clear synapomorphy (a derived feature shared by members of a clade): the side margin of the mid-cervical vertebrae is straight when viewed from above and below, with almost parallel sides. These researchers noted that previous studies had defined Azhdarchidae as a node-based clade with Azhdarcho and Quetzalcoatlus as internal specifiers, but cautioned that in their new phylogeny, Phosphatodraco, Zhejiangopterus, and Eurazhdarcho would fall outside the group. They found this undesirable, as those genera had otherwise consistently been considered azhdarchids, and suggested that, for stability's sake, Phosphatodraco should be added as a third internal specifier for the group, since this would result in all these taxa being included. In 2021 paleontologist Brian Andres and colleagues also found Phosphatodraco and Aralazhdarcho to be sister taxa, supported by the reduction of pneumatic foramina on the side of the neural canal. This clade was recovered as part of the azhdarchid subclade Quetzalcoatlinae. The cladogram below shows the placement of Phosphatodraco within Azhdarchiformes according to Andres and colleagues, 2021: ## Paleobiology ### Feeding and ecological niche In 2008 Witton and Naish pointed out that although azhdarchids have historically been considered to have been scavengers, probers of sediment, swimmers, waders, aerial predators, or stork-like generalists, most researchers until that point had considered them to have been skim-feeders living in coastal settings, which fed by trawling their lower jaws through water while flying and catching prey from the surface (like skimmers and some terns). In general, pterosaurs have historically been considered marine piscivores (fish-eaters), and despite their unusual anatomy, azhdarchids have been assumed to have occupied the same ecological niche. Witton and Naish noted that this mode of feeding lacked support from azhdarchid anatomy and functional morphology; azhdarchids lacked cranial features such as sideways compressed lower jaws and the shock-absorbing adaptations required, and their jaws instead appear to have been almost triangular in cross-section, unlike those of skim-feeders and probers. Witton and Naish instead stated that azhdarchids probably inhabited inland environments, based on the taphonomic contexts their fossils have been found in (more than half the fossils surveyed were from, for example, fluvial or alluvial deposits, and most of the marine occurrences also had fossils of terrestrial lifeforms), and on their morphology, which made them ill-suited for lifestyles other than wading and foraging terrestrially, though their relatively small, slender, padded feet were not well suited for wading either. These researchers instead argued that azhdarchids were similar to storks or ground hornbills, generalists they termed "terrestrial stalkers" that foraged in different kinds of environments for small animals and carrion, supported by their apparent proficiency on the ground and relatively inflexible necks, with the well-preserved neck of Phosphatodraco, for example, providing information about their morphology. 
Witton and Naish suggested that their more generalist lifestyle could explain the group's resilience compared to other pterosaur lineages, which were not thought to have survived until the late Maastrichtian like the azhdarchids did (pterosaurs went extinct along with the non-bird dinosaurs during the Cretaceous-Paleogene extinction event 66 million years ago). Witton elaborated in a 2013 book that the proportions of azhdarchids would have been consistent with them striding through vegetated areas with their long limbs, and their downturned skull and jaws reaching the ground. Their long, stiffened necks would be an advantage as it would help lowering and raising the head and give it a vantage point when searching for prey, and enable them to grab small animals and fruit. In their 2021 study that reinterpreted Tethydraco as an azhdarchid, and possibly the same as Phosphatodraco, Labita and Martill noted that azhdarchids might have been less terrestrial than suggested by Witton and Naish, since the Moroccan fossils were from marine strata, as was Arambourgiania from the phosphates of Jordan. They noted that no azhdarchids had been found in truly terrestrial strata, and proposed they could instead have been associated with aquatic environments, such as rivers, lakes, marine and off-shore settings. Pterosaurs are generally thought to have gone gradually extinct by decreasing in diversity towards the end of the Cretaceous, but Longrich and colleagues suggested this impression could be a result of the poor fossil records for pterosaurs (the Signor-Lipps Effect). Since they found multiple lineages (Pteranodontidae, Nyctosauridae, and Azhdarchidae, the latter including Phosphatodraco and others) to have co-existed during the late Maastrichtian of Morocco, this is the most diverse pterosaur assemblage known from the Late Cretaceous. Pterosaurs during this time thereby had increased niche-partitioning compared to earlier faunas from the Santonian and Campanian ages, and they were able to outcompete birds in large size based niches, and birds therefore remained small, not exceeding 2 m (6.6 ft) wingspans during the Late Cretaceous (most pterosaurs during this time had larger wingspans, and thereby avoided the small-size niche). To these researchers, this indicated that the extinction of pterosaurs was abrupt instead of gradual, caused by the catastrophic Chicxulub impact. Their extinction freed up more niches that were then filled by birds, which led to their evolutionary radiation in the Early Cenozoic. ### Locomotion Witton summarized ideas about azhdarchid flight abilities in 2013, and noted they had generally been considered adapted for soaring, although some have found it possible their musculature allowed flapping flight like in swans and geese. Their short and potentially broad wings may have been suited for flying in terrestrial environments, as this is similar to some large, terrestrially soaring birds. Albatross-like soaring has also been suggested, but Witton thought this unlikely due to the supposed terrestrial bias of their fossils and adaptations for foraging on the ground. Studies of azhdarchid flight abilities indicate they would have been able to fly for long and probably fast (especially if they had an adequate amount of fat and muscle as nourishment), so that geographical barriers would not present obstacles. Azhdarchids are also the only group of pterosaurs to which trackways have been assigned, such as Haenamichnus from Korea, which matches this group in shape, age, and size. 
One long trackway of this kind shows that azhdarchids walked with their limbs held directly underneath their body, and along with the morphology of their feet indicates they were more proficient on the ground than other pterosaurs. On the other hand, according to Witton, their proportions indicate they were not good swimmers, and though they could probably launch from water, they were not as good at this as some other pterosaur groups. ## Paleoenvironment Phosphatodraco is known from the "couche III" phosphatic unit of the Ouled Abdoun Basin in Morocco, which was deposited during the late Maastrichtian age of the Late Cretaceous period, which ended 66 million years ago. The phosphatic series is condensed and the Maastrichtian part is only 3–5 cm (1.2–2.0 in) thick. From the bottom to the top, "couche III" consists of thin phosphatic levels and marls, a grey limestone bed containing fish fossils, yellow, soft phosphates at the lower level, a thick, yellow marly level separating the lower and upper "couche III", and gray, soft phosphates with brown stripes overlain by a thick marl level at upper "couche III". Pterosaur fossils are found in the lower part of the upper phosphatic unit. The kinds of fossils that are usually used in biostratigraphy are rare, which complicates attempts at dating these beds, but "couche III" has been correlated with the late Maastrichtian on the basis of shark teeth, which has also been confirmed by carbon and oxygen isotope stratigraphy. The phosphates were deposited in an embayment of the eastern Atlantic Ocean that flooded North Africa during the Late Cretaceous and Early Paleogene. The area would have been part of the Tethys Sea at the time. The phosphatic matrix of the original Phosphatodraco specimen is gray and mottled with orange, and contained fossils, including those of the fish Serratolamna, Rhombodus, and Enchodus, and the mosasaur Prognathodon, as well as small nodules. The specimen was found close to skeletal remains of an indeterminate mosasaur, and remains of sharks, rays, fish such as Stratodus, mosasaurs such as Platecarpus, Mosasaurus, and Halisaurus, indeterminate elasmosaurid plesiosaurs, and indeterminate bothremydid turtles have also been found at the site. This assemblage of animals suggests the sediments were deposited in a marine environment. Other contemporary pterosaurs from the Ouled Abdoun Basin include the pteranodontid Tethydraco (if it is not the same animal as Phosphatodraco), the nyctosaurids Alcione, Simurghia, and Barbaridactylus, a small azhdarchid similar to Quetzalcoatlus, and a very large azhdarchid which may be Arambourgiania. Dinosaurs are rare there, but the abelisaurid Chenanisaurus, the hadrosaur Ajnabia, and sauropod fossils are also known. Longrich and colleagues suggested in 2018 that, although the fauna was overwhelmingly marine, the presence of terrestrial dinosaurs and azhdarchids indicates the coast was nearby. ## See also - List of pterosaur genera - Timeline of pterosaur research - Pterosaur size
67,364,158
The Holocaust in Greece
1,158,031,177
Systematic dispossession, deportation, and murder of Jews in Greece
[ "Antisemitism in Greece", "Germany–Greece relations", "Greece in World War II", "Jewish Greek history", "Mass murder in Greece", "Murder in Greece", "The Holocaust by country", "The Holocaust in Greece" ]
The Holocaust in Greece was the mass murder of Greek Jews, mostly as a result of their deportation to Auschwitz concentration camp, during World War II. By 1945, between 83 and 87 percent of Greek Jews had been murdered, one of the highest proportions in Europe. Before the war, approximately 72,000 to 77,000 Jews lived in 27 communities in Greece. The majority, about 50,000, lived in Salonica (Thessaloniki), a former Ottoman city captured and annexed by Greece in 1912. Most Greek Jews were Judeo-Spanish-speaking Sephardim (Jews originating on the Iberian peninsula) with some being Greek-speaking Romaniotes (an ancient Jewish community native to Greece). Germany, Italy, and Bulgaria invaded and occupied Greece in April 1941. During the first year of the occupation, the authorities did not enact any systematic measures that targeted Jews per se. In March 1943, just over 4,000 Jews were deported from the Bulgarian occupation zone to Treblinka extermination camp. From 15 March through August, almost all of Salonica's Jews, along with those of neighboring communities in the German occupation zone, were deported to Auschwitz concentration camp. After the Italian armistice in September 1943, Germany took over the Italian occupation zone, whose rulers had until then opposed the deportation of Jews. In March 1944, Athens, Ioannina, and other places in the former Italian occupation zone witnessed the roundup and deportation of their Jewish communities. In mid-1944, Jews living in the Greek islands were targeted. Around 10,000 Jews survived the Holocaust either by going into hiding, fighting with the Greek resistance, or surviving their deportation. Following World War II, surviving Jews faced obstacles regaining their property from non-Jews who had taken it over during the war. About half emigrated to Israel and other countries in the first decade after the war. The Holocaust was long overshadowed by other events during the wartime occupation, but gained additional prominence in the twenty-first century. ## Background The Greek-speaking Romaniotes are the oldest Jewish community in Europe, dating back possibly as far as the sixth century BCE. Many Judeo-Spanish-speaking Sephardim settled in the Ottoman Empire, including areas that are now Greece, after their expulsion from Spain and Portugal at the end of the fifteenth century. Numerically and culturally, they came to dominate the earlier Romaniote community. The prewar Jewish communities of southern, western, and northern Greece each had a different history: - Because of suspicion that they opposed the Greek insurgents, many Jews of the Peloponnese and Central Greece were massacred during the Greek War of Independence in the 1820s, while others fled to the Ottoman Empire. The newly independent Greek state established the Eastern Orthodox Church of Greece as the state religion shared by almost all inhabitants. Very few Jews remained in independent Greece, the largest community comprising fifty Romaniote families at Chalcis. After the establishment of the monarchy following independence, small numbers of Ashkenazim (Jews from Central Europe) as well as Sephardim from the Ottoman Empire settled in Athens, many in the service of the new king, Otto of Bavaria. They became well integrated into social and political life, considering themselves Greeks of the Jewish faith. - Western Greece, especially Epirus, was home to a community of Romaniotes who settled along the area's trading routes, especially the Via Egnatia, during the early centuries CE. 
Emigration in the nineteenth and early twentieth century of the Jewish community of Ioannina left it with a few thousand Jews. Western Greece remained under Ottoman rule until the Balkan Wars in 1912–1913, when it was captured by Greece. - Forced resettlement in Constantinople in 1455 by Sultan Mehmet II almost erased the Romaniote communities of Thrace, Macedonia, and Central Greece. At the end of the fifteenth century, the Ottoman Empire allowed Sephardim to resettle on the Aegean coast from Larissa west; Ashkenazi migrants joined them later, but the Sephardim remained dominant. Prior to World War II, about 50,000 Jews lived in Salonica (Thessaloniki), a center of Sephardic learning that historically held a Jewish majority and was termed the "Jerusalem of the Balkans". The city was heavily Hellenized as a result of the Great Fire of 1917, but the Jewish demographic plurality persisted until many Greek refugees from Eastern Thrace and Anatolia arrived in 1922. - The Greek islands, especially Corfu, Rhodes, and Crete, were home to both Sephardic and Romaniote communities that had spent many years under Venetian rule or influence such that many Jews from these islands spoke Italian. Before the Balkan Wars, no more than 10,000 Jews lived in Greece; this number would increase eightfold as a result of territorial acquisitions. Jews occasionally faced antisemitic violence such as the 1891 riots in Corfu and the 1931 Campbell pogrom [el], carried out by the National Union of Greece (EEE) in a suburb of Salonica. As a result of economic decline, many Jews left Greece after World War I. At first, wealthy merchants left for Europe, Latin America, and the United States. In the 1930s, many poorer Jews emigrated from Salonica to Mandatory Palestine. Under heavy pressure to Hellenize, Jews in Salonica gradually assimilated into the Greek majority and some young Jews even acquired Greek as their first language. Historian Steven Bowman states that while the physical destruction of Greek Jews took place from 1943 to 1945, "an economic, social, and political assault predated the vicissitudes of World War II". The political fragmentation of Salonican Jews into opposing factions of conservative assimilationists, Zionists, and Communists hampered its ability to cope. In 1936, the Metaxas dictatorship overthrew unstable parliamentary politics. Upon the outbreak of World War II, some 72,000 to 77,000 Jews lived in 27 communities in Greece—the majority of them in Salonica. ## Axis occupation Early in the morning of 28 October 1940, Italy gave an ultimatum to dictator Ioannis Metaxas: if he did not allow Italian troops to occupy Greece, Italy would declare war. Metaxas refused and Italy immediately invaded Greece. The Jewish community reported that 12,898 Jews fought for Greece in the war; 613 died and 3,743 were wounded, most famously Colonel Mordechai Frizis. During the winter of 1940–1941, Italians and Greeks fought in Albania, but in April 1941, Germany joined the war and occupied all of mainland Greece by the end of the month and Crete in May. A group of generals announced a new government with German backing on 26 April, while the royal family was evacuated to Crete and then to Cairo, where the Greek government-in-exile was established. After a month, all Greek prisoners of war were released, including all Jewish soldiers. In mid-1941, Greece was partitioned into three occupation zones. 
The Germans occupied strategically important areas: Macedonia including Salonica, the harbor of Piraeus, most of Crete and some of the Aegean islands, and allowed the Italians to take almost all the Greek mainland and many islands. Bulgaria occupied Western Thrace and eastern Macedonia, where it immediately undertook a harsh Bulgarianization program, sending more than 100,000 Greek refugees westward. The collaborationist Greek government began to see Bulgaria as the main threat and did all it could to secure German support for limiting the size of the Bulgarian occupation zone. However, in June 1943, parts of eastern Macedonia switched from German to Bulgarian control. ## Anti-Jewish persecution Immediately after the occupation, German police units made arrests based on lists of individuals deemed subversive, including Greek Jewish intellectuals and the entire Salonica Jewish community council. The Reichsleiter Rosenberg Taskforce surveyed Jewish assets a week after the occupation. To curry favor with the Germans, collaborationist prime minister Georgios Tsolakoglou announced that there was a "Jewish problem" in Greece—the term was not a part of prewar discourse—adding, "this question will be definitively solved within the framework of the whole New Order in Europe". Confiscation of all kinds of property from both Jews and non-Jews was undertaken on a massive scale; wealthy Jews were arrested and their businesses expropriated. During the first year of occupation, Jews shared in the same hardships as other Greeks, including the 1941 Greek famine and hyperinflation. Black market activity was widespread despite being punishable by immediate execution. The famine disproportionately affected Greek Jews as many were members of the urban proletariat and lacked connections to the countryside. In Salonica, German occupation forces tried to exacerbate the divisions between Greek Jews and the Christian population, encouraging newspapers to print antisemitic material and reviving the EEE, which Metaxas had banned. In the Bulgarian occupation zone, hundreds of Thracian Jews were forced into Bulgarian labor battalions, thus escaping famine and the deportation of Thracian Jews in 1943. In Macedonia, all recently arrived Jews, mostly a few hundred refugees from Yugoslavia, were required to register with the police in November 1941. A handful were immediately placed in German custody, deported, and executed. Greek collaborators provided the names of alleged Communists to the German authorities, who held them as hostages and shot them in reprisal for resistance activities. Jews were overrepresented among these victims. In the second half of 1941, Jewish property in Salonica was confiscated on a large scale to rehouse Christians whose residences had been destroyed by bombing, or who had fled the Bulgarian occupation zone. In February 1942, the collaborationist government acceded to German demands and fired high-ranking official Georgios D. Daskalakis [el] because of his alleged Jewish ancestry. Soon after it agreed to ban all Jews from leaving the country at German request. On 11 July 1942, 9,000 Jewish men were rounded up for registration in Eleftherias Square in Salonika, in a joint operation by Germany and the Greek collaborationist government. The assembled Jews were publicly humiliated and forced to perform exercises. After this registration, as many as 3,500 Jewish men were drafted into labor battalions by Organization Todt, a Nazi civil and military engineering organization. 
Greek gendarmes guarded the forced laborers as they were transferred to work sites and former Greek military officers oversaw the work projects. Conditions were so harsh that hundreds of Jews died. Some escaped, but the Germans shot others in retaliation. Neither the Greek authorities nor the Orthodox Church made any protest. As a ransom for the laborers, the Jewish community paid two billion drachmas and gave up the extensive Jewish cemetery of Salonica, which the city administration had been trying to obtain for years. The municipality of Salonica destroyed the cemetery beginning in December 1942, and the city and the Greek Orthodox Church used many of the tombstones for construction. By the end of 1942, more than a thousand Jews had fled from Salonica to Athens—mostly the wealthy, as the journey cost 150,000 drachmas (£300). ## Deportation More than 2,000 Greek Jews were deported in late 1942 to Auschwitz concentration camp during the Holocaust in France. Historian Christopher Browning argues that German dictator Adolf Hitler ordered the deportation of Salonica's Jews on 2 November 1941, citing a passage in Gerhard Engel's diary stating that Hitler "demands that the Jewish elements be removed from Salonika". Salonica's chief rabbi, Zvi Koretz, was interned in Vienna from May 1941 to January 1942—a year before the deportation process began in Salonica. Building defenses for a possible Allied attack in the northern Aegean coincided with preparations for the deportation of Salonica's Jews and the deployment of German advisor Theodor Dannecker to Bulgaria, to ensure that Western Thrace was also cleared. Hitler believed that Jewish populations would hamper the Axis defenses in the event of invasion. According to historian Andrew Apostolou, the collaborationist Greek leadership continued to cooperate with the Germans to ward off Bulgarian aspirations for the permanent annexation of Western Thrace and Macedonia, while creating exonerating evidence in case the Allies won. Both the collaborationist administration and postwar governments used the war as an opportunity to Hellenize northern Greece, for example by the expulsion of Cham Albanians and the displacement of many ethnic Macedonians. This same area, from Corfu to the Turkish border, was most deadly for Jews during the Holocaust. Overall, 60,000 Jews were deported from Greece to Auschwitz; around 12,750 were spared from immediate gassing and no more than 2,000 returned home after the war. Jews were not necessarily aware of the fate awaiting them, and some expected to be put to forced labor in Poland. The trains were packed so tightly that there was no space to sit down, and the journey took three weeks. As many as 50 percent died en route, some went mad, and most were unable to stand upon arriving at Auschwitz. Following the deportation, almost all Jewish-owned property was sold by the authorities, privately looted by Greeks, or nationalized by the Greek government. Almost everywhere, Christians went into Jewish districts to loot immediately after they were vacated. ### Thrace (March 1943) Before dawn on 4 March 1943, 4,058 of the 4,273 Jews in Bulgarian-occupied Macedonia and Western Thrace (Belomorie) were arrested. This roundup was planned on 22 February, and entailed the Bulgarian Army sealing off neighborhoods so that the police could conduct arrests based on lists of names and addresses. 
The Jews were then transferred to camps in Gorna Džumaja and Dupnica, held there for a few weeks, and then deported to Treblinka extermination camp via the Danube. In less than a month, 97 percent of the Jews in the Bulgarian occupation zone were murdered; none of those deported survived. Dannecker reported the deportation "was carried out without any particular reaction from local people". Bulgarian authorities saw the removal of non-Bulgarian ethnic groups, including Jews and Greeks, as a necessary step in making room for Bulgarian settlers. ### Salonica (March–August 1943) Preparation for the deportation of Salonica's Jews began in January 1943. A German official, Günther Altenburg, notified the prime minister of the collaborationist government, Konstantinos Logothetopoulos, on 26 January, but there is no record of him taking action to prevent the deportations, except two letters of protest written after they had already begun. Despite the letters, the collaborationist government continued to cooperate with the deportation. The Italian occupation authorities and Consul Guelfo Zamboni vigorously protested, issued Italian citizenship to Greek Jews, and arranged travel to Athens for hundreds of Jews with Italian or foreign citizenship. Spanish officials in the region also attempted to stop the deportations. On 6 February, the SS group tasked with the deportation arrived in the city and set up headquarters at 42 Velissariou Street in a confiscated Jewish villa. Its leaders, Alois Brunner and Dieter Wisliceny, stayed on the first floor while wealthy Jews were tortured in the basement. They had arrived with a series of anti-Jewish decrees intended to establish the Nuremberg laws and issued the first decree, requiring Jews without foreign citizenship to wear the yellow star, the same day. The Nazis set up the Baron Hirsch ghetto next to the train station, enclosed in barbed wire on 4 March. Regular Greek policemen guarded the ghetto while internal order was the responsibility of a Jewish police force. The first Jews transferred there were fifteen Jewish families from Langadas, but as many as 2,500 Jews occupied the area at a time. Some Jews escaped to the mountains and joined resistance groups or fled to Athens, but most could not. To prevent escapes, twenty-five Jewish hostages were held and a curfew was imposed. German authorities tried to convince the Salonican Jews to cooperate by telling them that they would be resettled in Poland, giving them Polish money and allowing them to take some minor possessions when they left. The first transport from Salonica left on 15 March 1943. Most Jews were deported by mid-June, but the last of the transports departed on 10 August, carrying 1,800 Salonican Jewish men who had been engaged in forced labor projects. Altogether about 45,200 Jews were deported from Salonica to Auschwitz and another 1,700 from five other communities in the German occupation zone who were deported via Salonica: Florina and Veria in western Macedonia and Soufli, Nea Orestiada, and Didymoteicho in the strip along the Turkish border. Around 600 Jews, mostly Spanish citizens and members of the Jewish Council, were deported instead to Bergen-Belsen concentration camp. Overall, 96 percent of Jews from Salonica were murdered. 
Following the closure of all Jewish businesses on 6 March, it was discovered that 500 of 1,700 Jewish merchant agencies were involved in foreign commerce and their shutdown would cause commercial loss to German firms, leading to a decision to continue to operate the businesses under new ownership. At the end of May, a Greek government agency called the Service for the Custody of Jewish Property was set up to oversee the property of deported Jews. Greeks expelled from Bulgarian-occupied areas were allowed to live in some of the formerly Jewish housing (11,000 apartments were confiscated from Jews) while many Germans and Greeks became wealthy from the proceeds of expropriated assets. The total value of Jewish-owned property, according to declarations, was about 11 billion drachmas (approximately £11 million), a significant part of which was transferred to the Greek state. Despite anti-looting orders from the German occupiers, many Jewish-owned houses were torn up by Greek Christians looking for hidden gold coins. Gold confiscated from Jews was used to ward off inflation and had a significant impact on the Greek economy. Historian Kostis Kornetis states, "the elimination of Jews from [Salonica]'s economic life was eventually welcomed by both elites and the general public".

### Passover roundup (March 1944)

In September 1943, Germany occupied the Italian occupation zone following the Armistice of Cassibile. The remaining fifteen Jewish communities had fewer than 2,000 people and were near ports or major roads. Jürgen Stroop was appointed Higher SS and Police Leader of occupied Greece, partly to facilitate the deportation of Athenian Jews. Stroop ordered the chief rabbi of Athens, Elias Barzilai, to produce a list of Jews. Barzilai said that the community register had been destroyed during a raid by the collaborationist Hellenic Socialist Patriotic Organization (EPSO) the previous year. Stroop ordered him to make a new list. Instead, Barzilai warned Jews to flee and absconded with the help of the left-wing National Liberation Front (EAM) resistance group. Barzilai negotiated a deal with EAM; in exchange for sheltering Jews in rebel-controlled areas, he paid the Jewish community's entire cash reserve. On 4 October, Stroop instituted a curfew for Jews and ordered them to register at the synagogue. Despite the threat of the death penalty for Jews failing to register and any Christian helpers, only 200 registered, while many others followed Barzilai's example and fled. Without sufficient troops, and faced with the opposition of the collaborationist Greek government headed by Ioannis Rallis, the Nazis had to put off deportation operations until the following year. Under pressure, Rallis passed laws for the confiscation of property owned by Jews. While wealthy and middle-class Jews were able to go into hiding, those who registered with the authorities came from the lower classes of society and lacked the financial resources to escape. Over the next six months, additional Jews were lured out of hiding as their resources were exhausted. The delay in implementing deportation led to complacency among some Jews. In some places, Jews did not take the opportunity to escape because of a lack of awareness of the threat, failure of Jewish leadership, negative attitudes to the resistance, and reluctance to leave family members behind. In January 1944, Adolf Eichmann replaced Wisliceny with Anton Burger, tasked with deporting Greece's Jews as quickly as possible.
In March 1944, the Jewish holiday Passover was used as a cover for coordinated roundups around Greece carried out by the Geheime Feldpolizei (German military police) and Greek gendarmerie. On 23 March, unleavened bread was distributed at a synagogue in Athens—the 300 Jews who had tried to collect the bread were arrested, and others hunted down later that day based on registration lists. The Greek police generally refused to arrest any Jews not on the list, sparing the lives of a number of young children. At the end of the day, the 2,000 Jews caught were imprisoned at Haidari concentration camp outside the city. On 24 March, Jews from all the remaining communities in mainland Greece were arrested, including Patras, Chalcis, Ioannina, Arta, Preveza, Larissa, Volos, and Kastoria. Most of the Jews in Ioannina and Kastoria were arrested, with higher percentages escaping elsewhere. On 2 April, a train departed from Athens, adding additional Jews during its journey north. Nearly five thousand Jews were deported from Greece, arriving in Auschwitz nine days later. ### Deportation from Greek islands (June–August 1944) After the Passover roundup, the Nazis focused on the Jewish communities of the Greek islands. The entire Cretan Jewish community, 314 people in Chania and 26 in Heraklion, were rounded up on 20 May and departed the harbor on Souda Bay on 7 June. All were killed in the sinking of the SS Tanais by a British submarine. After the 1943 armistice, the Italian garrison of Corfu refused to surrender, and Germany forcibly occupied the island following battles that left the Jewish quarter in ruins. Despite warnings from the Italian soldiers, the Jews did not go into hiding in the mountains. On 8 June, the Jews of Corfu were rounded up and deported by ship and rail to Haidari. The Mayor of Corfu stated, "Our good friends the Germans have cleansed the island from the Jewish riffraff"—the only case where a Greek official publicly approved of the deportation of Jews. The Corfu Jews were deported from Haidari to Poland on 21 June. The Dodecanese islands were part of Italy before the war. In late 1943, the British briefly occupied Kos and evacuated thousands of Greek Christians, but not the island's Jews. On 23 July 1944, 1,661 Jews from Rhodes were forced to board a boat that took them to Piraeus. In Leros the boat stopped to load another 94 Jews from Kos. Together with around 700 to 900 Jews captured in and around Athens, they were deported to Auschwitz on 3 August, arriving on 16 August. Only 157 (nine percent) of the Jews from Rhodes and Kos returned. This operation, the last deportation during the Holocaust in Greece, was carried out two months before the end of the Axis occupation. The few Jews who were hiding on smaller islands were left alone. ## Evasion and resistance Regional survival rates varied greatly because of a variety of factors, such as timing of deportations, the attitude of the local authorities, and the degree of integration of Jewish communities. According to Greek Holocaust survivor Michael Matsas, the decisive factors influencing survival rates were the strength of resistance organizations and the reaction of the Jewish leadership. After the deportation of the Jews of Salonica and the end of the Italian occupation zone, thousands of Jews in other parts of Greece joined the resistance or went into hiding. In many parts of Thessaly, Central Greece (including Athens), and the Peloponnese, Holocaust deaths were relatively low. 
The activities of the left-wing resistance in Thessaly are credited with the higher survival rate there. Some smaller Jewish communities, including those of Karditsa and Agrinio (around 80 people each), completely escaped to the mountain villages controlled by EAM's Greek People's Liberation Army (ELAS); 55 Jews from Veria were hidden in the nearby village of Sykia for fifteen to seventeen months. At least two thirds of the Jews living in Athens and Larissa before the war survived. Archbishop Damaskinos, the head of the Church of Greece, issued strongly worded protests against the mistreatment of Greek Jews and issued many false baptismal certificates. He was the only leader of a major European church to condemn the Holocaust. The chief of police in Athens, Angelos Evert, saved hundreds of Jews by issuing false papers. The 275 Jews of Zakynthos were entirely spared because the Austrian garrison commander (from the 999th Light Afrika Division) did not execute the deportation order following protests by the local mayor and the Orthodox Christian prelate, who turned over their own names when ordered to submit a list of Jews. Historian Giorgos Antoniou states that, "the line between selfless and selfish assistance is more often than not hard to distinguish", and robbery of Jews in hiding was "not rare". Unlike in other countries, Greek rabbis encouraged Jews to accept false baptismal certificates. Many Jews in hiding converted to Christianity and did not necessarily return to Judaism after the war. The Greek resistance readily accepted Jewish volunteers into its ranks; at least 650 Jewish resistance fighters are known by name, and there may have been as many as 2,000. Jews mostly fought in ELAS but there were also some in the rival resistance organizations EDES (National Republican Greek League) and National and Social Liberation (EKKA). Unlike the other resistance organizations, EAM publicly appealed to Greeks to help their Jewish fellow citizens, and actively recruited young Jews to join ELAS. Thousands of Jews, perhaps as many as 8,000, received assistance from EAM/ELAS. In some cases, EAM refused to help Jews if it did not receive payment. Greek smugglers charged Jews 300 Palestine pounds per boat, carrying around two dozen Jews, to take them to Çeşme in Turkey via Euboea, but later ELAS and the Haganah negotiated a price of one gold piece per Jew. By June 1944, 850 Jews had escaped to Çeşme. ## Aftermath Axis occupation forces withdrew from all of mainland Greece by November 1944. About 10,000 Greek Jews survived the Holocaust, representing a death rate of 83 to 87 percent. This was the highest Holocaust death rate in the Balkans and among the highest in Europe. The survivors were sharply divided between the camp survivors and the larger number who survived in Greece or returned from abroad. About half those who returned from the concentration camps only stayed briefly in Greece before emigrating while others remained abroad. The Greek foreign ministry attempted to delay or prevent their return to Greece. In Salonica, Jewish camp survivors were often called "unused cakes of soap" by other Greeks. Almost everyone had lost family members. The disintegration of families as well as unavailability of religious professionals made it almost impossible to maintain traditional Jewish religious observance. 
In November 1944, the returning Greek government-in-exile annulled the law confiscating Jewish property and passed the first measure in Europe for the return of this property to its Jewish owners or their heirs, and of heirless property to Jewish organizations. However, this law was not applied in practice. Lacking any property or place to live and not helped by local authorities, Jews found themselves sleeping in improvised shelters in conditions that were compared to the Nazi concentration camps. Most Jews found it difficult or impossible to regain properties taken over by non-Jews during the war. In Salonica, 15 percent or less of Jewish property was returned and only 30 Jews were successful in recovering all their real estate. Postwar return of property, however, was somewhat easier in the former Italian-occupied zone. Greek courts usually ruled against survivors, and failure to regain property led many Jews to emigrate; emigrants lost their Greek citizenship and any claim to property in Greece. Conflict over property also fueled antisemitic incidents. Jewish cemeteries faced expropriation and destruction even after the war. West Germany paid reparations to Greece but no money was set aside to compensate Greek Jews. As in other European countries, American Jewish charities, especially the American Jewish Joint Distribution Committee (JDC), coordinated relief efforts to aid survivors. Skeptical that Jews had a future in southeastern Europe, the JDC prioritized aid for those seeking to emigrate to Palestine. Sephardic Jews in the United States raised money to pay dowries so that Greek Jews could marry, as well as sending items such as clothing, shoes, and food. Zionists organized hakhshara programs intended to prepare Jews for emigration to Mandatory Palestine. Many Jews supported left-wing parties prior to World War II, and the help they received from EAM strengthened their leftist sympathies. These connections made them politically suspect, to the point that some Greeks repeated Nazi propaganda equating Jews with Communism. Some Jews suspected of left-wing sympathies were arrested, tortured, or assassinated during the anti-leftist repression in 1945 and 1946. In contrast, the political climate allowed Nazi collaborators to rebrand themselves as loyal, anti-communist citizens. The Greek government avoided prosecuting collaborators and in 1959 passed a law (repealed in 2010) that prevented any prosecution of Holocaust perpetrators for crimes committed in Greece. For decades, the Greek government refused repeated requests from the Jewish community to extradite and try Brunner, who was living in Syria. Across the political spectrum, a high-profile trial that would draw attention to the Holocaust in northern Greece was seen as undesirable. From 1946 to 1949, the Greek Civil War was fought between the monarchist government and leftist insurgents that had succeeded EAM/ELAS. According to Bowman, "there was a strong current of antisemitism and traditional Jew hatred" in the anti-Communist coalition. Some Jews were drafted into the government army, while others fought with the insurgents. After the defeat of the insurgents, some Jewish Communists were executed or imprisoned, and others systematically marginalized from society. Jews' distinct religion in a state that was increasingly defined by Greek Orthodoxy, as well as their sympathy for the political left—purged after the Greek Civil War—contributed to their increasing alienation from Greek society. 
Within a decade after the war, the Jewish population of Greece had been reduced by half and has remained stable since. In 2017, Greece passed a law allowing Greek Holocaust survivors and their descendants who had lost their Greek citizenship to regain it. As of 2021, around 5,000 Jews live in Greece, mostly in Athens (3,000) and Salonica (1,000).

## Legacy

The Holocaust in Greece, long overshadowed by other events like the Greek famine, Greek resistance, and the Greek Civil War, was clouded in Greek memory by exaggerated beliefs about the degree of solidarity shown by average Greek Christians. Another reason for the lack of attention to the Holocaust was the relatively high level of antisemitism in Greece, which was considered higher than in any other country in the pre-2004 European Union. Pro-Palestinian sympathies in Greece led to an environment where Jews were not distinguished from Israel and antisemitism could be passed off as a principled anti-Zionism. Holocaust denial is promoted by some Greeks, especially the extremist Golden Dawn party. Historian Katherine Elizabeth Fleming writes that often, "the story of the destruction of Greece's Jews has served as a vehicle for the celebration of Greek Orthodox kindness and valor". Fleming states that while some acted heroically in rescuing Jews, "at times, Greek Christians were complicit in the destruction of Jewish lives; many more were unmoved by it; and no small number welcomed it". Academic research into the Holocaust did not begin until decades afterwards and is still sparse. Questions of Greek collaborationism were taboo for scholars and only began to be examined in the twenty-first century. In 2005, Greece joined the International Holocaust Remembrance Alliance and subsequently introduced Holocaust education into the national curriculum. Athens was reported to be the last European capital without a Holocaust memorial until one was completed in 2010. There are also memorials in Salonica (one in Eleftherias Square and another at the site of the old Jewish cemetery), Rhodes, Ioannina, Kavala, Larissa, and elsewhere. Holocaust memorials in Greece have been vandalized repeatedly. In 1977, the Jewish Museum of Greece opened in Athens, and in 2018 the first stone of the Holocaust Museum of Greece in Salonica was laid, although construction has not begun as of 2022. As of 2021, 362 Greeks have been recognized by Yad Vashem as Righteous Among the Nations for helping to save Jews during the occupation.
412,809
Endometrial cancer
1,171,967,335
Uterine cancer that is located in tissues lining the uterus
[ "Gynaecological cancer", "Wikipedia medicine articles ready to translate", "Women's health" ]
Endometrial cancer is a cancer that arises from the endometrium (the lining of the uterus or womb). It is the result of the abnormal growth of cells that have the ability to invade or spread to other parts of the body. The first sign is most often vaginal bleeding not associated with a menstrual period. Other symptoms include pain with urination, pain during sexual intercourse, or pelvic pain. Endometrial cancer occurs most commonly after menopause. Approximately 40% of cases are related to obesity. Endometrial cancer is also associated with excessive estrogen exposure, high blood pressure and diabetes. Whereas taking estrogen alone increases the risk of endometrial cancer, taking both estrogen and a progestogen in combination, as in most birth control pills, decreases the risk. Between two and five percent of cases are related to genes inherited from the parents. Endometrial cancer is sometimes loosely referred to as "uterine cancer", although it is distinct from other forms of uterine cancer such as cervical cancer, uterine sarcoma, and trophoblastic disease. The most frequent type of endometrial cancer is endometrioid carcinoma, which accounts for more than 80% of cases. Endometrial cancer is commonly diagnosed by endometrial biopsy or by taking samples during a procedure known as dilation and curettage. A Pap smear is not typically sufficient to show endometrial cancer. Regular screening in those at normal risk is not called for. The leading treatment option for endometrial cancer is abdominal hysterectomy (the total removal by surgery of the uterus), together with removal of the Fallopian tubes and ovaries on both sides, called a bilateral salpingo-oophorectomy. In more advanced cases, radiation therapy, chemotherapy or hormone therapy may also be recommended. If the disease is diagnosed at an early stage, the outcome is favorable, and the overall five-year survival rate in the United States is greater than 80%. In 2012, endometrial cancer newly occurred in 320,000 women and caused 76,000 deaths. This makes it the third-most common cause of death among cancers that affect only women, behind ovarian and cervical cancer. It is more common in the developed world and is the most common cancer of the female reproductive tract in developed countries. Rates of endometrial cancer rose in a number of countries between the 1980s and 2010. This is believed to be due to the increasing number of elderly people and increasing rates of obesity.

## Signs and symptoms

Vaginal bleeding or spotting in women after menopause occurs in 90% of endometrial cancer cases. Bleeding is especially common with adenocarcinoma, occurring in two-thirds of all cases. Abnormal menstrual cycles or extremely long, heavy, or frequent episodes of bleeding in women before menopause may also be a sign of endometrial cancer. Symptoms other than bleeding are not common. Other symptoms include thin white or clear vaginal discharge in postmenopausal women. More advanced disease shows more obvious symptoms or signs that can be detected on a physical examination. The uterus may become enlarged or the cancer may spread, causing lower abdominal pain or pelvic cramping. Painful sexual intercourse or painful or difficult urination are less common signs of endometrial cancer. The uterus may also fill with pus (pyometra). Of women with these less common symptoms (vaginal discharge, pelvic pain, and pus), 10–15% have cancer.
## Risk factors

Risk factors for endometrial cancer include obesity, insulin resistance and diabetes mellitus, breast cancer, use of tamoxifen, never having had a child, late menopause, high levels of estrogen, and increasing age. Immigration studies (migration studies), which examine the change in cancer risk in populations moving between countries with different rates of cancer, show that there is some environmental component to endometrial cancer. These environmental risk factors are not well characterized. Adiposity has been found to be associated with earlier diagnosis of endometrial cancer, particularly the endometrioid subtype.

### Hormones

Most of the risk factors for endometrial cancer involve high levels of estrogens. An estimated 40% of cases are thought to be related to obesity. In obesity, the excess of adipose tissue increases conversion of androstenedione into estrone, an estrogen. Higher levels of estrone in the blood cause less or no ovulation and expose the endometrium continuously to high levels of estrogens. Obesity also causes less estrogen to be removed from the blood. Polycystic ovary syndrome (PCOS), which also causes irregular or no ovulation, is associated with higher rates of endometrial cancer for the same reasons as obesity. Specifically, obesity, type II diabetes, and insulin resistance are risk factors for Type I endometrial cancer. Obesity increases the risk for endometrial cancer by 300–400%. Estrogen replacement therapy during menopause when not balanced (or "opposed") with progestin is another risk factor. Higher doses or longer periods of estrogen therapy have higher risks of endometrial cancer. Women of lower weight are at greater risk from unopposed estrogen. A longer period of fertility—either from an early first menstrual period or late menopause—is also a risk factor. Unopposed estrogen raises an individual's risk of endometrial cancer by 2–10 fold, depending on weight and length of therapy. In trans men who take testosterone and have not had a hysterectomy, the conversion of testosterone into estrogen via androstenedione may lead to a higher risk of endometrial cancer.

### Genetics

Genetic disorders can also cause endometrial cancer. Overall, hereditary causes contribute to 2–10% of endometrial cancer cases. Lynch syndrome, an autosomal dominant genetic disorder that mainly causes colorectal cancer, also causes endometrial cancer, especially before menopause. Women with Lynch syndrome have a 40–60% risk of developing endometrial cancer, higher than their risk of developing colorectal (bowel) or ovarian cancer. Ovarian and endometrial cancer develop simultaneously in 20% of people. Endometrial cancer nearly always develops before colon cancer, on average, 11 years before. Carcinogenesis in Lynch syndrome comes from a mutation in MLH1 or MSH2: genes that participate in the process of mismatch repair, which allows a cell to correct mistakes in the DNA. Other genes mutated in Lynch syndrome include MSH6 and PMS2, which are also mismatch repair genes. Women with Lynch syndrome represent 2–3% of endometrial cancer cases; some sources place this as high as 5%. Depending on the gene mutation, women with Lynch syndrome have different risks of endometrial cancer. With MLH1 mutations, the risk is 54%; with MSH2, 21%; and with MSH6, 16%. Women with a family history of endometrial cancer are at higher risk. Two genes most commonly associated with some other women's cancers, BRCA1 and BRCA2, do not cause endometrial cancer.
There is an apparent link with these genes but it is attributable to the use of tamoxifen, a drug that itself can cause endometrial cancer, in breast and ovarian cancers. The inherited genetic condition Cowden syndrome can also cause endometrial cancer. Women with this disorder have a 5–10% lifetime risk of developing endometrial cancer, compared to the 2–3% risk for unaffected women. Common genetic variation has also been found to affect endometrial cancer risk in large-scale genome-wide association studies. Sixteen genomic regions have been associated with endometrial cancer and the common variants explain up to 7% of the familial relative risk. ### Other health problems Some therapies for other forms of cancer increase the lifetime risk of endometrial cancer, which is a baseline 2–3%. Tamoxifen, a drug used to treat estrogen-positive breast cancers, has been associated with endometrial cancer in approximately 0.1% of users, particularly older women, but the benefits for survival from tamoxifen generally outweigh the risk of endometrial cancer. A one to two-year course of tamoxifen approximately doubles the risk of endometrial cancer, and a five-year course of therapy quadruples that risk. Raloxifene, a similar drug, did not raise the risk of endometrial cancer. Previously having ovarian cancer is a risk factor for endometrial cancer, as is having had previous radiotherapy to the pelvis. Specifically, ovarian granulosa cell tumors and thecomas are tumors associated with endometrial cancer. Low immune function has also been implicated in endometrial cancer. High blood pressure is also a risk factor, but this may be because of its association with obesity. Sitting regularly for prolonged periods is associated with higher mortality from endometrial cancer. The risk is not negated by regular exercise, though it is lowered. ### Protective factors Smoking and the use of progestin are both protective against endometrial cancer. Smoking provides protection by altering the metabolism of estrogen and promoting weight loss and early menopause. This protective effect lasts long after smoking is stopped. Progestin is present in the combined oral contraceptive pill and the hormonal intrauterine device (IUD). Combined oral contraceptives reduce risk more the longer they are taken: by 56% after four years, 67% after eight years, and 72% after twelve years. This risk reduction continues for at least fifteen years after contraceptive use has been stopped. Obese women may need higher doses of progestin to be protected. Having had more than five infants (grand multiparity) is also a protective factor, and having at least one child reduces the risk by 35%. Breastfeeding for more than 18 months reduces risk by 23%. Increased physical activity reduces an individual's risk by 38–46%. There is preliminary evidence that consumption of soy is protective. ## Pathophysiology Endometrial cancer forms when there are errors in normal endometrial cell growth. Usually, when cells grow old or get damaged, they die, and new cells take their place. Cancer starts when new cells form unneeded, and old or damaged cells do not die as they should. The buildup of extra cells often forms a mass of tissue called a growth or tumor. These abnormal cancer cells have many genetic abnormalities that cause them to grow excessively. In 10–20% of endometrial cancers, mostly Grade 3 (the highest histologic grade), mutations are found in a tumor suppressor gene, commonly p53 or PTEN. 
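As a rough, illustrative aside, the relative risks quoted in the Risk factors section above can be combined with the baseline lifetime risk of roughly 2–3% cited in this article. The sketch below is a back-of-envelope calculation only: the midpoint baseline, the choice of multipliers, and the variable names are assumptions made for the example, and real epidemiological models do not treat risk factors as independent multipliers.

```python
# Back-of-envelope illustration only; not an epidemiological model.
# Figures are taken from the risk factors quoted in the article above.

BASELINE_LIFETIME_RISK = 0.025  # midpoint of the ~2-3% baseline lifetime risk quoted in the article

RELATIVE_RISK = {
    "obesity": 4.0,                      # "increases the risk ... by 300-400%"
    "unopposed_estrogen": 10.0,          # upper end of the quoted "2-10 fold" range
    "tamoxifen_5_year_course": 4.0,      # "a five-year course of therapy quadruples that risk"
    "combined_oral_contraceptives_12y": 1.0 - 0.72,  # a 72% risk reduction after twelve years
}

def approximate_lifetime_risk(factor: str) -> float:
    """Crude absolute lifetime risk for a single factor considered in isolation."""
    return BASELINE_LIFETIME_RISK * RELATIVE_RISK[factor]

for factor in RELATIVE_RISK:
    print(f"{factor}: ~{approximate_lifetime_risk(factor):.1%} lifetime risk")
```

By contrast, the 40–60% figure quoted above for Lynch syndrome is already an absolute lifetime risk and is not derived from the baseline in this way.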
In 20% of endometrial hyperplasias and 50% of endometrioid cancers, PTEN has a loss-of-function mutation or a null mutation, making it less effective or completely ineffective. Loss of PTEN function leads to up-regulation of the PI3k/Akt/mTOR pathway, which causes cell growth. The p53 pathway can either be suppressed or highly activated in endometrial cancer. When a mutant version of p53 is overexpressed, the cancer tends to be particularly aggressive. P53 mutations and chromosome instability are associated with serous carcinomas, which tend to resemble ovarian and Fallopian carcinomas. Serous carcinomas are thought to develop from endometrial intraepithelial carcinoma. PTEN and p27 loss of function mutations are associated with a good prognosis, particularly in obese women. The Her2/neu oncogene, which indicates a poor prognosis, is expressed in 20% of endometrioid and serous carcinomas. CTNNB1 (beta-catenin; a transcription gene) mutations are found in 14–44% of endometrial cancers and may indicate a good prognosis, but the data is unclear. Beta-catenin mutations are commonly found in endometrial cancers with squamous cells. FGFR2 mutations are found in approximately 10% of endometrial cancers, and their prognostic significance is unclear. SPOP is another tumor suppressor gene found to be mutated in some cases of endometrial cancer: 9% of clear cell endometrial carcinomas and 8% of serous endometrial carcinomas have mutations in this gene. Type I and Type II cancers (explained below) tend to have different mutations involved. ARID1A, which often carries a point mutation in Type I endometrial cancer, is also mutated in 26% of clear cell carcinomas of the endometrium, and 18% of serous carcinomas. Epigenetic silencing and point mutations of several genes are commonly found in Type I endometrial cancer. Mutations in tumor suppressor genes are common in Type II endometrial cancer. PIK3CA is commonly mutated in both Type I and Type II cancers. In women with Lynch syndrome-associated endometrial cancer, microsatellite instability is common. Development of an endometrial hyperplasia (overgrowth of endometrial cells) is a significant risk factor because hyperplasias can and often do develop into adenocarcinoma, though cancer can develop without the presence of a hyperplasia. Within ten years, 8–30% of atypical endometrial hyperplasias develop into cancer, whereas 1–3% of non-atypical hyperplasias do so. An atypical hyperplasia is one with visible abnormalities in the nuclei. Pre-cancerous endometrial hyperplasias are also referred to as endometrial intraepithelial neoplasia. Mutations in the KRAS gene can cause endometrial hyperplasia and therefore Type I endometrial cancer. Endometrial hyperplasia typically occurs after the age of 40. Endometrial glandular dysplasia occurs with an overexpression of p53, and develops into a serous carcinoma. ## Diagnosis Diagnosis of endometrial cancer is made first by a physical examination, endometrial biopsy, or dilation and curettage (removal of endometrial tissue; D&C). This tissue is then examined histologically for characteristics of cancer. If cancer is found, medical imaging may be done to see whether the cancer has spread or invaded tissue. ### Examination Routine screening of asymptomatic people is not indicated since the disease is highly curable in its early, symptomatic stages. Instead, women, particularly menopausal women, should be aware of the symptoms and risk factors of endometrial cancer. 
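The diagnostic sequence described at the start of the Diagnosis section, tissue sampling first, histological examination next, and imaging only once cancer is confirmed, can be summarized schematically. The sketch below is a simplified illustration rather than clinical guidance; the `Workup` fields and the wording of the returned steps are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Workup:
    """Findings from the diagnostic sequence described in the text (illustrative only)."""
    tissue_sampled: bool           # endometrial biopsy or dilation and curettage (D&C) performed
    histology_shows_cancer: bool   # result of histological examination of the sample

def next_step(w: Workup) -> str:
    """Schematic pathway: physical exam -> tissue sample -> histology -> imaging if cancer is found."""
    if not w.tissue_sampled:
        # The article notes biopsy is less invasive but may be inconclusive,
        # in which case D&C can provide a definitive sample.
        return "Obtain tissue by endometrial biopsy (or D&C if the biopsy is inconclusive)"
    if not w.histology_shows_cancer:
        return "No carcinoma on histology; investigate other causes of the symptoms"
    return "Cancer confirmed: use imaging (e.g. CT or MRI) to assess spread, then plan treatment"

print(next_step(Workup(tissue_sampled=True, histology_shows_cancer=True)))
```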
A cervical screening test, such as a Pap smear, is not a useful diagnostic tool for endometrial cancer because the smear will be normal 50% of the time. A Pap smear can detect disease that has spread to the cervix. Results from a pelvic examination are frequently normal, especially in the early stages of disease. Changes in the size, shape or consistency of the uterus or its surrounding, supporting structures may exist when the disease is more advanced. Cervical stenosis, the narrowing of the cervical opening, is a sign of endometrial cancer when pus or blood is found collected in the uterus (pyometra or hematometra). Women with Lynch syndrome should begin to have annual biopsy screening at the age of 35. Some women with Lynch syndrome elect to have a prophylactic hysterectomy and salpingo-oophorectomy to greatly reduce the risk of endometrial and ovarian cancer. Transvaginal ultrasound to examine the endometrial thickness in women with postmenopausal bleeding is increasingly being used to aid in the diagnosis of endometrial cancer in the United States. In the United Kingdom, both an endometrial biopsy and a transvaginal ultrasound used in conjunction are the standard of care for diagnosing endometrial cancer. The homogeneity of the tissue visible on transvaginal ultrasound can help to indicate whether the thickness is cancerous. Ultrasound findings alone are not conclusive in cases of endometrial cancer, so another screening method (for example endometrial biopsy) must be used in conjunction. Other imaging studies are of limited use. CT scans are used for preoperative imaging of tumors that appear advanced on physical exam or have a high-risk subtype (at high risk of metastasis). They can also be used to investigate extrapelvic disease. An MRI can be of some use in determining if the cancer has spread to the cervix or if it is an endocervical adenocarcinoma. MRI is also useful for examining the nearby lymph nodes. Dilation and curettage or an endometrial biopsy are used to obtain a tissue sample for histological examination. Endometrial biopsy is the less invasive option, but it may not give conclusive results every time. Hysteroscopy only shows the gross anatomy of the endometrium, which is often not indicative of cancer, and is therefore not used, unless in conjunction with a biopsy. Hysteroscopy can be used to confirm a diagnosis of cancer. New evidence shows that D&C has a higher false negative rate than endometrial biopsy. Before treatment is begun, several other investigations are recommended. These include a chest x-ray, liver function tests, kidney function tests, and a test for levels of CA-125, a tumor marker that can be elevated in endometrial cancer. ### Classification Endometrial cancers may be tumors derived from epithelial cells (carcinomas), mixed epithelial and mesenchymal tumors (carcinosarcomas), or mesenchymal tumors. Traditional classification of endometrial carcinomas is based either on clinical and endocrine features (Type I and Type II), or histopathological characteristics (endometrioid, serous, and clear-cell). Some tumors are difficult to classify and have features overlapping more than one category. High grade endometrioid tumors, in particular, tend to have both type I and type II features. #### Carcinoma The vast majority of endometrial cancers are carcinomas (usually adenocarcinomas), meaning that they originate from the single layer of epithelial cells that line the endometrium and form the endometrial glands. 
There are many microscopic subtypes of endometrial carcinoma, but they are broadly organized into two categories, Type I and Type II, based on clinical features and pathogenesis. The two subtypes are genetically distinct. Type I endometrial carcinomas occur most commonly before and around the time of menopause. In the United States, they are more common in white women, particularly those with a history of endometrial hyperplasia. Type I endometrial cancers are often low-grade, minimally invasive into the underlying uterine wall (myometrium), estrogen-dependent, and have a good outcome with treatment. Type I carcinomas represent 75–90% of endometrial cancer. Type II endometrial carcinomas usually occur in older, post-menopausal people, in the United States are more common in black women, and are not associated with increased exposure to estrogen or a history of endometrial hyperplasia. Type II endometrial cancers are often high-grade, with deep invasion into the underlying uterine wall (myometrium), are of the serous or clear cell type, and carry a poorer prognosis. They can appear to be epithelial ovarian cancer on evaluation of symptoms. They tend to present later than Type I tumors and are more aggressive, with a greater risk of relapse and/or metastasis. ##### Endometrioid adenocarcinoma In endometrioid adenocarcinoma, the cancer cells grow in patterns reminiscent of normal endometrium, with many new glands formed from columnar epithelium with some abnormal nuclei. Low-grade endometrioid adenocarcinomas have well differentiated cells, have not invaded the myometrium, and are seen alongside endometrial hyperplasia. The tumor's glands form very close together, without the stromal tissue that normally separates them. Higher-grade endometrioid adenocarcinomas have less well-differentiated cells, have more solid sheets of tumor cells no longer organized into glands, and are associated with an atrophied endometrium. There are several subtypes of endometrioid adenocarcinoma with similar prognoses, including villoglandular, secretory, and ciliated cell variants. There is also a subtype characterized by squamous differentiation. Some endometrioid adenocarcinomas have foci of mucinous carcinoma. The genetic mutations most commonly associated with endometrioid adenocarcinoma are in the genes PTEN, a tumor suppressor; PIK3CA, a kinase; KRAS, a GTPase that functions in signal transduction; and CTNNB1, involved in adhesion and cell signaling. The CTNNB1 (beta-catenin) gene is most commonly mutated in the squamous subtype of endometrioid adenocarcinoma. ##### Serous carcinoma Serous carcinoma is a Type II endometrial tumor that makes up 5–10% of diagnosed endometrial cancer and is common in postmenopausal women with atrophied endometrium and black women. Serous endometrial carcinoma is aggressive and often invades the myometrium and metastasizes within the peritoneum (seen as omental caking) or the lymphatic system. Histologically, it appears with many atypical nuclei, papillary structures, and, in contrast to endometrioid adenocarcinomas, rounded cells instead of columnar cells. Roughly 30% of endometrial serous carcinomas also have psammoma bodies. Serous carcinomas spread differently than most other endometrial cancers; they can spread outside the uterus without invading the myometrium. The genetic mutations seen in serous carcinoma are chromosomal instability and mutations in TP53, an important tumor suppressor gene. 
##### Clear cell carcinoma Clear cell carcinoma is a Type II endometrial tumor that makes up less than 5% of diagnosed endometrial cancer. Like serous cell carcinoma, it is usually aggressive and carries a poor prognosis. Histologically, it is characterized by the features common to all clear cells: the eponymous clear cytoplasm when H&E stained and visible, distinct cell membranes. The p53 cell signaling system is not active in endometrial clear cell carcinoma. This form of endometrial cancer is more common in postmenopausal women. ##### Mucinous carcinoma Mucinous carcinomas are a rare form of endometrial cancer, making up less than 1–2% of all diagnosed endometrial cancer. Mucinous endometrial carcinomas are most often stage I and grade I, giving them a good prognosis. They typically have well-differentiated columnar cells organized into glands with the characteristic mucin in the cytoplasm. Mucinous carcinomas must be differentiated from cervical adenocarcinoma. ##### Mixed or undifferentiated carcinoma Mixed carcinomas are those that have both Type I and Type II cells, with one making up at least 10% of the tumor. These include the malignant mixed Müllerian tumor, which derives from endometrial epithelium and has a poor prognosis. Undifferentiated endometrial carcinomas make up less than 1–2% of diagnosed endometrial cancers. They have a worse prognosis than grade III tumors. Histologically, these tumors show sheets of identical epithelial cells with no identifiable pattern. ##### Other carcinomas Non-metastatic squamous cell carcinoma and transitional cell carcinoma are very rare in the endometrium. Squamous cell carcinoma of the endometrium has a poor prognosis. It has been reported fewer than 100 times in the medical literature since its characterization in 1892. For primary squamous cell carcinoma of the endometrium (PSCCE) to be diagnosed, there must be no other primary cancer in the endometrium or cervix and it must not be connected to the cervical epithelium. Because of the rarity of this cancer, there are no guidelines for how it should be treated, nor any typical treatment. The common genetic causes remain uncharacterized. Primary transitional cell carcinomas of the endometrium are even more rare; 16 cases had been reported as of 2008. Its pathophysiology and treatments have not been characterized. Histologically, TCCE resembles endometrioid carcinoma and is distinct from other transitional cell carcinomas. #### Sarcoma In contrast to endometrial carcinomas, the uncommon endometrial stromal sarcomas are cancers that originate in the non-glandular connective tissue of the endometrium. They are generally non-aggressive and, if they recur, can take decades. Metastases to the lungs and pelvic or peritoneal cavities are the most frequent. They typically have estrogen and/or progesterone receptors. The prognosis for low-grade endometrial stromal sarcoma is good, with 60–90% five-year survival. High-grade undifferentiated endometrial sarcoma (HGUS) has a worse prognosis, with high rates of recurrence and 25% five-year survival. HGUS prognosis is dictated by whether or not the cancer has invaded the arteries and veins. Without vascular invasion, the five-year survival is 83%; it drops to 17% when vascular invasion is observed. Stage I ESS has the best prognosis, with five-year survival of 98% and ten-year survival of 89%. ESS makes up 0.2% of uterine cancers. 
### Metastasis

Endometrial cancer frequently metastasizes to the ovaries and Fallopian tubes when the cancer is located in the upper part of the uterus, and the cervix when the cancer is in the lower part of the uterus. The cancer usually first spreads into the myometrium and the serosa, then into other reproductive and pelvic structures. When the lymphatic system is involved, the pelvic and para-aortic nodes are usually first to become involved, but in no specific pattern, unlike cervical cancer. More distant metastases are spread by the blood and often occur in the lungs, as well as the liver, brain, and bone. Endometrial cancer metastasizes to the lungs 20–25% of the time, more than any other gynecologic cancer.

### Histopathology

There is a three-tiered system for histologically classifying endometrial cancers, ranging from cancers with well-differentiated cells (grade I), to very poorly-differentiated cells (grade III). Grade I cancers are the least aggressive and have the best prognosis, while grade III tumors are the most aggressive and likely to recur. Grade II cancers are intermediate between grades I and III in terms of cell differentiation and aggressiveness of disease. The histopathology of endometrial cancers is highly diverse. The most common finding is a well-differentiated endometrioid adenocarcinoma, which is composed of numerous, small, crowded glands with varying degrees of nuclear atypia, mitotic activity, and stratification. This often appears on a background of endometrial hyperplasia. Frank adenocarcinoma may be distinguished from atypical hyperplasia by the finding of clear stromal invasion, or "back-to-back" glands which represent nondestructive replacement of the endometrial stroma by the cancer. With progression of the disease, the myometrium is infiltrated.

### Staging

Endometrial carcinoma is surgically staged using the FIGO cancer staging system. Under the 2009 FIGO system, stage I tumors are confined to the uterus (IA invading less than half of the myometrium, IB half or more), stage II tumors invade the cervical stroma, stage III tumors spread to the uterine serosa, adnexa, vagina, parametrium, or pelvic and para-aortic lymph nodes, and stage IV tumors invade the bladder or bowel mucosa or metastasize distantly; a simplified sketch of these groupings appears below. Myometrial invasion and involvement of the pelvic and para-aortic lymph nodes are the most commonly seen patterns of spread. A Stage 0 is sometimes included; in this case it is referred to as "carcinoma in situ". In 26% of presumably early-stage cancers, intraoperative staging revealed pelvic and distant metastases, making comprehensive surgical staging necessary.

## Management

### Surgery

The initial treatment for endometrial cancer is surgery; 90% of women with endometrial cancer are treated with some form of surgery. Surgical treatment typically consists of hysterectomy including a bilateral salpingo-oophorectomy, which is the removal of the uterus, and both ovaries and Fallopian tubes. Lymphadenectomy, or removal of pelvic and para-aortic lymph nodes, is performed for tumors of histologic grade II or above. Lymphadenectomy is routinely performed for all stages of endometrial cancer in the United States, but in the United Kingdom, the lymph nodes are typically only removed with disease of stage II or greater. The topic of lymphadenectomy and what survival benefit it offers in stage I disease is still being debated. In women with presumed stage I disease, a 2017 systematic review found no evidence that lymphadenectomy reduces the risk of death or relapse of cancer when compared with no lymphadenectomy. Women who undergo lymphadenectomy are more likely to experience systemic morbidity related to surgery or lymphoedema/lymphocyst formation. In stage III and IV cancers, cytoreductive surgery is the norm, and a biopsy of the omentum may also be included.
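Returning to the staging description above, the 2009 FIGO groupings can be encoded as a simplified decision sketch. The flag names and the collapsing of sub-stages (IIIA/IIIB, IIIC1/IIIC2) are simplifications made for this example; the official FIGO tables should be consulted for the authoritative definitions.

```python
def figo_2009_stage(*, distant_metastasis: bool, bladder_or_bowel_mucosa_invasion: bool,
                    lymph_node_spread: bool, serosa_adnexa_vagina_or_parametrium: bool,
                    cervical_stromal_invasion: bool, deep_myometrial_invasion: bool) -> str:
    """Simplified 2009 FIGO staging for endometrial carcinoma (illustrative, sub-stage detail omitted)."""
    if distant_metastasis:
        return "IVB"
    if bladder_or_bowel_mucosa_invasion:
        return "IVA"
    if lymph_node_spread:
        return "IIIC"  # pelvic (IIIC1) or para-aortic (IIIC2) nodes
    if serosa_adnexa_vagina_or_parametrium:
        return "IIIA/IIIB"
    if cervical_stromal_invasion:
        return "II"
    # Confined to the uterus: IA invades less than half of the myometrium, IB half or more.
    return "IB" if deep_myometrial_invasion else "IA"

# Example: a tumor confined to the uterus with shallow myometrial invasion stages as IA.
print(figo_2009_stage(distant_metastasis=False, bladder_or_bowel_mucosa_invasion=False,
                      lymph_node_spread=False, serosa_adnexa_vagina_or_parametrium=False,
                      cervical_stromal_invasion=False, deep_myometrial_invasion=False))
```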
In stage IV disease, where there are distant metastases, surgery can be used as part of palliative therapy. Laparotomy, an open-abdomen procedure, is the traditional surgical procedure; however, in those with presumed early stage primary endometrial cancer, laparoscopy (keyhole surgery) is associated with reduced operative morbidity and similar overall and disease free survival. Removal of the uterus via the abdomen is recommended over removal of the uterus via the vagina because it gives the opportunity to examine and obtain washings of the abdominal cavity to detect any further evidence of cancer. Staging of the cancer is done during the surgery. The few contraindications to surgery include inoperable tumor, massive obesity, a particularly high-risk operation, or a desire to preserve fertility. These contraindications happen in about 5–10% of cases. Women who wish to preserve their fertility and have low-grade stage I cancer can be treated with progestins, with or without concurrent tamoxifen therapy. This therapy can be continued until the cancer does not respond to treatment or until childbearing is done. Uterine perforation may occur during a D&C or an endometrial biopsy. Side effects of surgery to remove endometrial cancer can specifically include sexual dysfunction, temporary incontinence, and lymphedema, along with more common side effects of any surgery, including constipation. ### Add-on therapy There are a number of possible additional therapies. Surgery can be followed by radiation therapy and/or chemotherapy in cases of high-risk or high-grade cancers. This is called adjuvant therapy. #### Chemotherapy Adjuvant chemotherapy is a recent innovation, consisting of some combination of paclitaxel (or other taxanes like docetaxel), doxorubicin (and other anthracyclines), and platins (particularly cisplatin and carboplatin). Adjuvant chemotherapy has been found to increase survival in stage III and IV cancer more than added radiotherapy. Mutations in mismatch repair genes, like those found in Lynch syndrome, can lead to resistance against platins, meaning that chemotherapy with platins is ineffective in people with these mutations. Side effects of chemotherapy are common. These include hair loss, low neutrophil levels in the blood, and gastrointestinal problems. In cases where surgery is not indicated, palliative chemotherapy is an option; higher-dose chemotherapy is associated with longer survival. Palliative chemotherapy, particularly using capecitabine and gemcitabine, is also often used to treat recurrent endometrial cancer. Low certainty evidence suggests that in women with recurrent endometrial cancer who have had chemotherapy, receiving drugs that inhibit the mTOR pathway may reduce the risk of disease worsening compared to having more chemotherapy or hormonal therapy. Though, mTOR inhibitors may increase the chance of experiencing digestive tract ulcers. #### Radiotherapy Adjuvant radiotherapy is commonly used in early-stage (stage I or II) endometrial cancer. It can be delivered through vaginal brachytherapy (VBT), which is becoming the preferred route due to its reduced toxicity, or external beam radiotherapy (EBRT). Brachytherapy involves placing a radiation source in the organ affected; in the case of endometrial cancer a radiation source is placed directly in the vagina. External beam radiotherapy involves a beam of radiation aimed at the affected area from outside the body. 
VBT is used to treat any remaining cancer solely in the vagina, whereas EBRT can be used to treat remaining cancer elsewhere in the pelvis following surgery. However, the benefits of adjuvant radiotherapy are controversial. Though EBRT significantly reduces the rate of relapse in the pelvis, overall survival and metastasis rates are not improved. VBT provides a better quality of life than EBRT. Radiotherapy can also be used before surgery in certain cases. When pre-operative imaging or clinical evaluation shows tumor invading the cervix, radiation can be given before a total hysterectomy is performed. Brachytherapy and EBRT can also be used, singly or in combination, when there is a contraindication for hysterectomy. Both delivery methods of radiotherapy are associated with side effects, particularly in the gastrointestinal tract.

#### Hormonal therapy

Hormonal therapy is only beneficial in certain types of endometrial cancer. It was once thought to be beneficial in most cases. If a tumor is well-differentiated and known to have progesterone and estrogen receptors, progestins may be used in treatment. There is no evidence to support the use of progestagen in addition to surgery for newly diagnosed endometrial cancer. About 25% of metastatic endometrioid cancers show a response to progestins. Also, endometrial stromal sarcomas can be treated with hormonal agents, including tamoxifen, hydroxyprogesterone caproate, letrozole, megestrol acetate, and medroxyprogesterone. This treatment is effective in endometrial stromal sarcomas because they typically have estrogen and/or progestin receptors. Progestin receptors function as tumor suppressors in endometrial cancer cells. Preliminary research and clinical trials have shown these treatments to have a high rate of response even in metastatic disease. As of 2010, hormonal therapy was of unclear benefit in those with advanced or recurrent endometrial cancer. There is insufficient evidence to inform women considering hormone replacement therapy after treatment for endometrial cancer.

#### Targeted therapy

Dostarlimab has been approved by the FDA for the treatment of endometrial cancer with specific biomarkers.

### Monitoring

The tumor marker CA-125 is frequently elevated in endometrial cancer and can be used to monitor response to treatment, particularly in serous cell cancer or advanced disease. Periodic MRIs or CT scans may be recommended in advanced disease, and women with a history of endometrial cancer should receive more frequent pelvic examinations for the five years following treatment. Examinations conducted every three to four months are recommended for the first two years following treatment, and every six months for the next three years. Women with endometrial cancer should not have routine surveillance imaging to monitor the cancer unless new symptoms appear or tumor markers begin rising. Imaging without these indications is discouraged because it is unlikely to detect a recurrence or improve survival, and because it has its own costs and side effects. If a recurrence is suspected, PET/CT scanning is recommended.

## Prognosis

### Survival rates

The five-year survival rate for endometrial adenocarcinoma following appropriate treatment is 80%. Most women, over 70%, have FIGO stage I cancer, which has the best prognosis. Stage III and especially Stage IV cancers have a worse prognosis, but these are relatively rare, occurring in only 13% of cases. The median survival time for stage III–IV endometrial cancer is nine to ten months.
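The follow-up intervals quoted in the Monitoring subsection above (examinations every three to four months for the first two years, then every six months for the next three years) translate into a simple visit schedule. The sketch below assumes four-month spacing in the first two years and approximates a month as 30 days; both choices are assumptions made for the example.

```python
from datetime import date, timedelta

def follow_up_schedule(treatment_end: date) -> list[date]:
    """Approximate follow-up visits per the intervals quoted in the article (illustrative only)."""
    visits = []
    for months in range(4, 25, 4):    # years 1-2: roughly every four months
        visits.append(treatment_end + timedelta(days=30 * months))
    for months in range(30, 61, 6):   # years 3-5: every six months
        visits.append(treatment_end + timedelta(days=30 * months))
    return visits

for visit in follow_up_schedule(date(2024, 1, 1)):
    print(visit)
```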
Older age indicates a worse prognosis. In the United States, white women have a higher survival rate than black women, who tend to develop more aggressive forms of the disease by the time of their diagnosis. Tumors with high progesterone receptor expression have a good prognosis compared to tumors with low progesterone receptor expression; 93% of women with high progesterone receptor disease survived to three years, compared with 36% of women with low progesterone receptor disease. Heart disease is the most common cause of death among those who survive endometrial cancer, with other obesity-related health problems also being common. Following diagnosis, quality of life is also positively associated with a healthy lifestyle (no obesity, high-quality diet, physical activity). ### Recurrence rates Recurrence of early stage endometrial cancer ranges from 3 to 17%, depending on primary and adjuvant treatment. Most recurrences (75–80%) occur outside of the pelvis, and most occur within two to three years of treatment—64% within two years and 87% within three years. Higher-staged cancers are more likely to recur, as are those that have invaded the myometrium or cervix, or that have metastasized into the lymphatic system. Papillary serous carcinoma, clear cell carcinoma, and endometrioid carcinoma are the subtypes at the highest risk of recurrence. High-grade histological subtypes are also at elevated risk for recurrence. The most common site of recurrence is in the vagina; vaginal relapses of endometrial cancer have the best prognosis. If relapse occurs from a cancer that has not been treated with radiation, EBRT is the first-line treatment and is often successful. If a cancer treated with radiation recurs, pelvic exenteration is the only option for curative treatment. Palliative chemotherapy, cytoreductive surgery, and radiation are also performed. Radiation therapy (VBT and EBRT) for a local vaginal recurrence has a 50% five-year survival rate. Pelvic recurrences are treated with surgery and radiation, and abdominal recurrences are treated with radiation and, if possible, chemotherapy. Other common recurrence sites are the pelvic lymph nodes, para-aortic lymph nodes, peritoneum (28% of recurrences), and lungs, though recurrences can also occur in the brain (\<1%), liver (7%), adrenal glands (1%), bones (4–7%; typically the axial skeleton), lymph nodes outside the abdomen (0.4–1%), spleen, and muscle/soft tissue (2–6%). ## Epidemiology As of 2014, approximately 320,000 women are diagnosed with endometrial cancer worldwide each year and 76,000 die, making it the sixth most common cancer in women. It is more common in developed countries, where the lifetime risk of endometrial cancer in women is 1.6%, compared to 0.6% in developing countries. It occurs in 12.9 out of 100,000 women annually in developed countries. In the United States, endometrial cancer is the most frequently diagnosed gynecologic cancer and, in women, the fourth most common cancer overall, representing 6% of all cancer cases in women. In that country, as of 2014 it was estimated that 52,630 women were diagnosed yearly and 8,590 would die from the disease. Northern Europe, Eastern Europe, and North America have the highest rates of endometrial cancer, whereas Africa and West Asia have the lowest rates. Asia saw 41% of the world's endometrial cancer diagnoses in 2012, whereas Northern Europe, Eastern Europe, and North America together comprised 48% of diagnoses. 
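The incidence and lifetime-risk figures in the Epidemiology section above lend themselves to a quick sanity check. In the sketch below, the population size is a purely hypothetical value chosen for illustration; the rates themselves are the ones quoted in the text.

```python
# Quick arithmetic check on the epidemiology figures quoted in the article.
INCIDENCE_PER_100K = 12.9          # annual cases per 100,000 women in developed countries
LIFETIME_RISK_DEVELOPED = 0.016    # 1.6% lifetime risk in developed countries
LIFETIME_RISK_DEVELOPING = 0.006   # 0.6% lifetime risk in developing countries

women_population = 30_000_000      # hypothetical number of women (assumption for illustration)

expected_annual_cases = women_population * INCIDENCE_PER_100K / 100_000
print(f"Expected new cases per year: ~{expected_annual_cases:,.0f}")   # about 3,900

ratio = LIFETIME_RISK_DEVELOPED / LIFETIME_RISK_DEVELOPING
print(f"Lifetime risk is roughly {ratio:.1f} times higher in developed countries")
```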
Unlike most cancers, the number of new cases has risen in recent years, including an increase of over 40% in the United Kingdom between 1993 and 2013. Some of this rise may be due to the increase in obesity rates in developed countries, increasing life expectancies, and lower birth rates. The average lifetime risk for endometrial cancer is approximately 2–3% in people with uteruses. In the UK, approximately 7,400 cases are diagnosed annually, and in the EU, approximately 88,000. Endometrial cancer appears most frequently during perimenopause (the period just before, just after, and during menopause), between the ages of 50 and 65; overall, 75% of endometrial cancer occurs after menopause. Women younger than 40 make up 5% of endometrial cancer cases, and 10–15% of cases occur in women under 50 years of age. This age group is at risk for developing ovarian cancer at the same time. The worldwide median age of diagnosis is 63 years of age; in the United States, the average age of diagnosis is 60 years of age. White American women are at higher risk for endometrial cancer than black American women, with a 2.88% and 1.69% lifetime risk respectively. Japanese-American women and American Latina women have lower rates, and Native Hawaiian women have higher rates.

## Research

There are several experimental therapies for endometrial cancer under research, including immunologic, hormonal, and chemotherapeutic treatments. Trastuzumab (Herceptin), an antibody against the Her2 protein, has been used in cancers known to be positive for the Her2/neu oncogene, but research is still underway. Immunologic therapies are also under investigation, particularly in uterine papillary serous carcinoma. Cancers can be analyzed using genetic techniques (including DNA sequencing and immunohistochemistry) to determine if certain therapies specific to mutated genes can be used to treat them. PARP inhibitors are used to treat endometrial cancer with PTEN mutations, specifically mutations that lower the expression of PTEN. The PARP inhibitor shown to be active against endometrial cancer is olaparib. Research is ongoing in this area as of the 2010s. Research is also ongoing into the use of metformin, a diabetes medication, in obese women with endometrial cancer before surgery. Early research has shown it to be effective in slowing the rate of cancer cell proliferation. Preliminary research has shown that preoperative metformin administration can reduce expression of tumor markers. Long-term use of metformin has not been shown to have a preventative effect against developing cancer, but may improve overall survival. Temsirolimus, an mTOR inhibitor, is under investigation as a potential treatment. Research shows that mTOR inhibitors may be particularly effective for cancers with mutations in PTEN. Ridaforolimus (deforolimus) is also being researched as a treatment for people who have previously had chemotherapy. Preliminary research has been promising, and a stage II trial for ridaforolimus was completed by 2013. There has also been research on combined ridaforolimus/progestin treatments for recurrent endometrial cancer. Bevacizumab and tyrosine kinase inhibitors, which inhibit angiogenesis, are being researched as potential treatments for endometrial cancers with high levels of vascular endothelial growth factor. Ixabepilone is being researched as a possible chemotherapy for advanced or recurrent endometrial cancer. 
Treatments for rare high-grade undifferentiated endometrial sarcoma are being researched, as there is no established standard of care yet for this disease. Chemotherapies being researched include doxorubicin and ifosfamide. There is also research in progress on more genes and biomarkers that may be linked to endometrial cancer. The protective effect of combined oral contraceptives and the IUD is being investigated. Preliminary research has shown that the levonorgestrel IUD placed for a year, combined with six monthly injections of gonadotropin-releasing hormone, can stop or reverse the progress of endometrial cancer in young women, specifically complex atypical hyperplasia; however, the results have been inconclusive. An experimental drug that combines a hormone with doxorubicin is also under investigation for greater efficacy in cancers with hormone receptors. Hormone therapy that is effective in treating breast cancer, including use of aromatase inhibitors, is also being investigated for use in endometrial cancer. One such drug is anastrozole, which is currently being researched in hormone-positive recurrences after chemotherapy. Research into hormonal treatments for endometrial stromal sarcomas is ongoing as well. It includes trials of drugs like mifepristone, a progestin antagonist, and aminoglutethimide and letrozole, two aromatase inhibitors. Research continues into the best imaging method for detecting and staging endometrial cancer. As current diagnostic methods are invasive and inaccurate, researchers are looking into new ways to catch endometrial cancer, especially in its early stages. A study found that using a technique involving infrared light on simple blood test samples detected uterine cancer with high accuracy (87%), and could detect precancerous growths in all cases. In surgery, research has shown that complete pelvic lymphadenectomy along with hysterectomy in stage 1 endometrial cancer does not improve survival and increases the risk of negative side effects, including lymphedema. Other research is exploring the potential of identifying the sentinel lymph nodes for biopsy by injecting the tumor with dye that shines under infrared light. Intensity-modulated radiation therapy is currently under investigation, and already used in some centers, for application in endometrial cancer to reduce side effects from traditional radiotherapy. Its risk of recurrence has not yet been quantified. Research on hyperbaric oxygen therapy to reduce side effects is also ongoing. The results of the PORTEC 3 trial, which assessed the combination of adjuvant radiotherapy with chemotherapy, were awaited in late 2014. There is not enough evidence to determine if people with endometrial cancer benefit from additional behavioural and lifestyle interventions that are aimed at losing excess weight.

## History and culture

Endometrial cancer is not widely known by the general populace, despite its frequency. There is low awareness of the symptoms, which can lead to later diagnosis and worse survival.
484,793
USS President (1800)
1,150,749,852
United States Navy frigate
[ "1800 ships", "Barbary Wars American ships", "Sailing frigates of the United States Navy", "Ships built in New York City", "Vessels captured from the United States Navy", "War of 1812 ships of the United States" ]
USS President was a wooden-hulled, three-masted heavy frigate of the United States Navy, nominally rated at 44 guns; she was launched in April 1800 from a shipyard in New York City. President was one of the original six frigates whose construction the Naval Act of 1794 had authorized, and she was the last to be completed. The name "President" was among ten names submitted to President George Washington by Secretary of War Timothy Pickering in March of 1795 for the frigates that were to be constructed. Joshua Humphreys designed these frigates to be the young Navy's capital ships, and so President and her sisters were larger and more heavily armed and built than standard frigates of the period. Forman Cheeseman, and later Christian Bergh were in charge of her construction. Her first duties with the newly formed United States Navy were to provide protection for American merchant shipping during the Quasi War with France and to engage in a punitive expedition against the Barbary pirates in the First Barbary War. On 16 May 1811, President was at the center of the Little Belt affair; her crew mistakenly identified HMS Little Belt as HMS Guerriere, which had impressed an American seaman. The ships exchanged cannon fire for several minutes. Subsequent U.S. and Royal Navy investigations placed responsibility for the attack on each other without a resolution. The incident contributed to tensions between the U.S. and Great Britain that led to the War of 1812. During the war, President made several extended cruises, patrolling as far away as the English Channel and Norway; she captured the armed schooner HMS Highflyer and numerous merchant ships. In January 1815, after having been blockaded in New York for a year by the Royal Navy, President attempted to run the blockade, and was chased by a blockading squadron. During the chase, she was engaged and crippled by the frigate HMS Endymion off the coast of the city. The British squadron captured President soon after, and the Royal Navy took her into service as HMS President until she was broken up in 1818. President's design was copied and used to build the next HMS President in 1829. ## Design and construction During the 1790s, American merchant vessels began to fall prey to Barbary pirates in the Mediterranean, most notably from Algiers. Congress's response was the Naval Act of 1794. The Act provided funds for the construction of six frigates; however, it included a clause stating that construction of the ships would cease if the United States agreed to peace terms with Algiers. Joshua Humphreys' design was long on keel and narrow of beam (width) to allow for mounting very heavy guns. The design incorporated a diagonal scantling (rib) scheme to limit hogging (warping); the ships were given extremely heavy planking. This gave the hull greater strength than those of more lightly built frigates. Humphreys developed his design after realizing that the fledgling United States Navy could not match the navies of the European states for size. He therefore designed his frigates to be able to overpower other frigates, but with the speed to escape from a ship of the line. George Washington named President in order to reflect a principle of the United States Constitution. In March 1796, before President's keel could be laid down, a peace accord was announced between the United States and Algiers. Construction was suspended in accordance with the Naval Act of 1794. 
At the onset of the Quasi-War with France in 1798, funds were approved to complete her construction, and her keel was laid at a shipyard in New York City. Her original naval constructor was Forman Cheeseman and the superintendent was Captain Silas Talbot. Based on experience Humphreys gained during construction of President's sister ships, Constitution and United States, he instructed Cheeseman to make alterations to the frigate's design. These included raising the gun deck by 2 in (5.1 cm) and moving the main mast 2 ft (61 cm) further rearward. President was built to a length of 175 ft (53 m) between perpendiculars and a beam of 44.4 ft (13.5 m). Although construction had begun at New York in the shipyard of Forman Cheeseman, work on her was discontinued in 1796. Construction resumed in 1798, under Christian Bergh and naval constructor William Doughty.

### Armament

President's nominal rating was that of a 44-gun ship. However, she usually carried over 50 guns. During her service in the War of 1812, President was armed with a battery of 55 guns: thirty-two 24-pounder (10.9 kg) cannon, twenty-two 42-pounder (19 kg) carronades, and one 18-pounder (8 kg) long gun. During her Royal Navy service as HMS President, she was initially rated at 50 guns, although she was at this stage armed with 60 cannons—thirty 24-pounder guns (10.9 kg) on the upper deck, twenty-eight 42-pounder (19 kg) carronades on the spar deck, plus two more 24-pounder guns on the forecastle. In February 1817, she was again re-rated, this time to 60 guns. Unlike modern Navy vessels, ships of this era had no permanent battery of guns. Guns were portable and were often exchanged between ships as situations warranted. Each commanding officer modified his vessel's armaments to his liking, taking into consideration factors such as the overall tonnage of cargo, complement of personnel aboard, and planned routes to be sailed. Consequently, a vessel's armament would change often during its career; records of the changes were not generally kept.

## Quasi and First Barbary Wars

President launched on 10 April 1800—the last of the original six frigates to do so. After her fitting out, she departed for Guadeloupe on 5 August with Captain Thomas Truxtun in command. She conducted routine patrols during the latter part of the Quasi-War and made several recaptures of American merchant ships. Nevertheless, her service in this period was uneventful. She returned to the United States in March, after a peace treaty with France was ratified on 3 February 1801. During the Quasi-War, the United States paid tribute to the Barbary States to ensure that they would not seize or harass American merchant ships. In 1801 Yusuf Karamanli of Tripoli, dissatisfied with the amount of tribute in comparison to that paid to Algiers, demanded an immediate payment of \$250,000. Thomas Jefferson responded by sending a squadron of warships to protect American merchant ships in the Mediterranean and to pursue peace with the Barbary States. In May, Commodore Richard Dale selected President as his flagship for the assignment in the Mediterranean. Dale's orders were to present a show of force off Algiers, Tripoli, and Tunis and maintain peace with promises of tribute. Dale was authorized to commence hostilities at his discretion if any Barbary State had declared war by the time of his arrival. Dale's squadron consisted of President, Philadelphia, Essex, and Enterprise. 
The squadron arrived at Gibraltar on 1 July; President and Enterprise quickly continued to Algiers, where their presence convinced the regent to withdraw threats he had made against American merchant ships. President and Enterprise subsequently made appearances at Tunis and Tripoli before President arrived at Malta on 16 August to replenish drinking water supplies. Blockading the harbor of Tripoli on 24 August, President captured a Greek vessel with Tripolitan soldiers aboard. Dale negotiated an exchange of prisoners that resulted in the release of several Americans held captive in Tripoli. President arrived at Gibraltar on 3 September. Near Mahón in early December, President struck a large rock while traveling at 6 knots (11 km/h; 6.9 mph). The impact brought Dale on deck and he successfully navigated President out of danger. An inspection revealed that the impact had twisted off a short section of her keel. President remained in the Mediterranean until March 1802; she departed for the United States and arrived on 14 April. Although President remained in the United States, operations against the Barbary States continued. A second squadron assembled under the command of Richard Valentine Morris in Chesapeake. Morris' poor performance resulted in his recall and subsequent dismissal from the Navy in 1803. A third squadron assembled under the command of Edward Preble in Constitution; by July 1804, they had fought the Battle of Tripoli Harbor. ### Second Barbary patrol In April 1804, President Jefferson decided to reinforce Preble's squadron. President, Congress, Constellation, and Essex prepared to sail as soon as possible under the direction of Commodore Samuel Barron. Barron selected President as his flagship, but she required a new bowsprit and repairs to her masts and rigging. Some two months passed before the squadron was ready to sail. They departed in late June and arrived at Gibraltar on 12 August. President left Gibraltar on 16 August with Constellation; the frigates paused at Malta before arriving off Tripoli on 10 September, joining Constitution, Argus, and Vixen. Sighting three ships running the blockade of Tripoli, the squadron moved in to capture them; during the pursuit, a sudden change in wind direction caused President to collide with Constitution. The collision caused serious damage to Constitution's stern, bow, and figurehead. Two of the captured ships were sent to Malta with Constitution; President sailed to Syracuse, Sicily, arriving on 27 August. When Barron arrived in the Mediterranean, his seniority of rank over Preble entitled him to assume the duties of commodore. However, soon after replacing Preble, Barron went ashore at Syracuse in poor health and became bedridden. Under command of Captain George Cox, President began routine blockade duties of Tripoli during the winter months of 1804–05. In late April 1805, Constitution captured three ships off Tripoli. President escorted them to port at Malta before rejoining Constitution. Barron's fragile health necessitated his resignation; he passed command to John Rodgers in late May 1805. Barron ordered Cox to command Essex, and turned President over to his brother, James Barron, on 29 May. On 3 June, after the Battle of Derne, the U.S signed a peace treaty with Tripoli. President sailed for the United States on 13 July, carrying the ailing Barron and many sailors released from captivity in Tripoli. ## Little Belt Affair In 1807, the Chesapeake-Leopard Affair heightened tensions between the United States and Britain. 
In preparation for further hostilities, Congress began authorizing naval appropriations, and President recommissioned in 1809 under the command of Commodore John Rodgers. She made routine and uneventful patrols, mainly along the United States' eastern seaboard, until 1 May 1811, when the British frigate HMS Guerriere stopped the American brig Spitfire 18 mi (29 km) from New York and impressed a crewman. Rodgers received orders to pursue Guerriere, and President sailed immediately from Fort Severn on 10 May. On 16 May, approximately 40 miles (64 km) northeast of Cape Henry, a lookout spotted a sail on the horizon. Closing to investigate, Rodgers determined the sail belonged to a warship, and raised signal flags to identify his ship. The unidentified ship, later learned to be HMS Little Belt—a 20-gun sixth rate—hoisted signal flags in return, but the hoist was not understood by President's crew. Little Belt sailed southward and Rodgers, believing the ship to be Guerriere, pursued. Darkness set in before the ships were within hailing distance, and Rodgers hailed twice, only to have the same question returned to him: "What ship is that?" According to Rodgers, immediately after the exchange of hails, Little Belt fired a shot that tore through President's rigging. Rodgers returned fire. Little Belt promptly answered with three guns, and then a whole broadside. Rodgers ordered his gun crews to fire at will; several accurate broadsides heavily damaged Little Belt in return. After five minutes of firing, President's crew realized their adversary was much smaller than a frigate and Rodgers ordered a cease fire. However, Little Belt fired again and President answered with more broadsides. After Little Belt became silent, President stood off and waited overnight. At dawn it was obvious that Little Belt was greatly damaged from the fight; Rodgers sent a boat over from President to offer assistance in repairing the damage. Her Captain, Arthur Bingham, acknowledged the damage; declining any help, he sailed to Halifax, Nova Scotia. President had one sailor slightly wounded in the exchange, while Little Belt suffered 31 killed or wounded. Upon President's return to port, the U.S. Navy launched an investigation into the incident. Gathering testimony from President's officers and crewmen, they determined that Little Belt had fired the first shot in the encounter. In the Royal Navy investigation, Captain Bingham insisted that President had fired the first shot and continued firing for 45 minutes, rather than the five minutes Rodgers claimed. In all subsequent reports, both captains continually insisted that the other ship had fired the first shot. Reaching a stalemate, the American and British governments quietly dropped the matter. ## War of 1812 The United States declared war against Britain on 18 June 1812. Three days later, within an hour of receiving official word of the declaration, Commodore Rodgers sailed from New York City. The commodore sailed aboard President, leading a squadron consisting of United States, Congress, Hornet, and Argus on a 70-day North Atlantic cruise. A passing American merchant ship informed Rodgers about a fleet of British merchantmen en route to Britain from Jamaica. Rodgers and his squadron sailed in pursuit, and on 23 June they encountered what was later learned to be HMS Belvidera. 
President pursued the ship, and in what is recorded as the first shot of the War of 1812, Rodgers himself aimed and fired a bowchaser at Belvidera, striking her rudder and penetrating the gun room. Upon President's fourth shot at Belvidera, a cannon one deck below Rodgers burst, killing or wounding 16 sailors and throwing Rodgers to the deck with enough force to break his leg. The ensuing confusion allowed Belvidera to fire her stern chasers, killing six more men aboard President. Rodgers kept up the pursuit, using his bow chasers to severely damage Belvidera's rigging, but his two broadsides had little effect. The crew of Belvidera quickly made repairs to the rigging. They cut loose her anchors and boats and pumped drinking water overboard to lighten her load, thereby increasing her speed. Belvidera soon gained enough speed to distance herself from President, and Rodgers abandoned the pursuit. Belvidera sailed to Halifax to deliver the news that war had been declared. President and her squadron returned to the pursuit of the Jamaican fleet, and on 1 July began to follow the trail of coconut shells and orange peels the Jamaicans had left behind them. President sailed to within one day's journey of the English Channel, but never sighted the convoy. Rodgers called off the pursuit on 13 July. During their return trip to Boston, Rodgers' squadron captured seven merchant ships and recaptured one American vessel. After some refitting, President, still under Rodgers' command, sailed on 8 October with Congress, United States, and Argus. On 12 October, United States and Argus parted from the squadron for their own patrols. On 10 October, President chased HMS Nymphe, but failed to overtake her. On 17 October President captured the British packet ship Swallow, which carried a large amount of currency on board. On 31 October, President and Congress began pursuit of HMS Galatea, which was escorting two merchant ships. The chase lasted about three hours, and in that time Congress captured the merchant ship Argo. Meanwhile, President kept after Galatea and drew very close, but lost sight of her in the night. Congress and President remained together, but did not find any ships to capture during November. Returning to the United States, they passed north of Bermuda and proceeded toward the Virginia capes; they arrived in Boston on 31 December, having taken nine prizes. President and Congress found themselves blockaded there by the Royal Navy until April 1813. On 30 April, President and Congress sailed through the blockade on their third cruise of the war. On 2 May, they pursued HMS Curlew, but she outran them and escaped. President parted company with Congress on 8 May, and Rodgers set a course along the Gulf Stream to search for merchant ships to capture. By June, not having come across a single ship, President turned north; she put into North Bergen, Norway, on 27 June to replenish her drinking water. Sailing soon after, President captured two British merchant ships, which helped to replenish her stores. On 10 June President captured the outward-bound Falmouth packet Duke of Montrose, Captain Aaron Groub Blewett, which managed to throw her mails overboard before President could send a prize crew aboard. President made a cartel of Duke of Montrose, putting all of President's prisoners on board and then sending her into Falmouth under the command of an American officer. 
When Duke of Montrose arrived at Falmouth the British Government abrogated the cartel on the grounds that they had advised the American Government that the British would not recognize agreements entered into on the high seas. Around the same time, two Royal Navy ships came into view. President set all sails to escape, and outran them in a chase lasting 80 hours. Rodgers reported that his decision to flee the ships was based on identifying them as a ship of the line and a frigate. Royal Navy records later revealed that the vessels were actually the 32-gun frigate Alexandria and the 16-gun fireship Spitfire. Spending a few days near the Irish Channel, President captured several more merchant ships. She then set a course for the United States. In late September, she encountered HMS Highflyer along the east coast of the United States. Rodgers used his signal flags to trick Highflyer into believing that President was HMS Tenedos. Lieutenant George Hutchinson, Highflyer's captain, came aboard President only to discover he had walked into a trap; President captured Highflyer without a shot being fired. President's long cruise netted her 11 merchant ships, in addition to Highflyer. On 4 December 1813, President sailed from Providence, Rhode Island. On the 25th, she encountered two frigates in the dark, one of which fired at her. Rodgers believed the ships to be British, but they were two French frigates, Méduse and Nymphe. Afterward, Rodgers headed toward Barbados for an eight-week cruise in the West Indies, reportedly making three small captures, among them the British merchant ships Wanderer, which she captured and sank in the Atlantic Ocean on 4 January 1814, and Edward, which she captured and sank on 9 January. Returning to New York City on 18 February 1814, President encountered HMS Loire, which turned to escape once the latter's crew realized President was a 44-gun frigate. President remained in New York for the duration of 1814 due to the harbor's blockade by a British squadron consisting of HMS Endymion, Majestic, Pomone, and Tenedos.

### Capture

Stephen Decatur assumed command of President in December 1814, planning a cruise to the West Indies to prey on British shipping. In mid-January 1815, a snowy gale with strong winds forced the British blockading squadron away from New York Harbor, giving Decatur the opportunity to put to sea. On the evening of 14 January, President headed out of the harbor but ran aground, the result of harbor pilots incorrectly marking a safe passage. Stranded on the sand bar, President lifted and dropped with the incoming tide. Within two hours her hull had been damaged, her timbers twisted, and masts sprung. Damage to her keel caused the ship to hog and sag. Decatur was finally able to float President off the bar and, assessing the damage, he decided to return to New York for repairs; however, the wind direction was not favorable and President was forced to head out to sea. Unaware of the exact location of the blockading squadron, Decatur set a course to avoid them and seek a safe port, but approximately two hours later the squadron's sails were spotted on the horizon. President changed course to outrun them, but the damage she had suffered the night before had significantly reduced her speed. Attempting to gain speed, Decatur ordered expendable cargo thrown overboard; by late afternoon of 15 January, HMS Endymion under Captain Henry Hope came alongside and proceeded to fire broadsides. 
Decatur planned to bring President in close to Endymion, whereby President's crew could board and capture the opposing ship and sail her to New York; President would be scuttled to prevent her capture. Making several attempts to close on Endymion, Decatur discovered that President's damage limited her maneuverability, allowing Endymion to anticipate, and draw away from, positions favorable for boarding. Faced with this new dilemma, Decatur ordered bar and chain shot fired to disable Endymion's sails and rigging, the idea being to shake off his pursuer and allow President to proceed to a safe port without being followed. At noon, Endymion, being the much better sailer, was close-hauled, outpacing her squadron and leaving them behind. At 2 pm, she gained on President and took position on the American ship's quarter, shooting into President as she tried to escape. Endymion was able to rake President three times and did considerable damage to her; by contrast, President primarily directed her fire at Endymion's rigging in order to slow her down during the two-hour engagement. Finally at 7:58 pm, President ceased fire and hoisted a light in her rigging, indicating that she had surrendered. Endymion ceased firing on the defeated American ship but did not board to take possession of her prize, due to a lack of undamaged boats. Endymion's foresails had been damaged in the engagement, and while she hove to for repairs, Decatur took advantage of the situation and, despite having struck, made off to escape at 8:30 pm; Endymion hastily completed repairs and resumed the chase at 8:52 pm. President drew away while her crew made hurried repairs of their own. Within two hours, one of her lookouts spotted the remainder of the enemy squadron drawing near. President continued her escape attempt, but later that night HMS Pomone and Tenedos caught up and began firing broadsides. Realizing his situation, Decatur surrendered President again, just before midnight.

## As HMS President

Now in possession of the Royal Navy, President and her crew were ordered to proceed to Bermuda with Endymion. During the journey, they encountered a dangerous gale. The storm destroyed President's masts and strained Endymion's timbers so badly that all the upper-deck guns were thrown overboard to prevent her from sinking. The cartel Clarendon, Garness, master, brought 400 of President's prisoners from Bermuda back to New York. On 7 April 1815 Clarendon grounded at Sandy Hook, but crew, passengers, and prisoners were all saved. Upon the prisoners' return to the United States, a U.S. Navy court martial board acquitted Decatur, his officers, and his men of any wrongdoing in the surrender of President. President and Endymion continued to England, arriving at Spithead on 28 March. President was commissioned into the Royal Navy under the name HMS President. Her initial rating was set at 50 guns, although she was at this stage armed with 60 cannons—thirty 24-pounders (10.9 kg) on the upper deck, twenty-eight 42-pounder (19 kg) carronades on the spar deck, plus two more 24-pounder guns on the forecastle. In February 1817 she was again re-rated, this time to 60 guns. In March 1818 she was considered for refitting. A drydock inspection revealed that the majority of her timber was defective or rotten and she was broken up at Portsmouth in June. 
President's design was copied and used to build HMS President in 1829, although this was reportedly more of a political maneuver than a testament to the design: the Royal Navy wished to retain the name and likeness of the American ship on their register as a reminder to the United States and other nations of the capture.
35,581,834
McDonnell Douglas F/A-18 Hornet in Australian service
1,173,453,704
History of the F/A-18 fighter aircraft used by Australia
[ "Aircraft in Royal Australian Air Force service", "McDonnell Douglas aircraft" ]
The Royal Australian Air Force (RAAF) operated McDonnell Douglas F/A-18 Hornet fighter aircraft between 1984 and 2021. The Australian Government purchased 75 A and B variants of the F/A-18 in 1981 to replace the RAAF's Dassault Mirage III fighters. The Hornets entered service with the RAAF between 1984 and 1990. Four Hornets were destroyed in flying accidents during the late 1980s and early 1990s. RAAF Hornets were first sent on a combat deployment as part of the Australian contribution to the 2003 invasion of Iraq. During the invasion, 14 Hornets flew patrols over Iraq, as well as close air support sorties to assist coalition ground forces. RAAF F/A-18s also provided security for the American air base at Diego Garcia between late 2001 and early 2002, and have protected a number of high-profile events in Australia. Between 2015 and 2017 a detachment of Hornets was deployed to the Middle East and struck ISIL targets as part of Operation Okra. Commencing in 1999, the RAAF put its Hornets through a series of upgrades to improve their effectiveness. However, the aircraft became increasingly difficult to operate and were at risk of being outclassed by the fighters and air-defence systems operated by other countries. Under current Australian Government planning they will be replaced by 72 Lockheed Martin F-35 Lightning II fighters. The Australian Government has offered the Hornets for sale once they are no longer needed by the RAAF, and finalised a deal to sell 25 to Canada in early 2019. Eight F/A-18s will be preserved for historical purposes in Australia and the remainder may be sold to an American air combat training company. ## Selection The RAAF began the initial stages of finding a replacement for its Dassault Mirage III fighters in 1968. The service issued an Air Staff Requirement for new fighter aircraft in December 1971, which received a larger than expected number of proposals from manufacturers. At that time the RAAF expected to start phasing out the Mirage IIIs in 1980. In 1973, a team of RAAF personnel inspected the McDonnell Douglas F-15 Eagle, Northrop YF-17, Saab 37 Viggen and Dassault Mirage F1 programs, but recommended that any decisions about a suitable replacement be delayed so that several new fighters that were expected to soon become available could also be considered. In August 1974 the Australian Government decided to defer the fighter replacement project and extend the Mirage IIIs' operational life into the 1980s. One of the four Mirage III-equipped squadrons was also disbanded at this time. Work on the Mirage replacement program resumed in 1975, and the Tactical Fighter Project Office was established in 1976 to manage the process of selecting the RAAF's next fighter. A request for proposals was issued in November that year and attracted eleven responses. By March 1977 the office had chosen to focus on the F-15 Eagle, General Dynamics F-16 Fighting Falcon, Dassault Mirage 2000 and Panavia Tornado, as well as the McDonnell Douglas F-18A and F-18L; the F-18A was a carrier-based fighter developed from the YF-17 for the United States Navy, and the F-18L was a land-based variant of this design. The Grumman F-14 Tomcat was also considered by the project office, but was regarded as unsuitable and never placed on the official shortlist. In November 1978, the F-15 and Tornado were removed from the list of aircraft being considered. The Tornado was excluded as it was principally a strike aircraft and had limited air-to-air capability. 
While the F-15 was an impressive aircraft that met or exceeded almost all of the RAAF's requirements, it was believed that the air force did not need a fighter with such advanced capabilities and that introducing it into service could destabilise Australia's region. Further evaluation of the remaining aircraft took place during 1979. Wing Commander (and later Air Vice-Marshal) Bob Richardson test-flew a Mirage 2000 in April 1979, and reported that while the aircraft had excellent aerodynamic characteristics, its avionics, radar, fuel system, cockpit and weapons capability were inferior to those of US designs. Richardson also test-flew a YF-17 that was being used as a demonstrator for the F-18L in mid-1979, and was impressed by its capabilities. No F-18Ls had been ordered at this time, and the RAAF did not want to take on the risk of being the lead customer for the design. At about the same time, the RAAF rejected an offer of F-14 Tomcats that had been originally ordered by the Iranian Government but not delivered as a result of the revolution in that country. While the Tomcats were made available at a greatly reduced price, the air force judged that these aircraft were too large and complex for its requirements. With the Mirage 2000 and F-18L rejected, the RAAF was faced with a choice between the F-16 and F-18A. Richardson and several other RAAF pilots tested United States Air Force (USAF) F-16Bs in 1979 and 1980, and reported that the aircraft had excellent performance but could be difficult to control at times. The evaluation team was also concerned about the reliability of the F-16's engine and regarded the aircraft as technologically immature. It was also noted that the aircraft's radar was inferior to that of the F-18A, and that F-16s could not fire the beyond-visual-range (BVR) air-to-air missiles and long-range anti-shipping missiles that the F-18A was capable of operating. In contrast, the evaluation team was impressed by the F-18A, and regarded it as being a more robust and survivable aircraft as it had been designed to operate from aircraft carriers; these features were important for operations from bare bases in northern Australia. Richardson and three other RAAF pilots test-flew F-18As, and reported that the aircraft handled well, but had some deficiencies with its flight control system and engines; these were not seen as major flaws by the evaluation team. The F-18A's twin engines were considered to be its main advantage over the single-engined F-16, as research conducted by the evaluation team found that the attrition rate for single-engined fighters was twice that for aircraft with two engines. Overall, the RAAF judged that both the F-16 and F-18A were too immature for a decision to be made in 1980 as had been originally planned, and recommended to the Government that this be deferred by a year. The Government accepted the RAAF's recommendation, and delayed its decision on a Mirage III replacement until late 1981. This gave General Dynamics an opportunity to offer the improved F-16C to the RAAF. The capability of these aircraft was closer to that of the F-18 as they were equipped with BVR missiles. Richardson and another RAAF pilot test-flew F-16Cs in May 1981. The F-18 design was also improved during 1981, and was redesignated the F/A-18. When RAAF test pilots flew these aircraft during 1981, they found that the deficiencies they had detected in 1980 were now addressed. 
Overall, the RAAF concluded that while both aircraft met its requirements and the F-16 was less expensive, the F/A-18 was the superior design as it was more technologically mature, easier to maintain during operational deployments, and likely to have a much lower attrition rate. The Government accepted this advice, and announced on 20 October 1981 that 75 F/A-18s would be ordered. As part of this announcement, Minister for Defence Jim Killen acknowledged that the F-16 would have been seven percent cheaper to purchase, but stated that the F/A-18's lower running costs and expected attrition rate greatly reduced the difference between the lifetime cost of the two designs. Instead of directly ordering the aircraft from McDonnell Douglas, the Australian Government purchased its F/A-18s through the US Government's Foreign Military Sales (FMS) program. Ordering the aircraft via the US Government allowed the RAAF to take advantage of the superior purchasing power of the US military, and reduced the service's project management requirements. This led to a complicated arrangement whereby the aircraft were ordered by the US Government, delivered to the US Navy, and then transferred to the RAAF once initial flight testing had taken place. The process functioned smoothly and was cost effective. ## Production The RAAF's order of 75 Hornets comprised 57 single-seat A variant fighters and 18 two-seat B variant operational training aircraft. It was planned that each of the three fighter squadrons and the single operational conversion unit that were to operate the F/A-18 would be allocated 16 aircraft, of which 12 were expected to be operational at any time while the other four were undergoing maintenance. The remaining eleven Hornets were labelled the "half-life attrition buy" and would replace the aircraft that were expected to have been lost by 2000; as it happened, this greatly exceeded the RAAF's actual losses. Deliveries were planned to start in late 1984 and be completed in 1990. The total cost of the F/A-18 program, including the aircraft, spare parts, other equipment and modifications to the RAAF's fighter bases, was calculated as billion in August 1981, but was rapidly revised upwards due to the depreciation of the Australian dollar at this time. The Australian Hornets were very similar to the standard US Navy variants, but incorporated a number of minor modifications. These included the addition of an instrument landing system/VHF omnidirectional range (ILS/VOR) system, a high-frequency radio, a different ejection seat harness and the deletion of all equipment used only to launch the aircraft from catapults. In addition, two of the Australian aircraft were fitted with flight-test instrumentation so that they could be used as part of trials. The Government sought to use the Mirage III replacement program as a means to increase the capabilities of Australia's manufacturing industry. Accordingly, it was decided to build the aircraft in Australia, though it was recognised that this would lead to higher costs than if the fighters were purchased directly from the United States. While the first four RAAF Hornets were built in the United States, the remainder were assembled at the Government Aircraft Factories (GAF) plant at Avalon Airport in Victoria, and their engines were produced by the Commonwealth Aircraft Corporation at Fishermans Bend in Melbourne. Another twelve Australian companies were involved in other stages of the project. 
These firms were sub-contracted to McDonnell Douglas and the other major US companies that produced components for the F/A-18, and had to comply with the requirements of the FMS program. The Australian Government hoped that Singapore and New Zealand would purchase Australian-built Hornets, but this did not eventuate. The Canadian Government expressed interest in purchasing 25 Australian-built F/A-18As in 1988 in order to increase its force of these aircraft after they had ceased to be manufactured in the United States, but this did not lead to any sales. The Australian Hornets began to roll off the production lines in 1984. The first two aircraft (serial numbers A21-101 and A21-102) were entirely built at McDonnell Douglas's factory in St. Louis, and were handed over to the RAAF on 29 October 1984. These aircraft remained in the United States until May 1985 for training and trials purposes. The next two Australian Hornets (A21-103 and A21-104) were also built at St. Louis, but were then disassembled and flown to Avalon in June 1984 on board a USAF Lockheed C-5 Galaxy. The aircraft were then reassembled, and A21-103 was rolled out at a ceremony attended by Prime Minister Bob Hawke and the Chief of the Air Staff, Air Marshal David Evans, on 16 November. However, the aircraft's initial test flight was delayed until 26 February 1985 by a demarcation dispute over which category of pilot was permitted to fly the aircraft. In order to meet production targets, GAF was required to complete 1.5 Hornets per month. Production fell behind schedule during the first half of 1987, however, as a result of inefficiencies at the company's factory and industrial relations problems. GAF was able to accelerate production later in the year, though some components that were planned to be manufactured in Australia were purchased from companies in the United States instead. The final cost of the Hornet project was billion; after adjusting for the depreciation of the Australian dollar this was \$186 million less than the initial estimate. The RAAF began to accept Hornets into service in 1985. A21-103 was formally delivered on 4 May of that year. Two weeks later, A21-101 and 102 were flown from Naval Air Station Lemoore in California to RAAF Base Williamtown in New South Wales between 16 and 17 May 1985. This ferry flight was conducted as a non-stop journey, and USAF McDonnell Douglas KC-10 Extender tankers refuelled each of the Hornets 15 times as they crossed the Pacific. As of 2005 this remained the longest single flight to have been undertaken by F/A-18s. Despite the delays to production in 1987, the final Australian Hornet (A21-57) was delivered on schedule at a ceremony held in Canberra on 16 May 1990. The F/A-18As were allocated serial numbers A21-1 through to A21-57 and the F/A-18Bs were allocated A21-101 to A21-118. A major capital works program was also undertaken to prepare RAAF bases for the Hornets. Over \$150 million was spent upgrading the runways, hangars and maintenance facilities at RAAF Base Williamtown, which has been the main F/A-18 base throughout the aircraft's service. The pre-existing airfield at RAAF Base Tindal in the Northern Territory was also developed into a major air base between 1985 and 1988 at a cost of \$215 million, so that it could accommodate No. 75 Squadron. Until this time the squadron had been stationed at RAAF Base Darwin which, due to its location on Australia's north coast, was vulnerable to damage from cyclones and difficult to defend during wartime. 
Owing to concerns over the airworthiness of the RAAF's General Dynamics F-111 bombers and delays to the Lockheed Martin F-35 Lightning II program, the Australian Government ordered 24 F/A-18F Super Hornets in 2006. This design is significantly different from the original (or "classic") Hornet. The RAAF's first Super Hornets entered service in 2010 and deliveries were completed the next year. In 2013 the Australian Government ordered 12 Boeing EA-18G Growler electronic warfare variants of the Super Hornet, and all were delivered to the RAAF between 2015 and 2017. ## Maintenance and upgrades Maintenance of the RAAF's Hornets is carried out by both air force personnel and civilian contractors. Until the early 1990s, all routine servicing and a significant proportion of intensive "deeper maintenance" was undertaken by the air force. However, the share of intensive maintenance tasks outsourced to the private sector was increased during the 1990s under the RAAF-wide Commercial Support Program. Under current arrangements, the four Hornet-equipped units undertake all routine servicing and some of the more complex deeper maintenance tasks. The remainder of the deeper maintenance work, as well as all major refurbishments and upgrade projects, are carried out by commercial firms. BAE Systems has been the lead contractor for Hornet deeper maintenance since 2003, and Boeing Australia has also provided maintenance services for the aircraft since it won a contract to do so in 2010. In August 2017, Boeing's contract was extended until the planned retirement of the Hornets in 2021, with the company also gaining responsibility for integrating weapons onto the type. This change was made to free up RAAF personnel for activities associated with introducing the F-35 into service. The RAAF's Hornet fleet received few modifications until the late 1990s. During this period, the AN/AAS-38 "Nite Hawk" targeting pod was the only new system fitted to the aircraft. Australia also managed to break the codes which prevented modifications to the Hornet's radar software after the US Government refused to share them. This enabled the software to be adjusted so that all of the aircraft operated by Australia's neighbours could be designated as hostile. In his final address to Parliament, former Minister for Defence Kim Beazley stated that he had raised access to the radar system repeatedly with the US Government during the 1980s, and "in the end, we spied on them and we extracted the codes ourselves". Several Asian countries introduced Mikoyan MiG-29 fighters into service during the 1990s, raising concerns that the RAAF's aircraft would be outclassed. The air force considered replacing the Hornet with the Eurofighter Typhoon or Boeing F/A-18E/F Super Hornet, but concluded that both aircraft were technologically immature. As a result, it was decided to upgrade the Hornets. The Hornet Upgrade Program (HUG) began in 1999, and had three main phases. In Phase 1, which ran from mid-2000 through 2002, the Hornets' computer systems, navigation system and radio were replaced. The aircraft were also fitted to operate the ASRAAM air-to-air missile; these weapons replaced the AIM-9 Sidewinder. HUG Phase 2 comprised four sub-elements and sought to improve the Hornets' combat performance. During Phase 2.1 the APG-65 radar was replaced with the improved AN/APG-73, and the aircraft were fitted with a secure voice encryption communications system and various updates to their computer systems. 
In HUG Phase 2.2, the most important element of the program, the Hornets were fitted with a Joint Helmet Mounted Cueing System, equipment needed to share data through the Link 16 network, a new countermeasures dispensing system and several upgrades to their cockpit displays. All of the Hornets were upgraded to this standard between January 2005 and December 2006. In Phase 2.3, an improved electronic countermeasures system was fitted to the Hornets; the AN/ALR-2002 was originally selected, but proved unsuccessful. It was replaced by the ALR-67 Radar Warning Receiver in late 2006. As of early 2012, 14 Hornets had been fitted with the system and the remainder were scheduled to receive it by the end of the year. During HUG Phase 2.4 the Hornets were modified to be able to use the AN/AAQ-28(v) "LITENING" targeting pod and 37 of these systems were purchased; this phase was completed in 2007. The third stage of the Hornet Upgrade Program sought to rectify airframe damage. HUG Phase 3.1 involved minor structural work to all aircraft as they passed through other phases of the program. The centre fuselages of the ten Hornets assessed as suffering the greatest amount of structural damage were replaced in HUG Phase 3.2. It was originally intended that all the RAAF's Hornets would receive new centre fuselages, but the scope of this phase of the program was reduced after it was found that the number of man-hours needed to upgrade each aircraft was much greater than originally estimated. The ten aircraft were upgraded at an L-3 Communications facility in Canada, and all were returned to service by June 2010. The long-running HUG process complicated the RAAF's management of its fleet of F/A-18s. At any one time, the capabilities of individual aircraft differed considerably depending on their upgrades. Accordingly, the long-standing arrangement where aircraft were almost permanently assigned to each squadron was replaced by a system where they were pooled. Attempts to allocate Hornets with similar levels of modifications from the common pool to each squadron were not successful.

## Armament

The RAAF's Hornets have been fitted with several different types of air-to-air weapons. The aircraft are equipped with an internal M61A1 cannon for use against air and ground targets; 578 rounds can be carried for this weapon. During the initial years of the Hornets' service, the aircraft were equipped with AIM-9M Sidewinder short-range air-to-air missiles and AIM-7M Sparrow medium-range air-to-air missiles. The Sparrows were replaced by the AIM-120 AMRAAM in 2002, and in 2004 the Sidewinders were replaced by ASRAAMs. The older missiles are occasionally used in training exercises, however. A variety of unguided and guided weapons can also be used against ground targets. The Hornets carry Mark 82, Mark 83 and Mark 84 bombs, as well as GBU-10, GBU-12 and GBU-16 Paveway II laser-guided bombs. In addition, the aircraft have operated bombs fitted with JDAM guidance kits since 2008. The long-range JDAM-ER variant of these bombs was ordered in 2011 and entered service in 2015. During exercises the Hornets carry BDU-33 and BDU-57 LGTR training bombs. Since November 2011, the RAAF's Hornets have also been equipped with AGM-158 JASSM cruise missiles. The F/A-18s' main weapon in the maritime strike role is the Harpoon anti-ship missile; the RAAF initially operated the Block IC variant of this missile, but purchased Block II variants in 2003. 
In addition to these weapons, the Hornets can also be fitted with 330-US-gallon (1,200 L) drop tanks to extend their range. ## Operational history ### Introduction into service Four RAAF units converted to the Hornet between 1985 and 1988. The first 14 Hornets were allocated to No. 2 Operational Conversion Unit (2OCU) at RAAF Base Williamtown, and were used to train the pilots and instructors needed to convert the RAAF's three fighter squadrons to the aircraft. 2OCU's first Hornet operational conversion course began on 19 August 1985. In addition to the unit's training activities, 2OCU aircraft travelled widely around Australia and South East Asia during 1985 and 1986 to showcase the new aircraft. No. 3 Squadron was the first fighter unit to convert from the Mirage III, and became operational with the Hornet in August 1986. It was followed by No. 77 Squadron in June 1987 and No. 75 Squadron in May 1988. No. 81 Wing, whose headquarters is located at Williamtown, has commanded these four units since they converted to the F/A-18. As of 2012, 2OCU, No. 3 and No. 77 Squadrons are stationed at Williamtown and No. 75 Squadron is located at Tindal. In addition, two Hornets are allocated to the Aircraft Research and Development Unit at RAAF Base Edinburgh in South Australia. The RAAF's Mirage III pilots generally found the process of converting to the Hornet to be straightforward. While the F/A-18 was considered to be easier to fly, its more sophisticated avionics and weapons systems required improved cockpit workload management skills. The Hornets have also proven to be mechanically reliable and easy to maintain, though shortages of spare parts reduced availability rates during the early years of their service with the RAAF. The updates installed as part of the HUG process have further simplified maintenance procedures. In recent years, however, the aging aircraft have required much more servicing than was the case in the past. To extend the Hornets' range, four of the RAAF's six Boeing 707 transport aircraft were converted to tankers in the early 1990s; the first Boeing 707 tanker entered service in 1990. The tankers were operated by No. 33 Squadron and supported the Hornet units until the 707s were retired in 2008. These aircraft were replaced with KC-30A tanker-transports in 2011. The RAAF has at times suffered from shortfalls of Hornet-qualified pilots. The service began to experience shortages of F/A-18 and F-111 fast-jet pilots in the mid-1980s due to competition from commercial airlines and relatively low recruitment rates. By June 1999 the three operational Hornet-equipped squadrons had only 40 pilots, which was fewer than the number of aircraft allocated to these units. The RAAF claimed that the squadrons were able to meet their readiness targets, however. To overcome this shortfall, the RAAF gave its fast-jet units a higher priority for aircrew, implemented measures to reduce separation rates, and recruited pilots from other countries. These reforms coincided with reduced demand for civil pilots following the 11 September attacks, and by late 2003 the RAAF's fast-jet units were at near full strength. A 2010 article in the magazine Australian Aviation stated that No. 3 Squadron typically had "about 18 pilots on strength" at any point in time. At this time the total strength of the squadron, including air and ground crew, was around 300 personnel. ### Training As the Hornets are multi-role fighters, their pilots practise a wide range of tasks during peacetime training. 
Each year the three Hornet squadrons rotate between four-month training blocks focused on air-to-air combat, air-to-ground tactics and Australian Defence Force support tasks. The units undertake the air-to-air and air-to-ground blocks before assuming responsibility for Australian Defence Force support (which involves operating with the Australian Army and Royal Australian Navy). No. 81 Wing's headquarters oversees this training program and monitors adherence to common standards and procedures. Training sorties may include such tasks as defending air bases, infrastructure and shipping from enemy aircraft, attacking naval and ground targets, and practising in-flight refuelling. More unusual tasks such as dropping naval mines have also been practised at times. Major exercises often involve other RAAF units and aircraft, as well as units from the Army and Navy and contingents from other countries. As part of their regular training activities, F/A-18 Hornets operate in different parts of Australia and the Asia-Pacific region. Regular deployments are made to Singapore and RMAF Butterworth in Malaysia as part of Integrated Air Defence System exercises. In addition, RAAF F/A-18s have participated in exercises in the Philippines, Thailand and the United States. These deployments have seen Australian fighter squadrons range as far afield as Eielson Air Force Base in Alaska, where they took part in Red Flag – Alaska exercises in 2008 and 2011. Four of the RAAF's Hornets were destroyed in flying accidents during the late 1980s and early 1990s. A21-104 was the first aircraft to be lost when it crashed at Great Palm Island in Queensland on 18 November 1987; its pilot was killed. The next loss occurred on 2 August 1990 when two No. 75 Squadron Hornets (A21-29 and A21-42) collided. A21-42 crashed, killing the unit's commanding officer; the other aircraft was damaged but managed to return to base. On 5 June 1991 A21-41 crashed 100 kilometres (62 mi) north-east of Weipa, Queensland, killing its pilot. A21-106 was the fourth aircraft to be lost when it crashed inland from Shoalwater Bay in Queensland on 19 May 1992 – its pilot and a passenger from the Defence Science and Technology Organisation died. As of September 2017, all of the remaining 71 F/A-18s were still in service. Aviation writer Nigel Pittaway has noted that the type has "enjoyed an exemplary safety record during its RAAF service", especially when compared to the loss of 41 of the RAAF's 116 Mirages due to accidents. Similarly, Andrew McLaughlin noted in 2019 that the loss of four Hornets to this time was "a remarkable endorsement of the aircraft's rugged design and systems redundancy" given that the RAAF had projected that eleven would have been destroyed by 2004 when the aircraft were first acquired. ### Deployments In late 1990, consideration was given to deploying a squadron of F/A-18s to the Middle East as part of an expanded Australian contribution to the Gulf War. The Department of Defence opposed dispatching the aircraft on the grounds that doing so would greatly strain the fighter force in Australia, and this option was not adopted by the government. As a result, the Hornets' only role in the war was to support the training of the Royal Australian Navy warships which were sent to the Gulf by conducting mock attacks on the vessels as they sailed from Sydney to Perth. During late 1999, No. 75 Squadron was placed on alert to provide close air support and air defence for the international forces deployed to East Timor as part of INTERFET. 
While Indonesian forces posed a potential threat to this force, no fighting eventuated and the Hornets were not required. The first operational deployment of RAAF Hornets took place in 2001. Following the 11 September terrorist attacks, the Australian Government agreed to deploy F/A-18s to protect the major USAF air base on the Indian Ocean island of Diego Garcia, which was being used to mount operations in Afghanistan. Four No. 77 Squadron Hornets and 70 personnel departed for the island on 9 November. No. 3 Squadron pilots and ground crew relieved the No. 77 Squadron personnel in early February 2002. RAAF Hornets were not assigned to the War in Afghanistan as at the time they were less capable than other available coalition aircraft. While the F/A-18s were occasionally scrambled in response to reports of aircraft near the base, no threat developed. The detachment returned to Australia on 21 May 2002. No. 75 Squadron formed part of the Australian contribution to the 2003 invasion of Iraq. The squadron began initial planning for this deployment in December 2002, and intensive training was undertaken from January 2003. To improve the unit's readiness, air and ground crew as well as aircraft were also posted to No. 75 Squadron from other units. The Australian Government announced on 1 February that it would begin deploying RAAF aircraft, including a squadron of F/A-18s, to the Middle East. No. 75 Squadron departed from Tindal on 13 February, and arrived at Al Udeid Air Base in Qatar on the 16th of the month. The 14 F/A-18A Hornets selected for this deployment had received the HUG 2.1 package of upgrades and recently completed major servicing. These upgrades allowed the F/A-18s to operate alongside other coalition aircraft. In addition to No. 75 Squadron, several experienced Hornet pilots were also posted to the USAF Combined Air and Space Operations Center in the Middle East to provide advice on how to make the best use of the squadron. The Australian Hornets saw combat in several roles during the Iraq War. Following the outbreak of war on 20 March, No. 75 Squadron was initially used to escort high-value Coalition aircraft, such as tankers and airborne early warning and control aircraft. As it rapidly became clear that the Iraqi Air Force posed no threat, from 21 March No. 75 Squadron also began to conduct air interdiction sorties against Iraqi forces. These sorties were initially flown in support of the United States Army's V Corps, but the squadron was rarely assigned any targets to attack. As a result, the Australian commanders in the Middle East had No. 75 Squadron reassigned to support the United States Marine Corps' I Marine Expeditionary Force. At this time the squadron also began flying close air support sorties. During the first two weeks of the war the squadron typically flew twelve sorties per day. To avoid pilot fatigue, additional aircrew were posted to the Middle East from Australia. The number of sorties dropped to between six and ten per day from 5 April onwards as the American forces closed on Baghdad and few targets remained in southern Iraq. On 12 April, No. 75 Squadron supported elements of the Special Air Service Regiment and 4th Battalion, Royal Australian Regiment, which occupied Al Asad Airbase. During the last weeks of the war the squadron continued to fly sorties across western, central and southern Iraq to support British and American forces. 
In several of the squadron's operations in the final week of the war, the Hornets made low-altitude, high-speed passes over Iraqi positions to encourage their defenders to surrender. No. 75 Squadron conducted its final combat sorties on 27 April. During the war the squadron flew 350 combat missions (including 670 individual sorties) and dropped 122 laser-guided bombs. No. 75 Squadron did not suffer any casualties, and all 14 Hornets returned to Tindal on 14 May 2003.

RAAF Hornets have also provided air defence for several high-profile events in Australia since the 11 September attacks. In 2002, Hornets patrolled over the Commonwealth Heads of Government Meeting (CHOGM) at Coolum Beach, Queensland; this was the first time RAAF aircraft had flown air defence sorties over Australia since World War II. On 22 and 23 October 2003 a detachment of Hornets patrolled over Canberra during US President George W. Bush's visit to the city. A detachment of aircraft from No. 77 Squadron was deployed to RAAF Base East Sale in March 2006 to protect the Commonwealth Games, which were being held in Melbourne. In September 2007, Hornets patrolled over Sydney during the APEC leaders meeting there. Eight Hornets were also deployed from Williamtown to RAAF Base Pearce in October 2011 to protect the CHOGM in nearby Perth. On 16 and 17 November that year, Hornets operated over Canberra and Darwin while President Barack Obama was present.

In March 2015, six F/A-18As from No. 75 Squadron were deployed to the Middle East as part of Operation Okra, replacing a detachment of Super Hornets. No. 81 Wing's involvement in Operation Okra concluded in May 2017, with No. 1 Squadron resuming responsibility for this task. By this time all of the wing's three squadrons had completed at least one rotation to the Middle East: No. 3 Squadron was deployed once, and the other two squadrons conducted two deployments. The squadrons used a common 'pool' of aircraft during these deployments, with either six or seven Hornets being stationed in the Middle East at any time. The aircraft were typically deployed for eight months before rotating back to Australia when becoming due for major servicing. Members of the three Hornet-equipped squadrons served five- or six-month rotations, and ground crew from 2OCU and No. 81 Wing's workshops were also deployed to fill specialist roles. The Hornets attacked ISIL personnel and facilities in both Iraq and Syria, including in support of Iraqi forces engaged in the Battle of Mosul. Overall, No. 81 Wing conducted 1,973 sorties over Iraq and Syria during which 1,961 munitions were released. Despite the age of the aircraft and the harsh environmental conditions in the Middle East, the detachment sustained a very high serviceability rate.

### Retirement

While the Hornet Upgrade Program was successful, it was expected that the aircraft would become increasingly expensive to operate as they aged, and that improvements to the fighter aircraft and air defences operated by other countries would reduce their combat effectiveness. The Australian Government decided to replace the RAAF's F/A-18 Hornets with Lockheed Martin F-35A Lightning II fighters, with this process commencing in 2018. The acquisition process is designated Project AIR 6000 Phase 2A/B, and will involve the purchase of 72 F-35A fighters to equip three squadrons and an operational training unit. All of the F/A-18As and Bs were scheduled to be retired by 2022.
The RAAF's Hornet sustainability planning was designed to allow the type to be retained in service for longer if the F-35 program experienced further delays. No. 3 Squadron was the first Hornet unit to be reequipped, and ceased operating the type in December 2017. It began to transition to the F-35 in early 2018. The squadron's Hornets and most of its personnel were transferred to No. 77 Squadron, which was expanded from two to three flights as part of this change. 2OCU completed its final Hornet conversion training course in 2019, and ceased flying the type in December that year. No. 77 Squadron ceased Hornet operations in December 2020. As of May 2020, No. 75 Squadron was to begin converting to the F-35 in January 2022. The RAAF implemented several measures to keep the Hornets in service until the F-35As were ready. These included a structural refurbishment program, increased monitoring of fatigue-related issues as well as repainting the aircraft and frequently washing them to reduce the risks posed by corrosion. In 2015 the Defence Science and Technology Group revised the fatigue damage algorithm used for determining the Hornets' structural condition which found that the airframes were less fatigued than previously believed, and so able to remain in service for a longer period than planned if necessary. This finding was accepted by the Directorate General Technical Airworthiness – Australian Defence Force. As of September 2017, none of the RAAF Hornets were subject to flying restrictions due to airframe fatigue. However, the cost of maintaining the aging aircraft in service was increasing. A 2017 article by Canadian defence analyst Christopher Cowan and Australian Strategic Policy Institute analyst Dr. Andrew Davies stated that the RAAF "has done an excellent job managing its Hornet fleet", with each aircraft having a unique plan to minimise airframe fatigue. At this time each of the Hornets had, on average, been flown for 4,200 hours, as compared to the nominal fatigue life of 6,000 hours for the type. In August 2019, the Hornet fleet passed the milestone of having flown 400,000 hours. At this time, several Hornets had been retired from service after becoming due for major servicing periods. The Hornet was officially retired from RAAF service on 29 November 2021. A ceremony to mark the occasion took place that day at RAAF Base Williamtown. The fleet had been flown for almost 408,000 hours. ## Disposal The Australian Government is planning to sell the Hornets and associated spare parts after the type is retired from RAAF service. In August 2017, the Canadian Government initiated discussions to purchase a number of Australian F/A-18s to augment the Royal Canadian Air Force's fleet of similar McDonnell Douglas CF-18 Hornets in the event that a planned purchase of Super Hornets was cancelled as the result of a trade dispute. A Canadian delegation also visited Australia that month to inspect RAAF Hornets. The Canadian Government lodged a formal expression of interest to purchase Australian F/A-18s on 29 September 2017. On 13 December 2017, Australian Minister for Defence Marise Payne confirmed the sale of 18 F/A-18 Hornets and associated spare parts to Canada. The Canadian Government announced at the same time that it had cancelled its plans to acquire Super Hornets. The Australian aircraft are being acquired to enable the RCAF to continue to meet its international commitments until a new fighter type is ordered and enters service. 
In June 2018, the Canadian Government requested a further seven Australian Hornets. These additional aircraft will be used as a source of spare parts. The sale of the 25 Hornets was finalised in early 2019. Of these aircraft, 18 will be issued to operational units and the remainder used for trials purposes and as a source of spare parts. After they arrive in Canada, the aircraft will be fitted with different ejection seats and software so that they are identical to CF-18s.

Deliveries of ex-RAAF Hornets to Canada began in February 2019. Two of the aircraft were flown to CFB Cold Lake by Australian pilots in mid-February, and handed over after they had formed part of the RAAF contingent at a Red Flag exercise in the United States. At this time, deliveries of the other 23 Hornets were scheduled to be completed in 2021. However, this schedule is dependent on progress with introducing the F-35 into Australian service. At least one other Hornet was handed over to Canada during 2019. In August 2020 it was reported that 18 ex-Australian aircraft would be among 36 RCAF Hornets to be put through a modernisation program. The upgrades will include the installation of an AN/APG-79 active electronically scanned array radar and equipping the aircraft with AIM-9X Sidewinder air-to-air missiles and AGM-154 Joint Standoff Weapons.

In March 2020 the Minister for Defence Industry Melissa Price announced that up to 46 ex-RAAF Hornets, as well as the entire associated spare parts and test equipment inventory, would be sold to the American company Air USA. This company provides air combat training to the US Government. Journalist Nigel Pittaway noted that if all of these aircraft were sold, none of the Hornets would be preserved in Australian museums after the type left RAAF service. In May 2020, the Department of Defence stated that the numbers which would be sold to Canada and Air USA had not yet been finalised.

As of November 2021 it was not clear if the deal to sell Hornets to Air USA would proceed. The aircraft had still not been transferred in February 2023, but it was reported that the contract remained in place and that the Department of Defence still intended to deliver them. By this time Air USA had been rebranded as RAVN Aerospace. In March 2023 Australian Financial Review correspondent Aaron Patrick wrote that the sale "appears to have fallen through" and called for the Hornets to be transferred to Ukraine as part of Australia's assistance to the country following the Russian invasion in 2022. It was reported in April 2023 that the Department of Defence had repeatedly refused to comment on the status of the sale to RAVN Aerospace. In May 2023 Australian Strategic Policy Institute analyst Malcolm Davis stated that the Australian Hornets were stored at Andersen Air Force Base on the American island of Guam, and appeared to be in poor condition. Davis considered the aircraft too outdated to be suitable for Ukraine, and noted that the country had not requested them from Australia.

In June 2023 the Australian Financial Review reported that the Australian Government was discussing donating the Hornets to Ukraine with the Ukrainian and United States governments. The US Government was reported to be supportive of this transfer. The story also stated that multiple experts considered the aircraft to be suitable for Ukraine and that they were being stored at RAAF Base Williamtown.
ABC News also reported in June that the arrangement would require RAVN Aerospace to on-sell the aircraft to Ukraine, and that the company was willing to do so. If the Hornets are provided to Ukraine, it will be the largest-ever transfer of Australian military equipment to another country.

## Preservation

In May 2020, the Department of Defence announced that six F/A-18As (A21-22, -23, -29, -32, -40 and -43) and two F/A-18Bs (A21-101 and -103) would be preserved in Australia. Two of these aircraft were earmarked for static preservation at the Australian War Memorial. All eight Hornets were prepared for museum display by Boeing Defence Australia over a two-year period ending in May 2023.
15,573
Japan
1,173,907,448
Island country in East Asia
[ "East Asian countries", "G20 nations", "Island countries", "Japan", "Member states of the United Nations", "Northeast Asian countries", "OECD members", "Transcontinental countries" ]
Japan (Japanese: 日本, , Nippon or Nihon, and formally 日本国, Nihonkoku) is an island country in East Asia. It is situated in the northwest Pacific Ocean and is bordered on the west by the Sea of Japan, extending from the Sea of Okhotsk in the north toward the East China Sea, Philippine Sea, and Taiwan in the south. Japan is a part of the Ring of Fire, and spans an archipelago of 14,125 islands, with the five main islands being Hokkaido, Honshu (the "mainland"), Shikoku, Kyushu, and Okinawa. Tokyo is the nation's capital and largest city, followed by Yokohama, Osaka, Nagoya, Sapporo, Fukuoka, Kobe, and Kyoto. Japan is the eleventh most populous country in the world, as well as one of the most densely populated. About three-fourths of the country's terrain is mountainous, concentrating its highly urbanized population on narrow coastal plains. Japan is divided into 47 administrative prefectures and eight traditional regions. The Greater Tokyo Area is the most populous metropolitan area in the world. Japan has the world's highest life expectancy, though it is experiencing a population decline. Japan has been inhabited since the Upper Paleolithic period (30,000 BC). Between the 4th and 9th centuries, the kingdoms of Japan became unified under an emperor and the imperial court based in Heian-kyō. Beginning in the 12th century, political power was held by a series of military dictators (shōgun) and feudal lords (daimyō) and enforced by a class of warrior nobility (samurai). After a century-long period of civil war, the country was reunified in 1603 under the Tokugawa shogunate, which enacted an isolationist foreign policy. In 1854, a United States fleet forced Japan to open trade to the West, which led to the end of the shogunate and the restoration of imperial power in 1868. In the Meiji period, the Empire of Japan adopted a Western-modeled constitution and pursued a program of industrialization and modernization. Amidst a rise in militarism and overseas colonization, Japan invaded China in 1937 and entered World War II as an Axis power in 1941. After suffering defeat in the Pacific War and two atomic bombings, Japan surrendered in 1945 and came under a seven-year Allied occupation, during which it adopted a new constitution. Under the 1947 constitution, Japan has maintained a unitary parliamentary constitutional monarchy with a bicameral legislature, the National Diet. Japan is a developed country and a great power, with one of the largest economies by nominal GDP. Japan has renounced its right to declare war, though it maintains a Self-Defense Force that ranks as one of the world's strongest militaries. A global leader in the automotive, robotics, and electronics industries, the country has made significant contributions to science and technology and is one of the world's largest exporters and importers. It is part of multiple major international and intergovernmental institutions. Japan is considered a cultural superpower as the culture of Japan is well known around the world, including its art, cuisine, film, music, and popular culture, which encompasses prominent manga, anime, and video game industries. ## Etymology The name for Japan in Japanese is written using the kanji 日本 and is pronounced Nippon or Nihon. Before 日本 was adopted in the early 8th century, the country was known in China as Wa (倭, changed in Japan around 757 to 和) and in Japan by the endonym Yamato. 
Nippon, the original Sino-Japanese reading of the characters, is favored for official uses, including on banknotes and postage stamps. Nihon is typically used in everyday speech and reflects shifts in Japanese phonology during the Edo period. The characters 日本 mean "sun origin", which is the source of the popular Western epithet "Land of the Rising Sun". The name "Japan" is based on Chinese pronunciations of 日本 and was introduced to European languages through early trade. In the 13th century, Marco Polo recorded the early Mandarin or Wu Chinese pronunciation of the characters 日本國 as Cipangu. The old Malay name for Japan, Japang or Japun, was borrowed from a southern coastal Chinese dialect and encountered by Portuguese traders in Southeast Asia, who brought the word to Europe in the early 16th century. The first version of the name in English appears in a book published in 1577, which spelled the name as Giapan in a translation of a 1565 Portuguese letter. ## History ### Prehistoric to classical history A Paleolithic culture from around 30,000 BC constitutes the first known habitation of the islands of Japan. This was followed from around 14,500 BC (the start of the Jōmon period) by a Mesolithic to Neolithic semi-sedentary hunter-gatherer culture characterized by pit dwelling and rudimentary agriculture. Clay vessels from the period are among the oldest surviving examples of pottery. From around 700 BC, the Japonic-speaking Yayoi people began to enter the archipelago from the Korean Peninsula, intermingling with the Jōmon; the Yayoi period saw the introduction of practices including wet-rice farming, a new style of pottery, and metallurgy from China and Korea. According to legend, Emperor Jimmu (grandson of Amaterasu) founded a kingdom in central Japan in 660 BC, beginning a continuous imperial line. Japan first appears in written history in the Chinese Book of Han, completed in 111 AD. Buddhism was introduced to Japan from Baekje (a Korean kingdom) in 552, but the development of Japanese Buddhism was primarily influenced by China. Despite early resistance, Buddhism was promoted by the ruling class, including figures like Prince Shōtoku, and gained widespread acceptance beginning in the Asuka period (592–710). The far-reaching Taika Reforms in 645 nationalized all land in Japan, to be distributed equally among cultivators, and ordered the compilation of a household registry as the basis for a new system of taxation. The Jinshin War of 672, a bloody conflict between Prince Ōama and his nephew Prince Ōtomo, became a major catalyst for further administrative reforms. These reforms culminated with the promulgation of the Taihō Code, which consolidated existing statutes and established the structure of the central and subordinate local governments. These legal reforms created the ritsuryō state, a system of Chinese-style centralized government that remained in place for half a millennium. The Nara period (710–784) marked the emergence of a Japanese state centered on the Imperial Court in Heijō-kyō (modern Nara). The period is characterized by the appearance of a nascent literary culture with the completion of the Kojiki (712) and Nihon Shoki (720), as well as the development of Buddhist-inspired artwork and architecture. A smallpox epidemic in 735–737 is believed to have killed as much as one-third of Japan's population. In 784, Emperor Kanmu moved the capital, settling on Heian-kyō (modern-day Kyoto) in 794. 
This marked the beginning of the Heian period (794–1185), during which a distinctly indigenous Japanese culture emerged. Murasaki Shikibu's The Tale of Genji and the lyrics of Japan's national anthem "Kimigayo" were written during this time. ### Feudal era Japan's feudal era was characterized by the emergence and dominance of a ruling class of warriors, the samurai. In 1185, following the defeat of the Taira clan by the Minamoto clan in the Genpei War, samurai Minamoto no Yoritomo established a military government at Kamakura. After Yoritomo's death, the Hōjō clan came to power as regents for the shōgun. The Zen school of Buddhism was introduced from China in the Kamakura period (1185–1333) and became popular among the samurai class. The Kamakura shogunate repelled Mongol invasions in 1274 and 1281 but was eventually overthrown by Emperor Go-Daigo. Go-Daigo was defeated by Ashikaga Takauji in 1336, beginning the Muromachi period (1336–1573). The succeeding Ashikaga shogunate failed to control the feudal warlords (daimyō) and a civil war began in 1467, opening the century-long Sengoku period ("Warring States"). During the 16th century, Portuguese traders and Jesuit missionaries reached Japan for the first time, initiating direct commercial and cultural exchange between Japan and the West. Oda Nobunaga used European technology and firearms to conquer many other daimyō; his consolidation of power began what was known as the Azuchi–Momoyama period. After the death of Nobunaga in 1582, his successor, Toyotomi Hideyoshi, unified the nation in the early 1590s and launched two unsuccessful invasions of Korea in 1592 and 1597. Tokugawa Ieyasu served as regent for Hideyoshi's son Toyotomi Hideyori and used his position to gain political and military support. When open war broke out, Ieyasu defeated rival clans in the Battle of Sekigahara in 1600. He was appointed shōgun by Emperor Go-Yōzei in 1603 and established the Tokugawa shogunate at Edo (modern Tokyo). The shogunate enacted measures including buke shohatto, as a code of conduct to control the autonomous daimyō, and in 1639 the isolationist sakoku ("closed country") policy that spanned the two and a half centuries of tenuous political unity known as the Edo period (1603–1868). Modern Japan's economic growth began in this period, resulting in roads and water transportation routes, as well as financial instruments such as futures contracts, banking and insurance of the Osaka rice brokers. The study of Western sciences (rangaku) continued through contact with the Dutch enclave in Nagasaki. The Edo period gave rise to kokugaku ("national studies"), the study of Japan by the Japanese. ### Modern era The United States Navy sent Commodore Matthew C. Perry to force the opening of Japan to the outside world. Arriving at Uraga with four "Black Ships" in July 1853, the Perry Expedition resulted in the March 1854 Convention of Kanagawa. Subsequent similar treaties with other Western countries brought economic and political crises. The resignation of the shōgun led to the Boshin War and the establishment of a centralized state nominally unified under the emperor (the Meiji Restoration). Adopting Western political, judicial, and military institutions, the Cabinet organized the Privy Council, introduced the Meiji Constitution (November 29, 1890), and assembled the Imperial Diet. 
During the Meiji period (1868–1912), the Empire of Japan emerged as the most developed nation in Asia and as an industrialized world power that pursued military conflict to expand its sphere of influence. After victories in the First Sino-Japanese War (1894–1895) and the Russo-Japanese War (1904–1905), Japan gained control of Taiwan, Korea and the southern half of Sakhalin. The Japanese population doubled from 35 million in 1873 to 70 million by 1935, with a significant shift to urbanization. The early 20th century saw a period of Taishō democracy (1912–1926) overshadowed by increasing expansionism and militarization. World War I allowed Japan, which joined the side of the victorious Allies, to capture German possessions in the Pacific and in China. The 1920s saw a political shift towards statism, a period of lawlessness following the 1923 Great Tokyo Earthquake, the passing of laws against political dissent, and a series of attempted coups. This process accelerated during the 1930s, spawning several radical nationalist groups that shared a hostility to liberal democracy and a dedication to expansion in Asia. In 1931, Japan invaded and occupied Manchuria; following international condemnation of the occupation, it resigned from the League of Nations two years later. In 1936, Japan signed the Anti-Comintern Pact with Nazi Germany; the 1940 Tripartite Pact made it one of the Axis Powers. The Empire of Japan invaded other parts of China in 1937, precipitating the Second Sino-Japanese War (1937–1945). In 1940, the Empire invaded French Indochina, after which the United States placed an oil embargo on Japan. On December 7–8, 1941, Japanese forces carried out surprise attacks on Pearl Harbor, as well as on British forces in Malaya, Singapore, and Hong Kong, among others, beginning World War II in the Pacific. Throughout areas occupied by Japan during the war, numerous abuses were committed against local inhabitants, with many forced into sexual slavery. After Allied victories during the next four years, which culminated in the Soviet invasion of Manchuria and the atomic bombings of Hiroshima and Nagasaki in 1945, Japan agreed to an unconditional surrender. The war cost Japan its colonies and millions of lives. The Allies (led by the United States) repatriated millions of Japanese settlers from their former colonies and military camps throughout Asia, largely eliminating the Japanese Empire and its influence over the territories it conquered. The Allies convened the International Military Tribunal for the Far East to prosecute Japanese leaders for war crimes. In 1947, Japan adopted a new constitution emphasizing liberal democratic practices. The Allied occupation ended with the Treaty of San Francisco in 1952, and Japan was granted membership in the United Nations in 1956. A period of record growth propelled Japan to become the second-largest economy in the world; this ended in the mid-1990s after the popping of an asset price bubble, beginning the "Lost Decade". On March 11, 2011, Japan suffered one of the largest earthquakes in its recorded history, triggering the Fukushima Daiichi nuclear disaster. On May 1, 2019, after the historic abdication of Emperor Akihito, his son Naruhito became Emperor, beginning the Reiwa era. ## Geography Japan comprises 14,125 islands extending along the Pacific coast of Asia. It stretches over 3000 km (1900 mi) northeast–southwest from the Sea of Okhotsk to the East China Sea. 
The country's five main islands, from north to south, are Hokkaido, Honshu, Shikoku, Kyushu and Okinawa. The Ryukyu Islands, which include Okinawa, are a chain to the south of Kyushu. The Nanpō Islands are south and east of the main islands of Japan. Together they are often known as the Japanese archipelago. As of 2019, Japan's territory is 377,975.24 km<sup>2</sup> (145,937.06 sq mi). Japan has the sixth-longest coastline in the world at 29,751 km (18,486 mi). Because of its far-flung outlying islands, Japan has the eighth-largest exclusive economic zone in the world, covering 4,470,000 km<sup>2</sup> (1,730,000 sq mi). The Japanese archipelago is 67% forests and 14% agricultural. The primarily rugged and mountainous terrain is restricted for habitation. Thus the habitable zones, mainly in the coastal areas, have very high population densities: Japan is the 40th most densely populated country. Honshu has the highest population density at 450 persons/km<sup>2</sup> (1200/sq mi) as of 2010, while Hokkaido has the lowest density of 64.5 persons/km<sup>2</sup> as of 2016. As of 2014, approximately 0.5% of Japan's total area is reclaimed land (umetatechi). Lake Biwa is an ancient lake and the country's largest freshwater lake. Japan is substantially prone to earthquakes, tsunami and volcanic eruptions because of its location along the Pacific Ring of Fire. It has the 17th highest natural disaster risk as measured in the 2016 World Risk Index. Japan has 111 active volcanoes. Destructive earthquakes, often resulting in tsunami, occur several times each century; the 1923 Tokyo earthquake killed over 140,000 people. More recent major quakes are the 1995 Great Hanshin earthquake and the 2011 Tōhoku earthquake, which triggered a large tsunami. ### Climate The climate of Japan is predominantly temperate but varies greatly from north to south. The northernmost region, Hokkaido, has a humid continental climate with long, cold winters and very warm to cool summers. Precipitation is not heavy, but the islands usually develop deep snowbanks in the winter. In the Sea of Japan region on Honshu's west coast, northwest winter winds bring heavy snowfall during winter. In the summer, the region sometimes experiences extremely hot temperatures because of the foehn. The Central Highland has a typical inland humid continental climate, with large temperature differences between summer and winter. The mountains of the Chūgoku and Shikoku regions shelter the Seto Inland Sea from seasonal winds, bringing mild weather year-round. The Pacific coast features a humid subtropical climate that experiences milder winters with occasional snowfall and hot, humid summers because of the southeast seasonal wind. The Ryukyu and Nanpō Islands have a subtropical climate, with warm winters and hot summers. Precipitation is very heavy, especially during the rainy season. The main rainy season begins in early May in Okinawa, and the rain front gradually moves north. In late summer and early autumn, typhoons often bring heavy rain. According to the Environment Ministry, heavy rainfall and increasing temperatures have caused problems in the agricultural industry and elsewhere. The highest temperature ever measured in Japan, 41.1 °C (106.0 °F), was recorded on July 23, 2018, and repeated on August 17, 2020. ### Biodiversity Japan has nine forest ecoregions which reflect the climate and geography of the islands. 
They range from subtropical moist broadleaf forests in the Ryūkyū and Bonin Islands, to temperate broadleaf and mixed forests in the mild climate regions of the main islands, to temperate coniferous forests in the cold, winter portions of the northern islands. Japan has over 90,000 species of wildlife as of 2019, including the brown bear, the Japanese macaque, the Japanese raccoon dog, the small Japanese field mouse, and the Japanese giant salamander. A large network of national parks has been established to protect important areas of flora and fauna as well as 52 Ramsar wetland sites. Four sites have been inscribed on the UNESCO World Heritage List for their outstanding natural value. ### Environment In the period of rapid economic growth after World War II, environmental policies were downplayed by the government and industrial corporations; as a result, environmental pollution was widespread in the 1950s and 1960s. Responding to rising concerns, the government introduced environmental protection laws in 1970. The oil crisis in 1973 also encouraged the efficient use of energy because of Japan's lack of natural resources. Japan ranks 20th in the 2018 Environmental Performance Index, which measures a nation's commitment to environmental sustainability. Japan is the world's fifth-largest emitter of carbon dioxide. As the host and signatory of the 1997 Kyoto Protocol, Japan is under treaty obligation to reduce its carbon dioxide emissions and to take other steps to curb climate change. In 2020 the government of Japan announced a target of carbon-neutrality by 2050. Environmental issues include urban air pollution (NOx, suspended particulate matter, and toxics), waste management, water eutrophication, nature conservation, climate change, chemical management and international co-operation for conservation. ## Government and politics Japan is a unitary state and constitutional monarchy in which the power of the Emperor is limited to a ceremonial role. Executive power is instead wielded by the Prime Minister of Japan and his Cabinet, whose sovereignty is vested in the Japanese people. Naruhito is the Emperor of Japan, having succeeded his father Akihito upon his accession to the Chrysanthemum Throne in 2019. Japan's legislative organ is the National Diet, a bicameral parliament. It consists of a lower House of Representatives with 465 seats, elected by popular vote every four years or when dissolved, and an upper House of Councillors with 245 seats, whose popularly-elected members serve six-year terms. There is universal suffrage for adults over 18 years of age, with a secret ballot for all elected offices. The prime minister as the head of government has the power to appoint and dismiss Ministers of State, and is appointed by the emperor after being designated from among the members of the Diet. Fumio Kishida is Japan's prime minister; he took office after winning the 2021 Liberal Democratic Party leadership election. The right-wing big tent Liberal Democratic Party has been the dominant party in the country since the 1950s, often called the 1955 System. Historically influenced by Chinese law, the Japanese legal system developed independently during the Edo period through texts such as Kujikata Osadamegaki. Since the late 19th century, the judicial system has been largely based on the civil law of Europe, notably Germany. In 1896, Japan established a civil code based on the German Bürgerliches Gesetzbuch, which remains in effect with post–World War II modifications. 
The Constitution of Japan, adopted in 1947, is the oldest unamended constitution in the world. Statutory law originates in the legislature, and the constitution requires that the emperor promulgate legislation passed by the Diet without giving him the power to oppose legislation. The main body of Japanese statutory law is called the Six Codes. Japan's court system is divided into four basic tiers: the Supreme Court and three levels of lower courts. According to data from the Inter-Parliamentary Union, the majority of members of the Japanese parliament are male and range in age from 50 to 70. In April 2023, according to Japanese public broadcaster NHK, Ryosuke Takashima, 26, became Japan's youngest-ever mayor.

### Administrative divisions

Japan is divided into 47 prefectures, each overseen by an elected governor and legislature. The prefectures are grouped into eight traditional regions.

### Foreign relations

A member state of the United Nations since 1956, Japan is one of the G4 nations seeking reform of the Security Council. Japan is a member of the G7, APEC, and "ASEAN Plus Three", and is a participant in the East Asia Summit. It is the world's fifth-largest donor of official development assistance, donating US\$9.2 billion in 2014. In 2021, Japan had the fourth-largest diplomatic network in the world.

Japan has close economic and military relations with the United States, with which it maintains a security alliance. The United States is a major market for Japanese exports and a major source of Japanese imports, and is committed to defending the country, with military bases in Japan. Japan is also a member of the Quadrilateral Security Dialogue (more commonly "the Quad"), a multilateral security dialogue reformed in 2017 aiming to limit Chinese influence in the Indo-Pacific region, along with the United States, Australia, and India, reflecting existing relations and patterns of cooperation.

Japan's relationship with South Korea had historically been strained because of Japan's treatment of Koreans during Japanese colonial rule, particularly over the issue of comfort women. In 2015, Japan agreed to settle the comfort women dispute with South Korea by issuing a formal apology and paying money to the surviving comfort women. As of 2019, Japan is a major importer of Korean music (K-pop), television (K-dramas), and other cultural products.

Japan is engaged in several territorial disputes with its neighbors. Japan contests Russia's control of the Southern Kuril Islands, which were occupied by the Soviet Union in 1945. Japan acknowledges but does not accept South Korea's control of the Liancourt Rocks, which it also claims. Japan has strained relations with China and Taiwan over the Senkaku Islands and the status of Okinotorishima.

### Military

Japan is the second-highest-ranked Asian country in the 2022 Global Peace Index, after Singapore. It spent 1% of its total GDP on its defence budget in 2020, and maintained the tenth-largest military budget in the world in 2022. The country's military (the Japan Self-Defense Forces) is restricted by Article 9 of the Japanese Constitution, which renounces Japan's right to declare war or use military force in international disputes. The military is governed by the Ministry of Defense, and primarily consists of the Japan Ground Self-Defense Force, the Japan Maritime Self-Defense Force, and the Japan Air Self-Defense Force. The deployment of troops to Iraq and Afghanistan marked the first overseas use of Japan's military since World War II.
The Government of Japan has been making changes to its security policy which include the establishment of the National Security Council, the adoption of the National Security Strategy, and the development of the National Defense Program Guidelines. In May 2014, Prime Minister Shinzo Abe said Japan wanted to shed the passiveness it has maintained since the end of World War II and take more responsibility for regional security. In December 2022, Prime Minister Fumio Kishida further confirmed this trend, instructing the government to increase spending by 65% until 2027. Recent tensions, particularly with North Korea and China, have reignited the debate over the status of the JSDF and its relation to Japanese society. ### Domestic law enforcement Domestic security in Japan is provided mainly by the prefectural police departments, under the oversight of the National Police Agency. As the central coordinating body for the Prefectural Police Departments, the National Police Agency is administered by the National Public Safety Commission. The Special Assault Team comprises national-level counter-terrorism tactical units that cooperate with territorial-level Anti-Firearms Squads and Counter-NBC Terrorism Squads. The Japan Coast Guard guards territorial waters surrounding Japan and uses surveillance and control countermeasures against smuggling, marine environmental crime, poaching, piracy, spy ships, unauthorized foreign fishing vessels, and illegal immigration. The Firearm and Sword Possession Control Law strictly regulates the civilian ownership of guns, swords, and other weaponry. According to the United Nations Office on Drugs and Crime, among the member states of the UN that report statistics as of 2018, the incidence rates of violent crimes such as murder, abduction, sexual violence, and robbery are very low in Japan. ### Human rights Japan has faced criticism for not allowing same-sex marriages, despite a majority of Japanese people supporting marriage equality. It is considered to be the least developed out of the G7 nations in terms of LGBT equality. Japan does not have any law which explicitly bans racial or religious discrimination. ## Economy Japan has the world's third-largest economy by nominal GDP, after that of the United States and China; and the fourth-largest economy by PPP. As of 2021, Japan's labor force is the world's eighth-largest, consisting of over 68.6 million workers. As of 2021, Japan has a low unemployment rate of around 2.8%. Its poverty rate is the second highest among the G7 nations, and exceeds 15.7% of the population. Japan has the highest ratio of public debt to GDP among advanced economies, with national debt estimated at 248% relative to GDP as of 2022. The Japanese yen is the world's third-largest reserve currency after the US dollar and the euro. Japan was the world's fifth-largest exporter and fourth-largest importer in 2022. Its exports amounted to 18.4% of its total GDP in 2021. As of 2022, Japan's main export markets were China (23.9 percent, including Hong Kong) and the United States (18.5 percent). Its main exports are motor vehicles, iron and steel products, semiconductors, and auto parts. Japan's main import markets as of 2022 were China (21.1 percent), the United States (9.9 percent), and Australia (9.8 percent). Japan's main imports are machinery and equipment, fossil fuels, foodstuffs, chemicals, and raw materials for its industries. 
The Japanese variant of capitalism has many distinct features: keiretsu enterprises are influential, and lifetime employment and seniority-based career advancement are common in the Japanese work environment. Japan has a large cooperative sector, with three of the world's ten largest cooperatives, including the largest consumer cooperative and the largest agricultural cooperative as of 2018. It ranks highly for competitiveness and economic freedom. Japan ranked sixth in the Global Competitiveness Report in 2019.

It attracted 31.9 million international tourists in 2019, and was ranked eleventh in the world in 2019 for inbound tourism. The 2021 Travel and Tourism Competitiveness Report ranked Japan first in the world out of 117 countries. Its international tourism receipts in 2019 amounted to \$46.1 billion.

### Agriculture and fishery

The Japanese agricultural sector accounts for about 1.2% of the country's total GDP as of 2018. Only 11.5% of Japan's land is suitable for cultivation. Because of this lack of arable land, a system of terraces is used to farm in small areas. This results in one of the world's highest levels of crop yields per unit area, with an agricultural self-sufficiency rate of about 50% as of 2018. Japan's small agricultural sector is highly subsidized and protected. There has been growing concern about farming, as farmers are aging and having difficulty finding successors.

Japan ranked seventh in the world in tonnage of fish caught, capturing 3,167,610 metric tons of fish in 2016, down from an annual average of 4,000,000 tons over the previous decade. Japan maintains one of the world's largest fishing fleets and accounts for nearly 15% of the global catch, prompting critiques that Japan's fishing is leading to depletion in fish stocks such as tuna. Japan has sparked controversy by supporting commercial whaling.

### Industry and services

Japan has a large industrial capacity and is home to some of the "largest and most technologically advanced producers of motor vehicles, machine tools, steel and nonferrous metals, ships, chemical substances, textiles, and processed foods". Japan's industrial sector makes up approximately 27.5% of its GDP. The country's manufacturing output is the third highest in the world as of 2019. Japan is the third-largest automobile producer in the world as of 2022 and is home to Toyota, the world's largest automobile company by vehicle production. Quantitatively, Japan was the world's largest exporter of cars in 2021, though it was overtaken by China in early 2023. The Japanese shipbuilding industry faces increasing competition from its East Asian neighbors, South Korea and China; a 2020 government initiative identified this sector as a target for increasing exports.

Japan's service sector accounts for about 69.5% of its total economic output as of 2021. Banking, retail, transportation, and telecommunications are all major industries, with companies such as Toyota, Mitsubishi UFJ, NTT, ÆON, Softbank, Hitachi, and Itochu listed as among the largest in the world.

### Science and technology

Japan is a leading nation in scientific research, particularly in the natural sciences and engineering. The country ranks twelfth among the most innovative countries in the 2020 Bloomberg Innovation Index and 13th in the Global Innovation Index in 2022, up from 15th in 2019.
Relative to gross domestic product, Japan's research and development budget is the second highest in the world, with 867,000 researchers sharing a 19-trillion-yen research and development budget as of 2017. The country has produced twenty-two Nobel laureates in either physics, chemistry or medicine, and three Fields medalists. Japan leads the world in robotics production and use, supplying 45% of the world's 2020 total; down from 55% in 2017. Japan has the second highest number of researchers in science and technology per capita in the world with 14 per 1000 employees. Once considered the strongest in the world, the Japanese consumer electronics industry is in a state of decline as regional competition arises in neighboring East Asian countries such as South Korea and China. However, video gaming in Japan remains a major industry. In 2014, Japan's consumer video game market grossed \$9.6 billion, with \$5.8 billion coming from mobile gaming. By 2015, Japan had become the world's fourth-largest PC game market, behind only China, the United States, and South Korea. The Japan Aerospace Exploration Agency is Japan's national space agency; it conducts space, planetary, and aviation research, and leads development of rockets and satellites. It is a participant in the International Space Station: the Japanese Experiment Module (Kibō) was added to the station during Space Shuttle assembly flights in 2008. The space probe Akatsuki was launched in 2010 and achieved orbit around Venus in 2015. Japan's plans in space exploration include building a Moon base and landing astronauts by 2030. In 2007, it launched lunar explorer SELENE (Selenological and Engineering Explorer) from Tanegashima Space Center. The largest lunar mission since the Apollo program, its purpose was to gather data on the Moon's origin and evolution. The explorer entered a lunar orbit on October 4, 2007, and was deliberately crashed into the Moon on June 11, 2009. ## Infrastructure ### Transportation Japan has invested heavily in transportation infrastructure. The country has approximately 1,200,000 kilometers (750,000 miles) of roads made up of 1,000,000 kilometers (620,000 miles) of city, town and village roads, 130,000 kilometers (81,000 miles) of prefectural roads, 54,736 kilometers (34,011 miles) of general national highways and 7641 kilometers (4748 miles) of national expressways as of 2017. Since privatization in 1987, dozens of Japanese railway companies compete in regional and local passenger transportation markets; major companies include seven JR enterprises, Kintetsu, Seibu Railway and Keio Corporation. The high-speed Shinkansen (bullet trains) that connect major cities are known for their safety and punctuality. There are 175 airports in Japan as of 2013. The largest domestic airport, Haneda Airport in Tokyo, was Asia's second-busiest airport in 2019. The Keihin and Hanshin superport hubs are among the largest in the world, at 7.98 and 5.22 million TEU respectively as of 2017. ### Energy As of 2019, 37.1% of energy in Japan was produced from petroleum, 25.1% from coal, 22.4% from natural gas, 3.5% from hydropower and 2.8% from nuclear power, among other sources. Nuclear power was down from 11.2 percent in 2010. By May 2012 all of the country's nuclear power plants had been taken offline because of ongoing public opposition following the Fukushima Daiichi nuclear disaster in March 2011, though government officials continued to try to sway public opinion in favor of returning at least some to service. 
The Sendai Nuclear Power Plant restarted in 2015, and since then several other nuclear power plants have been restarted. Japan lacks significant domestic reserves and has a heavy dependence on imported energy. The country has therefore aimed to diversify its sources and maintain high levels of energy efficiency. ### Water supply and sanitation Responsibility for the water and sanitation sector is shared between the Ministry of Health, Labour and Welfare, in charge of water supply for domestic use; the Ministry of Land, Infrastructure, Transport, and Tourism, in charge of water resources development as well as sanitation; the Ministry of the Environment, in charge of ambient water quality and environmental preservation; and the Ministry of Internal Affairs and Communications, in charge of performance benchmarking of utilities. Access to an improved water source is universal in Japan. About 98% of the population receives piped water supply from public utilities. ## Demographics Japan has a population of almost 125 million, of which nearly 122 million are Japanese nationals (2022 estimates). A small population of foreign residents makes up the remainder. Japan is the world's fastest aging country and has the highest proportion of elderly citizens of any country, comprising one-third of its total population; this is the result of a post–World War II baby boom, which was followed by an increase in life expectancy and a decrease in birth rates. Japan has a total fertility rate of 1.4, which is below the replacement rate of 2.1, and is among the world's lowest; it has a median age of 48.4, the highest in the world. As of 2020, over 28.7 percent of the population is over 65, or more than one in four out of the Japanese population. As a growing number of younger Japanese are not marrying or remaining childless, Japan's population is expected to drop to around 88 million by 2065. The changes in demographic structure have created several social issues, particularly a decline in the workforce population and an increase in the cost of social security benefits. The Government of Japan projects that there will be almost one elderly person for each person of working age by 2060. Immigration and birth incentives are sometimes suggested as a solution to provide younger workers to support the nation's aging population. On April 1, 2019, Japan's revised immigration law was enacted, protecting the rights of foreign workers to help reduce labor shortages in certain sectors. In 2019, 92% of the total Japanese population lived in cities. The capital city, Tokyo, has a population of 13.9 million (2022). It is part of the Greater Tokyo Area, the biggest metropolitan area in the world with 38,140,000 people (2016). Japan is an ethnically and culturally homogeneous society, with the Japanese people forming 98.1% of the country's population. Minority ethnic groups in the country include the indigenous Ainu and Ryukyuan people. Zainichi Koreans, Chinese, Filipinos, Brazilians mostly of Japanese descent, and Peruvians mostly of Japanese descent are also among Japan's small minority groups. Burakumin make up a social minority group. ### Religion Japan's constitution guarantees full religious freedom. Upper estimates suggest that 84–96 percent of the Japanese population subscribe to Shinto as its indigenous religion. However, these estimates are based on people affiliated with a temple, rather than the number of true believers. 
Many Japanese people practice both Shinto and Buddhism; they can either identify with both religions or describe themselves as non-religious or spiritual. The level of participation in religious ceremonies as a cultural tradition remains high, especially during festivals and occasions such as the first shrine visit of the New Year. Taoism and Confucianism from China have also influenced Japanese beliefs and customs. Christianity was first introduced into Japan by Jesuit missions starting in 1549. Today, 1% to 1.5% of the population are Christians. Throughout the latest century, Western customs originally related to Christianity (including Western style weddings, Valentine's Day and Christmas) have become popular as secular customs among many Japanese. About 90% of those practicing Islam in Japan are foreign-born migrants as of 2016. As of 2018 there were an estimated 105 mosques and 200,000 Muslims in Japan, 43,000 of which were Japanese nationals. Other minority religions include Hinduism, Judaism, and Baháʼí Faith, as well as the animist beliefs of the Ainu. ### Languages The Japanese language is Japan's de facto national language and the primary written and spoken language of most people in the country. Japanese writing uses kanji (Chinese characters) and two sets of kana (syllabaries based on cursive script and radicals used by kanji), as well as the Latin alphabet and Arabic numerals. English has taken a major role in Japan as a business and international link language. As a result, the prevalence of English in the educational system has increased, with English classes becoming mandatory at all levels of the Japanese school system by 2020. Japanese Sign Language is the primary sign language used in Japan and has gained some official recognition, but its usage has been historically hindered by discriminatory policies and a lack of educational support. Besides Japanese, the Ryukyuan languages (Amami, Kunigami, Okinawan, Miyako, Yaeyama, Yonaguni), part of the Japonic language family, are spoken in the Ryukyu Islands chain. Few children learn these languages, but local governments have sought to increase awareness of the traditional languages. The Ainu language, which is a language isolate, is moribund, with only a few native speakers remaining as of 2014. Additionally, a number of other languages are taught and used by ethnic minorities, immigrant communities, and a growing number of foreign-language students, such as Korean (including a distinct Zainichi Korean dialect), Chinese and Portuguese. ### Education Since the 1947 Fundamental Law of Education, compulsory education in Japan comprises elementary and junior high school, which together last for nine years. Almost all children continue their education at a three-year senior high school. The two top-ranking universities in Japan are the University of Tokyo and Kyoto University. Starting in April 2016, various schools began the academic year with elementary school and junior high school integrated into one nine-year compulsory schooling program; MEXT plans for this approach to be adopted nationwide. The Programme for International Student Assessment (PISA) coordinated by the OECD ranks the knowledge and skills of Japanese 15-year-olds as the third best in the world. Japan is one of the top-performing OECD countries in reading literacy, math and sciences with the average student scoring 520 and has one of the world's highest-educated labor forces among OECD countries. 
It spent roughly 3.1% of its total GDP on education as of 2018, below the OECD average of 4.9%. In 2021, the country ranked third for the percentage of 25-to-64-year-olds who have attained tertiary education, at 55.6%. Approximately 65% of Japanese aged 25 to 34 have some form of tertiary education qualification, and bachelor's degrees are held by 34.2% of Japanese aged 25 to 64, the second most in the OECD after South Korea. In 2020, the share of women among graduates of tertiary programmes was 51.8%.

### Health

Health care in Japan is provided by national and local governments. Payment for personal medical services is offered through a universal health insurance system that provides relative equality of access, with fees set by a government committee. People without insurance through employers can participate in a national health insurance program administered by local governments. Since 1973, all elderly persons have been covered by government-sponsored insurance.

Japan spent 10.74% of its total GDP on healthcare in 2019. In 2020, the overall life expectancy in Japan at birth was 84.62 years (81.64 years for males and 87.74 years for females), the highest in the world, while it had a very low infant mortality rate (2 per 1,000 live births). Since 1981, the principal cause of death in Japan has been cancer, which accounted for 27% of the total deaths in 2018, followed by cardiovascular diseases, which led to 15% of the deaths. Japan has one of the world's highest suicide rates, which is considered a major social issue. Another significant public health issue is smoking among Japanese men. However, Japan has the lowest rate of heart disease in the OECD, and the lowest level of dementia among developed countries.

## Culture

Contemporary Japanese culture combines influences from Asia, Europe, and North America. Traditional Japanese arts include crafts such as ceramics, textiles, lacquerware, swords and dolls; performances of bunraku, kabuki, noh, dance, and rakugo; and other practices such as the tea ceremony, ikebana, martial arts, calligraphy, origami, onsen, geisha and games. Japan has a developed system for the protection and promotion of both tangible and intangible Cultural Properties and National Treasures. Twenty-two sites have been inscribed on the UNESCO World Heritage List, eighteen of which are of cultural significance. Japan is considered a cultural superpower.

### Art and architecture

The history of Japanese painting exhibits synthesis and competition between native Japanese esthetics and imported ideas. The interaction between Japanese and European art has been significant: for example ukiyo-e prints, which began to be exported in the 19th century in the movement known as Japonism, had a significant influence on the development of modern art in the West, most notably on post-Impressionism.

Japanese architecture is a combination of local and other influences. It has traditionally been typified by wooden or mud plaster structures, elevated slightly off the ground, with tiled or thatched roofs. The Shrines of Ise have been celebrated as the prototype of Japanese architecture. Traditional housing and many temple buildings see the use of tatami mats and sliding doors that break down the distinction between rooms and indoor and outdoor space. Since the 19th century, Japan has incorporated much of Western modern architecture into construction and design.
It was not until after World War II that Japanese architects made an impression on the international scene, firstly with the work of architects like Kenzō Tange and then with movements like Metabolism. ### Literature and philosophy The earliest works of Japanese literature include the Kojiki and Nihon Shoki chronicles and the Man'yōshū poetry anthology, all from the 8th century and written in Chinese characters. In the early Heian period, the system of phonograms known as kana (hiragana and katakana) was developed. The Tale of the Bamboo Cutter is considered the oldest extant Japanese narrative. An account of court life is given in The Pillow Book by Sei Shōnagon, while The Tale of Genji by Murasaki Shikibu is often described as the world's first novel. During the Edo period, the chōnin ("townspeople") overtook the samurai aristocracy as producers and consumers of literature. The popularity of the works of Saikaku, for example, reveals this change in readership and authorship, while Bashō revivified the poetic tradition of the Kokinshū with his haikai (haiku) and wrote the poetic travelogue Oku no Hosomichi. The Meiji era saw the decline of traditional literary forms as Japanese literature integrated Western influences. Natsume Sōseki and Mori Ōgai were significant novelists in the early 20th century, followed by Ryūnosuke Akutagawa, Jun'ichirō Tanizaki, Kafū Nagai and, more recently, Haruki Murakami and Kenji Nakagami. Japan has two Nobel Prize-winning authors – Yasunari Kawabata (1968) and Kenzaburō Ōe (1994). Japanese philosophy has historically been a fusion of both foreign, particularly Chinese and Western, and uniquely Japanese elements. In its literary forms, Japanese philosophy began about fourteen centuries ago. Confucian ideals remain evident in the Japanese concept of society and the self, and in the organization of the government and the structure of society. Buddhism has profoundly impacted Japanese psychology, metaphysics, and esthetics. ### Performing arts Japanese music is eclectic and diverse. Many instruments, such as the koto, were introduced in the 9th and 10th centuries. The popular folk music, with the guitar-like shamisen, dates from the 16th century. Western classical music, introduced in the late 19th century, forms an integral part of Japanese culture. Kumi-daiko (ensemble drumming) was developed in postwar Japan and became very popular in North America. Popular music in post-war Japan has been heavily influenced by American and European trends, which has led to the evolution of J-pop. Karaoke is a significant cultural activity. The four traditional theaters from Japan are noh, kyōgen, kabuki, and bunraku. Noh is one of the oldest continuous theater traditions in the world. ### Holidays Officially, Japan has 16 national, government-recognized holidays. Public holidays in Japan are regulated by the Public Holiday Law (国民の祝日に関する法律, Kokumin no Shukujitsu ni Kansuru Hōritsu) of 1948. Beginning in 2000, Japan implemented the Happy Monday System, which moved a number of national holidays to Monday in order to obtain a long weekend. 
The national holidays in Japan are New Year's Day on January 1, Coming of Age Day on the second Monday of January, National Foundation Day on February 11, The Emperor's Birthday on February 23, Vernal Equinox Day on March 20 or 21, Shōwa Day on April 29, Constitution Memorial Day on May 3, Greenery Day on May 4, Children's Day on May 5, Marine Day on the third Monday of July, Mountain Day on August 11, Respect for the Aged Day on the third Monday of September, Autumnal Equinox on September 23 or 24, Health and Sports Day on the second Monday of October, Culture Day on November 3, and Labor Thanksgiving Day on November 23. ### Cuisine Japanese cuisine offers a vast array of regional specialties that use traditional recipes and local ingredients. Seafood and Japanese rice or noodles are traditional staples. Japanese curry, since its introduction to Japan from British India, is so widely consumed that it can be termed a national dish, alongside ramen and sushi. Traditional Japanese sweets are known as wagashi. Ingredients such as red bean paste and mochi are used. More modern-day tastes include green tea ice cream. Popular Japanese beverages include sake, which is a brewed rice beverage that typically contains 14–17% alcohol and is made by multiple fermentation of rice. Beer has been brewed in Japan since the late 17th century. Green tea is produced in Japan and prepared in forms such as matcha, used in the Japanese tea ceremony. ### Media According to the 2015 NHK survey on television viewing in Japan, 79 percent of Japanese watch television daily. Japanese television dramas are viewed both within Japan and internationally; other popular shows are in the genres of variety shows, comedy, and news programs. Many Japanese media franchises such as Dragon Ball, One Piece, and Naruto have gained considerable global popularity and are among the world's highest-grossing media franchises. Pokémon in particular is estimated to be the highest-grossing media franchise of all time. Japanese newspapers are among the most circulated in the world as of 2016. Japan has one of the oldest and largest film industries globally. Ishirō Honda's Godzilla became an international icon of Japan and spawned an entire subgenre of kaiju films, as well as the longest-running film franchise in history. Japanese comics, known as manga, developed in the mid-20th century and have become popular worldwide. A large number of manga series have become some of the best-selling comics series of all time, rivalling the American comics industry. Japanese animated films and television series, known as anime, were largely influenced by Japanese manga and have become highly popular internationally. ### Sports Traditionally, sumo is considered Japan's national sport. Japanese martial arts such as judo and kendo are taught as part of the compulsory junior high school curriculum. Baseball is the most popular sport in the country. Japan's top professional league, Nippon Professional Baseball (NPB), was established in 1936. Since the establishment of the Japan Professional Football League (J.League) in 1992, association football gained a wide following. The country co-hosted the 2002 FIFA World Cup with South Korea. Japan has one of the most successful football teams in Asia, winning the Asian Cup four times, and the FIFA Women's World Cup in 2011. Golf is also popular in Japan. 
In motorsport, Japanese automotive manufacturers have been successful across multiple categories, with titles and victories in series such as Formula One, MotoGP, and the World Rally Championship. Japanese drivers have taken victories at the Indianapolis 500 and the 24 Hours of Le Mans, as well as podium finishes in Formula One, in addition to success in domestic championships. Super GT is the most popular national racing series in Japan, while Super Formula is the top-level domestic open-wheel series. The country hosts major races such as the Japanese Grand Prix. Japan hosted the Summer Olympics in Tokyo in 1964 and the Winter Olympics in Sapporo in 1972 and Nagano in 1998. The country hosted the official 2006 Basketball World Championship and will co-host the 2023 Basketball World Championship. Tokyo hosted the 2020 Summer Olympics in 2021, making it the first Asian city to host the Olympics twice. The country gained the hosting rights for the official Women's Volleyball World Championship on five occasions, more than any other nation. Japan is the most successful Asian rugby union nation and hosted the 2019 Rugby World Cup. ## See also - Index of Japan-related articles - Outline of Japan
4,941,111
1981 World Snooker Championship
1,159,901,344
Professional snooker tournament held April 1981
[ "1981 in English sport", "1981 in snooker", "April 1981 sports events in the United Kingdom", "Sports competitions in Sheffield", "World Snooker Championships" ]
The 1981 World Snooker Championship (officially the 1981 Embassy World Snooker Championship) was a ranking professional snooker tournament which took place from 7 to 20 April 1981 at the Crucible Theatre in Sheffield, England. The tournament was the 1981 edition of the World Snooker Championship, and was the fifth consecutive world championship to take place at the Crucible Theatre since 1977. It was sanctioned by the World Professional Billiards and Snooker Association. The total prize fund for the tournament was £75,000, of which £20,000 went to the winner. Qualifying rounds for the tournament took place from 23 March to 4 April 1981 at two locations — Redwood Lodge Country Club, near Bristol, and Romiley Forum, near Stockport. The main stage of the tournament featured 24 players: the top 16 players from the snooker world rankings and another eight players from the qualifying rounds. Jimmy White, Tony Knowles and Dave Martin were debutants at the main stage. The defending champion and top seed in the tournament was Cliff Thorburn, who had defeated Alex Higgins 18–16 in the 1980 final. Thorburn lost 10–16 to Steve Davis in the semi-finals. In the other semi-final, Doug Mountjoy defeated second seed Ray Reardon 16–10. Davis went on to achieve the first of his six world titles, taking a 6–0 lead in the final and winning four consecutive frames at the end of the match to win 18–12. There were 13 century breaks made during the tournament, including a new championship record break of 145 by Mountjoy. The cigarette manufacturer Embassy sponsored the tournament, which received daily coverage on BBC television. ## Overview The World Snooker Championship is the official world championship of the game of snooker. The first world championship final took place in 1927 at Camkin's Hall, Birmingham, England. Joe Davis won the inaugural title. Each year since 1977, the event has been held at the Crucible Theatre in Sheffield, England. The 1981 tournament brought together 24 professional snooker players, selected through a mix of the snooker world rankings and a pre-tournament qualification competition. Seedings were based on players' performances in the previous three editions of the world championship. The draw for the event took place on 5 January 1981, in West Bromwich. There were a total of eight qualifying groups, each with one winner meeting a player seeded into the first round, followed by the eight winners of the first-round matches meeting one of eight new players seeded into the second round. Despite not winning any major tournament since the 1978 World Snooker Championship, Ray Reardon was the bookmakers' favourite to win at the time of the draw, at odds of 3–1. Steve Davis was the second-favourite, priced at 5–1, followed by Terry Griffiths and Alex Higgins, both priced at 6–1, and defending champion Cliff Thorburn, who had defeated Higgins 18–16 in the 1980 final, at 10–1. Bookmakers assessed Doug Mountjoy's odds of winning as 20–1. By the time the main event started on 7 April, Davis — who during the season had won his first professional title at the 1980 UK Championship, as well as the 1980 Classic, 1981 Yamaha Organs Trophy and 1981 English Professional Championship — had become the bookmakers' favourite to win, at 7–2. Mike Watterson promoted the championship tournament, with the authority of the World Professional Billiards and Snooker Association (WPBSA). It was broadcast in the United Kingdom on the BBC, with over 80 hours of programming scheduled. 
Cigarette company Embassy sponsored the event. ### Prize money allocation The breakdown of prize money for the 1981 tournament is shown below: - Winner: £20,000 - Runner-up: £10,000 - Semi-final: £5,000 - Quarter-final: £2,500 - Last 16: £1,800 - Last 24: £875 - Highest break: £1,200 - Maximum break: £10,000 - Total: £75,000 ## Tournament rounds ### Qualifying Qualifying matches took place from 23 March to 4 April, and were held at two locations — Redwood Lodge Country Club, near Bristol, and Romiley Forum, near Stockport. All qualifying matches were played as best-of-17 frames, with the first player to win nine frames progressing to the next round. Former champion John Pulman lost 2–9 to Dave Martin, who was accepted by the WPBSA as a professional only a few days before entries closed. Chris Ross — who experienced a nervous breakdown in his first year playing professionally after winning the 1976 English Amateur Championship — found that his cue arm was unsteady and that he was unable to control his cue properly, resulting in his conceding the match to opponent Tony Knowles when 0–7 behind. ### First round The first-round matches took place from 7 to 10 April and were played as best-of-19 frames. Jimmy White, who turned professional after winning the 1980 World Amateur Championship, made his World Snooker Championship debut at the tournament, as did Tony Knowles and Dave Martin. Steve Davis made the first century break of the tournament, 119, in the fifth frame of his match against White, while building a 4–2 lead by the end of their first session. He compiled another century, 102, in their second session, and led 8–4 by the end of that session. In the last session, White closed the gap to one frame, but from 9–8 ahead, Davis won the next frame and prevailed 10–8. Knowles compiled a break of 101 in his match against Graham Miles, but lost the match after being tied at 6–6 and 8–8. In the eighteenth frame, at one frame behind, Knowles played a forceful shot on the final , to get a position on the . He failed to pot the black, which would have left Miles unable to win the frame without Knowles conceding penalty points. Miles won that frame, then took the next to win 10–8. David Taylor, the 1968 World Amateur Champion, won the first three frames against Cliff Wilson, the 1978 World Amateur Champion, but then lost the next four. Taylor finished the first session 5–4 ahead and went on to defeat Wilson 10–6. Tony Meo was 4–2 ahead, then later 4–5 behind and 7–5 ahead of John Virgo, before securing his progression to the next round 10–6. Meo made a break of 134 during the match. From 5–4, Kirk Stevens won the next five frames to defeat John Dunning 10–4. Doug Mountjoy was a frame ahead of Willie Thorne at 5–4, and extended his lead to 9–4 before winning 10–6. Bill Werbeniuk eliminated Martin 10–4. Ray Edmonds, twice World Amateur Champion, had never recorded a victory against John Spencer in a significant match and had lost to him twice in the final of the English Amateur Championship, in 1965 and 1966. Edmonds led 5–4 after the first session of their match but then found himself 5–7 behind as Spencer won three consecutive frames. Edmonds equalised the score at 7–7, before Spencer drew ahead again to lead 9–7. Edmonds, aided by fluking a , won the next two frames to force the match to go to a deciding frame. Jack Karnehm, a snooker commentator and author, later suggested that Spencer was able to win the last frame, in which he made a break of 38, because he had the ability to handle pressure better than Edmonds did. 
### Second round The second-round matches took place from 10 to 14 April and were played as best-of-25 frames. Steve Davis led 6–2 against Alex Higgins after their first session, but in the second session Davis lost five of the eight frames and made only one break over 30. By the end of the session, Davis led by two frames, 9–7. In the third session, Higgins made a break of 47 in the first frame, but Davis responded with a 45 break and won the frame to move into a three-frame lead rather than having only a one-frame advantage, saying afterwards that his 45 was "the most important break [he had] made for months." Higgins won the second frame of the session before Davis won the third with a break of 71. Davis then took the next two frames for a 13–9 victory. Doug Mountjoy took the first four frames, then lost the next four, against Eddie Charlton. Mountjoy went on to lead 9–6, and won 13–7 to reach his first world championship quarter-final since 1977. Graham Miles gained only a single frame in each of the two sessions against defending champion Cliff Thorburn. He lost the first session 1–7 and the match 2–13. Eight-time former world snooker champion Fred Davis, who was also the reigning world billiards champion, also lost his first session 1–7, and was eliminated 3–13 by David Taylor. Terry Griffiths and Tony Meo finished their first session all-square at 4–4, but Griffiths added nine of the next eleven frames to his tally, and won 13–6. Dennis Taylor went from 9–11 against Kirk Stevens to progress to the next round with a 13–11 scoreline; he compiled breaks of 135 and 133 during the match. Stevens had been unable to use the practice table at the venue before the match because it was being used to record a programme for a television broadcast. According to Karnehm, Stevens was "frustrated and bitterly hot-tempered when he came out for the second session... his pots missed by fractions, his safety shots would unluckily stay in the open, his judgement was becoming erratic." Bill Werbeniuk led Perrie Mans 6–2 after their first session, and went on to win 13–5. Former champions Ray Reardon and John Spencer were level at 11–11, with Reardon then winning 13–11. ### Quarter-finals The tournament's quarter-final matches took place from 14 to 16 April and were played as best-of-25 frames. Steve Davis and Terry Griffiths shared the first eight frames, finishing their first session 4–4. After that, Davis pulled ahead to 9–5 before Griffiths compiled a break of 100 in the first frame of the third session, making the scoreline 9–6. The players then won alternate frames until Davis took the match 13–9. Snooker historian Clive Everton later wrote that "strongly as the opposition resisted, Davis never really looked like being broken". David Taylor, who had lost to Cliff Thorburn in the semi-finals in 1980, won two of the first three frames in their quarter-final. Taylor took a lead of 4–3, but Thorburn then had the better of the second session, establishing a 10–6 advantage. He eliminated Taylor 13–6; the highest break of the match was 100, by Thorburn in the 15th frame. Doug Mountjoy was 5–3 ahead of Dennis Taylor, before falling 5–6 behind, and defeated Taylor 13–8. Ray Reardon defeated Bill Werbeniuk 13–8, to reach his first semi-final since 1978, and compiled a 112 break in the 16th frame. ### Semi-finals The semi-final matches took place from 17 to 18 April and were played as best-of-31 frames. 
Doug Mountjoy made a new record world snooker championship break of 145 in the 12th frame against Ray Reardon, pocketing blacks after all reds except the eighth, when he potted the blue. Mountjoy won the match 16–10. Everton's analysis was that whilst in previous matches between the players Reardon had been able to prevail due to his superior tactics, by 1981 Mountjoy's tactical capacity had improved greatly, and his break-building was better than Reardon's. The second semi-final match, which was played between Davis and Thorburn, was described by Karnehm as the best match of the 1981 World Championship. Two weeks before the tournament, Thorburn had lost 0–6 to Davis in a challenge match in Romford, Davis's home area. According to Karnehm, Thorburn "was still seething at this result and the remarks of the gloating Romford fans in their own stronghold." Karnehm also noted that the players barely acknowledged each other's presence in the first session of the semi-final. Davis went 4–3 ahead of Thorburn after the first session, extending his lead to 6–4 after the break, but went 6–8 behind as Thorburn won four frames in succession, scoring 347 points across these frames to Davis's 35. It was level at 9–9, before Davis won 16–10. In the 22nd frame, Davis was ahead with a score of 80–23 with only the pink and black remaining, leaving Thorburn no realistic chance of winning the frame. However, when Davis offered Thorburn a handshake, the acceptance of which would have been an acknowledgement by Thorburn that the frame was lost, Thorburn declined, started to aim for the pink, and "in an elaborate mockery of the Steve Davis habit, went over to his chair, [and] took a minute sip of water." Thorburn later apologised for this behaviour to Davis and, on television, to the public. In his autobiography, Playing for Keeps (1987), Thorburn wrote that in the third session he had been distracted by Davis's supporters in the arena whistling when he was playing, and that he was frustrated that Davis did nothing to stop this. ### Final The final was played across four sessions on 19 and 20 April as a best-of-35 frames match. It was the first world professional snooker championship final for both players, Steve Davis and Doug Mountjoy. Mountjoy led 40–0 in points in the first frame, but Davis made a break of 59 to win the frame. Davis went on to take all of the first six frames, making breaks of 52, 49, 56, and 40. In the eighth frame, Davis was 49–48 ahead with only the last three balls left on the table. The black ball was very close to the , with the blue ball nearby. The two players took a total of 37 turns playing safety shots while the three balls remained, before the frame was abandoned and restarted due to the stalemate. Mountjoy won the restarted frame with a break of 76, the highest of the first session. Mountjoy won the last frame of the first session, leaving Davis 6–3 ahead. In the second session, Davis won the first frame, then Mountjoy the next two, and Davis took the following one, leaving Davis 8–5 ahead at the mid-session interval. Mountjoy compiled a break of 129, his fourth century of the event, in frame 14, and a couple of frames later, Davis fluked the blue to win the 17th. Mountjoy won the last frame of the second session to finish 8–10 behind. On the second day of the final, Davis compiled a break of 83 to win the first frame and took the next frame, making it 12–8. Mountjoy then won two consecutive frames to halve Davis's lead. 
He subsequently won two of the session's last four frames to leave Davis 14–12 ahead of the fourth and final session. Mountjoy led by 46 points in the 24th frame before Davis made a break of 55 to win it. Davis made a break of 84 in the first frame of the fourth session, followed by a break of 119 in the second, and won the next two frames to defeat Mountjoy 18–12. After his win, Davis's manager Barry Hearn ran excitedly into the arena, lifting Davis in celebration. In a post-match interview, Mountjoy said of Davis, "He's the player to beat from now on. The top players are all on a par, but he is a black better." It was the first of a total of six World Snooker Championship wins for Davis as he dominated the sport in the 1980s, the last of them in 1989. In 1982, the number of players in the main tournament increased to 32; the level of public interest in the 1981 tournament was high enough for the BBC to decide to increase its television coverage to 17 days, the full duration of the championship, in 1982. ## Main draw The tournament ladder and results are shown below. The numbers in brackets to the right of players' names indicate the top 16 seeds, whilst match winners are noted by bold type. ### Final The final was played as a best-of-35 frames match at the Crucible Theatre, Sheffield, on 19 and 20 April 1981, refereed by John Williams. Two sessions were held each day. Davis won the match by 18 frames to 12. Both players compiled one century break during the final; Mountjoy compiled a 129, and Davis made a 119. Davis had a further eight breaks of fifty or more, against two by Mountjoy. ## Qualifying matches The results from the qualifying competition are shown below with match winners shown in bold type. Qualifying matches were held at Redwood Lodge Country Club, near Bristol, and at Romiley Forum, Stockport. ## Century breaks There were 13 century breaks during the championship, equalling the record from 1979. Mountjoy set a World Championship record by compiling a 145 break, surpassing the 142 breaks by Rex Williams in 1965 and Bill Werbeniuk in 1979. Mountjoy earned a £5,000 bonus for his achievement, and his record stood until the 1983 tournament, when Thorburn compiled a maximum break. - 145, 129, 110 — Doug Mountjoy - 135, 133 — Dennis Taylor - 134 — Tony Meo - 119, 119, 106 — Steve Davis - 112 — Ray Reardon - 101 — Tony Knowles - 100 — Terry Griffiths - 100 — Cliff Thorburn
300,208
Namco
1,171,878,256
Defunct Japanese video game developer and publisher
[ "1955 establishments in Japan", "Amusement companies of Japan", "Bandai Namco Holdings", "Companies disestablished in 2006", "Defunct video game companies of Japan", "Japanese brands", "Multinational companies headquartered in Japan", "Namco", "Software companies based in Tokyo", "Video game companies established in 1955", "Video game companies of Japan", "Video game development companies", "Video game publishers" ]
Namco was a Japanese multinational video game and entertainment company, headquartered in Ōta, Tokyo. It held several international branches, including Namco America in Santa Clara, California, Namco Europe in London, Namco Taiwan in Kaohsiung, and Shanghai Namco in mainland China. Namco was founded by Masaya Nakamura on June 1, 1955, beginning as an operator of coin-operated amusement rides. After reorganizing as Nakamura Seisakusho Co., Ltd. in 1959, a partnership with Walt Disney Productions provided the company with the resources to expand its operations. In the 1960s, it manufactured electro-mechanical arcade games such as the 1965 hit Periscope. It entered the video game industry after acquiring the struggling Japanese division of Atari in 1974, distributing games such as Breakout in Japan. The company renamed itself Namco in 1977 and published Gee Bee, its first original video game, a year later. Among Namco's first major hits was the fixed shooter Galaxian in 1979. It was followed by Pac-Man in 1980, the best-selling arcade game of all time. Namco prospered during the golden age of arcade video games in the early 1980s, releasing popular titles such as Galaga, Xevious, and Pole Position. Namco entered the home console market in 1984 with conversions of its arcade games for the MSX and the Nintendo Family Computer. Its American division majority-acquired Atari Games in 1985, before selling a portion of it in 1987 following disagreements between the two companies. Arguments over licensing contracts with Nintendo led Namco to produce games for competing platforms, such as the Sega Genesis, TurboGrafx-16, and PlayStation. Namco continued to produce hit games in the 1990s, including Ridge Racer, Tekken, and Taiko no Tatsujin. Namco endured numerous financial difficulties in the late 1990s and 2000s as a result of the struggling Japanese economy and diminishing arcade market. In 2005, Namco merged with Bandai to form Namco Bandai Holdings, a Japanese entertainment conglomerate. It continued producing games until it was merged into Namco Bandai Games in 2006. Namco produced several multi-million-selling game franchises, such as Pac-Man, Galaxian, Tekken, Tales, Ridge Racer, and Ace Combat. It operated video arcades and amusement parks globally, produced films, toys, and arcade cabinets, and operated a chain of restaurants. Namco is remembered for its unique corporate model, its importance to the industry, and its advancements in technology. Its successor, Bandai Namco Entertainment, and its subsidiaries continue to use the Namco brand for their video arcades and other entertainment products. ## History ### Origins and acquisition of Atari Japan (1955–1977) On June 1, 1955, Japanese businessman Masaya Nakamura founded Nakamura Seisakusho in Ikegami, Tokyo. The son of a shotgun repair business owner, Nakamura proved unable to find work in his chosen profession of shipbuilding in the struggling post-World War II economy. Nakamura established his own company after his father's business saw success with producing pop cork guns. Beginning with only ¥300,000 (US\$12,000), Nakamura spent the money on two hand-cranked rocking horses that he installed on the roof garden of a Matsuya department store in Yokohama. The horses were loved by children and turned a decent profit for Nakamura, who began expanding his business to cover other smaller locations. A 1959 business reorganization renamed the company Nakamura Seisakusho Co., Ltd. 
The Mitsukoshi department store chain noticed his success in 1963, and approached him with the idea of constructing a rooftop amusement space for its store in Nihonbashi, Tokyo. It consisted of horse rides, a picture viewing machine, and a goldfish scooping pond, with the centerpiece being a moving train named Roadaway Race. The space was a hit and led to Mitsukoshi requesting rooftop amusement parks for all of its stores. Along with Taito, Rosen Enterprises, and Nihon Goraku Bussan, Nakamura Seisakusho became one of Japan's leading amusement companies. As the business grew in size, it used its clout to purchase amusement machines in bulk from other manufacturers at a discount, and then sell them to smaller outlets at full price. While its machines sold well, Nakamura Seisakusho lacked the manufacturing lines and distribution networks of its competitors, which made producing them slower and more expensive. The company was unable to place its machines inside stores because other manufacturers already had exclusive rights to these locations. In response, Nakamura Seisakusho opened a production plant in February 1966, moving its corporate office to a four-story building in Ōta, Tokyo. The company secured a deal with Walt Disney Productions to produce children's rides in the likenesses of its characters, in addition to those using popular anime characters like Q-Taro; this move allowed the business to further expand its operations and become a driving force in the Japanese coin-op market. Though the manufacturing facility was largely reserved for its Disney and anime rides, Nakamura also used it to construct larger, more elaborate electro-mechanical games. The first of these was Torpedo Launcher (1965), a submarine warfare shooting gallery later titled Periscope. Its other products included Ultraman-themed gun games and pinball-like games branded with Osomatsu-kun characters. The name Namco was introduced in 1971 as a brand for several of its machines. The company grew to ten employees, including Nakamura himself. It saw continued success with its arcade games, which had become commonplace in bowling alleys and grocery stores. The company also established a robotics division to produce robots for entertainment centers and festivals, such as those that distributed pamphlets, ribbon making machines, and a robot named Putan that solved pre-built mazes. In August 1973, American game company Atari began establishing a series of divisions in Asia, one of which was named Atari Japan. Its president, Kenichi Takumi, approached Nakamura in early 1974 to have his business become the distributor of Atari games across Japan. Nakamura, already planning global expansion following his company's success, agreed to the deal. In part due to employee theft, Atari Japan was a financial disaster and nearly collapsed in its first few years of operation. When Takumi stopped showing up to work, the company was handed to Hideyuki Nakajima, a former employee of the Japan Art Paper Company. Atari co-founder Nolan Bushnell, whose company was already struggling in America, chose to sell the Japanese division. His fixer, Ron Gordon, was given the task of finding a buyer for Atari Japan. After being turned down by Sega and Taito, Gordon's offer was accepted by Nakamura for ¥296 million (\$1.18M), though Nakamura informed Bushnell his company was unable to pay the money by the deadline. 
With no other takers for Atari Japan, Bushnell ultimately allowed Nakamura to only pay \$550,000 and then \$250,000 a year for three years. The acquisition allowed Nakamura Seisakusho to distribute Atari games across Japan, and would make it one of the country's largest arcade game companies. The Atari Japan purchase was not an immediate success, in part due to the medal game fad of the 1970s. While Nakamura Seisakusho saw some success with imports such as Kee Games's Tank, the Japanese video game industry's decrease in popularity did not make them as profitable as hoped. The market became more viable once restrictions on medal games were imposed by the Japanese government in 1976, as Nakamura Seisakusho began returning higher profits; its import of Atari's Breakout was so successful that it led to rampant piracy in the industry. By the end of the year, Nakamura Seisakusho was one of Japan's leading video game companies. ### Galaxian, Pac-Man, and arcade success (1977–1984) Nakamura Seisakusho changed its corporate name to Namco in June 1977. It opened a division in Hong Kong named Namco Enterprises Asia, which maintained video arcades and amusement centers. As Namco's presence in Japan was steadily rising, Nakajima suggested to Nakamura that he open a division in the United States to increase worldwide brand awareness. Nakamura agreed to the proposal, and on September 1, 1978, established Namco America in Sunnyvale, California. With Nakajima as its president and Satashi Bhutani as vice president, Namco America's aim was to import games and license them to companies such as Atari and Bally Manufacturing. Namco America would release a few non-video arcade games itself, such as Shoot Away (1977). As the video game industry prospered in Japan during the 1970s with the release of Taito's Space Invaders, Namco turned its attention towards making its own video games. While its licensed Atari games were still profitable, sales were decreasing and the quality of the hardware used began deteriorating. Per the recommendation of company engineer Shigekazu Ishimura, the company retrofitted its Ōta manufacturing facility into a small game division and purchased old stock computers from NEC for employees to study. Namco released Gee Bee, its first original game, in October 1978. Designed by new hire Toru Iwatani, it is a video pinball game that incorporates elements from Breakout and similar "block breaker" clones. Though Gee Bee fell short of the company's sales expectations and was unable to compete with games such as Space Invaders, it allowed Namco to gain a stronger foothold in the video game market. In 1979, Namco published its first major hit Galaxian, one of the first video games to incorporate RGB color graphics, score bonuses, and a tilemap hardware model. Galaxian is considered historically important for these innovations, and for its mechanics building off those in Space Invaders. It was released in North America by Midway Manufacturing, the video game division of Bally, where it became one of its best-selling titles and formed a relationship between Midway and Namco. The space shooter genre became ubiquitous by the end of the decade, with games such as Galaxian and Space Invaders becoming commonplace in Japanese amusement centers. As video games often depicted the killing of enemies and shooting of targets, the industry possessed a predominately male playerbase. 
Toru Iwatani began work on a maze video game that was targeted primarily towards women, with simplistic gameplay and recognizable characters. Alongside a small team, he created a game named Puck Man, where players controlled a character that had to eat dots in an enclosed maze while avoiding four ghosts that pursued them. Iwatani based the gameplay around eating and designed its characters with soft colors and simplistic facial features. Puck Man was test-marketed in Japan on May 22, 1980 and given a wide-scale release in July. It was only a modest success; players were more accustomed to the shooting gameplay of Galaxian than to Puck Man's visually distinctive characters and gameplay style. In North America, it was released as Pac-Man in November 1980. Pac-Man's simplicity and abstract characters made it a fixture in popular culture, spawning a multi-million-selling media franchise. Namco released a string of successful games throughout the early 1980s. It published Galaga, the follow-up to Galaxian, in 1981 to critical acclaim, surpassing its predecessor in popularity with its fast-paced action and power-ups. 1982 saw the release of Pole Position, a racing game that was the first to use a real racetrack (the Fuji Speedway) and helped lay down the foundations for the racing genre. It released Dig Dug the same year, a maze chaser that allowed players to create their own mazes. Namco's biggest post-Pac-Man success was the vertical-scrolling shooter Xevious in 1983, designed by new-hire Masanobu Endō. Xevious's early usage of pre-rendered visuals, boss fights, and a cohesive world made it an astounding success in Japan, recording sales figures that had not been seen since Space Invaders. The game's success led to merchandise, tournament play, and the first video game soundtrack album. The same year, Namco released Mappy, an early side-scrolling platformer, and the Pole Position sequel Pole Position II. Endō went on to design The Tower of Druaga a year later, a maze game that helped establish the concept for the action role-playing game. Druaga's design influenced games such as Nintendo's The Legend of Zelda. 1984 also saw the release of Pac-Land, a Pac-Man-themed platform game that paved the way for similar games such as Super Mario Bros., and Gaplus, a moderately successful update to Galaga. The success of Namco's arcade games prompted it to launch its own print publication, Namco Community Magazine NG, to allow its fans to connect with developers. ### Success with home consoles (1984–1989) In July 1983, Nintendo released the Family Computer, a video game console that utilized interchangeable cartridges to play games. The console's launch came with ports of some of Nintendo's popular arcade games, like Donkey Kong, which at the time were considered high quality. Though Namco recognized the system's potential to allow consumers to play accurate versions of its games, the company chose to hold off on the idea after its ports for platforms such as the Sord M5 flopped. Nakamura suggested that his son-in-law, Shigeichi Ishimura, work with a team to reverse-engineer and study the Famicom's hardware in the meantime. His team created a conversion of Galaxian with their newfound knowledge of the console's capabilities, which exceeded the quality of previous home releases. The port was presented to Nintendo president Hiroshi Yamauchi alongside notification that Namco intended to release it with or without Nintendo's approval. 
Namco's demonstration was the impetus for Nintendo's decision to create a licensing program for the console. Namco signed a five-year royalties contract that included several preferential terms, such as the ability to produce its own cartridges. A subsidiary named Namcot was established in 1984 to act as Namco's console game division. It released its first four titles in September: Galaxian, Pac-Man, Xevious, and Mappy. Xevious sold over 1.5 million copies and became the Famicom's first "killer app". Namcot also began releasing games for the MSX, a popular Japanese computer. Namco's arcade game ports were considered high-quality and helped increase sales of the console. Namcot was financially successful and became an important pillar within the company; when Namco moved its headquarters to a new building in Ōta, Tokyo, in 1985, it used the profits generated from the Famicom conversion of Xevious to fund its construction (the building was nicknamed "Xevious" as a result). The Talking Aid, a communication device for people with speech impairments, was part of the company's attempts to venture into other markets. By the time the video game crash of 1983 concluded in 1985 with the release of the Nintendo Entertainment System (NES), Atari had effectively collapsed. After enduring numerous financial difficulties and losing its hold on the industry, parent Warner Communications sold the company's personal computer and home console divisions to Commodore International founder Jack Tramiel, who renamed his company, Tramel Technology, to Atari Corporation. Warner was left with Atari's arcade game and computer software divisions, which it renamed Atari Games. Namco America purchased a 60% stake in Atari Games on February 4, 1985 through its AT Games subsidiary, with Warner holding the remaining 40%. The acquisition gave Namco the exclusive rights to distribute Atari games in Japan. Nakamura began losing interest in and patience with Atari Games not long after the acquisition. As he started viewing Atari as a competitor to Namco, he was hesitant to pour additional funds and resources into the company. Nakamura also disliked having to share ownership with Warner Communications. Nakajima grew frustrated with Nakamura's attempts at marketing Atari video games in Japan, and had constant disagreements with him over which direction to take the company. Viewing the majority-acquisition as a failure, in 1987 Namco America sold 33% of its ownership stake to a group of Atari Games employees led by Nakajima. This prompted Nakajima to resign from Namco America and become president of Atari Games. He established Tengen, a publisher that challenged Nintendo's licensing restrictions for the NES by selling several unlicensed games, which included ports of Namco arcade games. Though its selloff made Atari Games an independent entity, Namco still held a minority stake in the company and Nakamura retained his position as its board chairman until the middle of 1988. In Japan, Namco continued to see rapid growth. It published Pro Baseball: Family Stadium for the Famicom, which was critically acclaimed and sold over 2.5 million copies. Its sequel, Pro Baseball: Family Stadium '87, sold an additional two million. In 1986, Namco entered the restaurant industry by acquiring the Italian Tomato café chain. It also released Sweet Land, a popular candy-themed prize machine. One of Namco's biggest hits of the era was the 1987 racing game Final Lap. 
It is credited as the first arcade game to allow multiple machines to be connected—or "linked"—together to support additional players. Final Lap was one of the most-profitable coin-operated games of the era in Japan, remaining towards the top of sales charts for the rest of the decade. Namco's continued success in arcades provided its arcade division with the revenue and resources needed to fund its research and development (R&D) departments. Among their first creations was the helicopter shooter Metal Hawk in 1988, fitted in a motion simulator arcade cabinet. Its high development costs prevented it from being mass-produced. While most of these efforts were commercially unsuccessful, Namco grew interested in motion-based arcade games and began designing them on a larger scale. In 1988, Namco became involved in film production when it distributed the film Mirai Ninja in theaters, with a tie-in video game coinciding with its release. Namco also developed the beat 'em up Splatterhouse, which attracted attention for its fixation on gore and dismemberment, and Gator Panic, a derivative of Whack-a-Mole that became a mainstay in Japanese arcades and entertainment centers. In early 1989, Namco unveiled its System 21 arcade system, one of the earliest arcade boards to utilize true 3D polygonal graphics. The company demonstrated the power of the board, nicknamed "Polygonizer", with the Formula One racer Winning Run. With an arcade cabinet that shook and swayed the player as they drove, the game was seen as "a breakthrough product in terms of programming technique" and garnered significant attention from the press. Winning Run was commercially successful, convincing Namco to continue researching 3D video game hardware. Video arcades under the Namco banner, such as the family-friendly Play City Carrot chain, continued to open in Japan and overseas. 
While the console was never released, it allowed Namco to familiarize itself with designing home video game hardware. Tadashi Manabe replaced Nakamura as president of Namco on May 2, 1990. Manabe, who had been the company's representative director since 1981, was tasked with strengthening relationships and teamwork ethics of management. Two months later, the company dissolved its remaining connections with Atari Games when Time Warner reacquired Namco America's remaining 40% stake in Atari Games. In return, Namco America was given Atari's video arcade management division, Atari Operations, allowing the company to operate video arcades across the United States. Namco began distributing games in North America directly from its US office, rather than through Atari. Namco Hometek was established as the home console game division of Namco America; the latter's relations with Atari Games and Tengen made the company ineligible to become a Nintendo third-party licensee, instead relying on publishers such as Bandai to release its games in North America. In Japan, Namco developed two theme park attractions, which were demonstrated at the 1990 International Garden and Greenery Exposition (Expo '90): Galaxian3: Project Dragoon, a 3D rail shooter that supported 28 players, and a dark ride based on The Tower of Druaga. As part of the company's idea of "hyperentertainment" video games, Namco engineers had drafted ideas for a possible theme park based on Namco's experience with designing and operating indoor play areas and entertainment complexes. Both attractions were commercially successful and among the most popular of Expo 90's exhibitions. In arcades, Namco released Starblade, a 3D rail shooter noteworthy for its cinematic presentation. This led to Namco dominating the Japanese dedicated arcade cabinet charts by October 1991, holding the top six positions that month with Starblade at the top. In February 1992, Namco opened its own theme park, Wonder Eggs, in the Futakotamagawa Time Spark area in Setagaya, Tokyo. Described as an "urban amusement center", Wonder Eggs was the first amusement park operated by a video game company. In addition to Galaxian3 and The Tower of Druaga, the park featured carnival games, carousels, motion simulators, and Fighter Camp, the first flight simulator available to the public. The park saw regularly high attendance numbers; 500,000 visitors attended in its first few months of operation and over one million by the end of the year. Namco created the park out of its interest in designing a Disneyland-inspired theme park that featured the same kind of stories and characters present in its games. Wonder Eggs contributed to Namco's 34% increase in revenue by December 1992. Namco also designed smaller, indoor theme parks for its larger entertainment complexes across the country, such as Plabo Sennichimae Tempo in Osaka. Manabe resigned as president on May 1, 1992 due to a serious anxiety disorder, and Nakamura once again assumed the role. Manabe instead served as the company's vice chairman until his death in 1994. The company's arcade division, in the meantime, began work on a new 3D arcade board named System 22, capable of displaying polygonal 3D models with fully-textured graphics. Namco enlisted the help of Evans & Sutherland, a designer of combat flight simulators for The Pentagon, to assist in the board's development. The System 22 powered Ridge Racer, a racing game, in 1993. 
Ridge Racer's usage of 3D textured polygons and drifting made it a popular title in arcades and one of Namco's most-successful releases, and it is regarded as a milestone in 3D computer graphics. The company followed its success with Tekken, a 3D fighting game, a year later. Designed by Seiichi Ishii, the co-creator of Sega's landmark fighting game Virtua Fighter, Tekken offered a wide array of playable characters and a consistent framerate that helped it outperform Sega's game in popularity, and it launched a multi-million-selling franchise as a result. The company continued expanding its operations overseas, acquiring Bally's Aladdin's Castle, Inc., the owner of the Aladdin's Castle chain of mall arcades. In December, Namco acquired Nikkatsu, Japan's oldest surviving film studio, which at the time was undergoing bankruptcy proceedings. The purchase allowed Nikkatsu to utilize Namco's computer graphics hardware for its films, while Namco was able to gain a foothold in the Japanese film industry. ### Relationship with Sony (1994–1998) In early 1994, Sony announced that it was developing its own video game console, the 32-bit PlayStation. The console began as a collaboration between Nintendo and Sony to create a CD-based peripheral for the Super Nintendo Entertainment System in 1988. Fearing that Sony would assume control of the entire project, Nintendo quietly scrapped the add-on. Sony chose to refocus its efforts on designing the PlayStation in-house as its own console. As it lacked the resources to produce its own games, Sony called for the support of third-party companies to develop PlayStation software. Namco, frustrated with Nintendo and Sega's licensing conditions for their consoles, agreed to support the PlayStation and became its first third-party developer. The company began work on a conversion of Ridge Racer, its most-popular arcade game at the time. The PlayStation was released in Japan on December 3, 1994, with Ridge Racer as one of its first titles. Sony moved 100,000 units on launch day alone; publications attributed the PlayStation's early success to Ridge Racer, which gave the console an edge over its competitor, the Sega Saturn. For a time, it was the best-selling PlayStation game in Japan. Namcot was consolidated into Namco in 1995; its final game was a PlayStation port of Tekken, published in March in Japan and in November worldwide. Tekken was designed for Namco's System 11 arcade system board, which was based on raw PlayStation hardware; this allowed the home version to be a near-perfect rendition of its arcade counterpart. Tekken became the first PlayStation game to sell one million copies and played a vital role in the console's mainstream success. Sony recognized Namco's commitment to the console, leading to Namco receiving special treatment from Sony and early promotional material adopting the tagline "PlayStation: Powered by Namco". Namco was also given the rights to produce controllers, such as the NeGcon, which it designed with the knowledge it gained through developing its cancelled console. Though it had signed contracts to produce games for systems such as the Sega Saturn and 3DO Interactive Multiplayer, Namco concentrated its consumer software efforts on the PlayStation for the remainder of the decade. As a means to draw players into its video arcades, Namco's arcade game division began releasing titles that featured unique and novel control styles and gameplay. 
In 1995, the company released Alpine Racer, an alpine skiing game that was awarded "Best New Equipment" during the year's Amusement and Music Operators Association (AMOA) exposition. Time Crisis, a lightgun shooter noteworthy for its pedal-operated ducking mechanic, helped set the standard for the genre as a whole, while Prop Cycle attracted attention for its use of a bicycle controller that the player pedaled. The photo booth machine Star Audition, which offered players the chance of becoming a star in show business, became a media sensation in Japan. Namco Operations, which was renamed Namco Cybertainment in 1996, acquired the Edison Brothers Stores arcade chain in April. Namco also introduced the Postpaid System, a centralized card payment system, as a means to combat the piracy of IC cards in Japanese arcades. In September 1997, Namco announced it would begin development of games for the Nintendo 64, a console struggling to receive support from third-party developers. Namco signed a contract with Nintendo that allowed the company to produce two games for the console: Famista 64, a version of its Family Stadium series, and an untitled RPG for the 64DD peripheral. The RPG was never released, while the 64DD went on to become a commercial failure. In October 1998, Namco announced a partnership with long-time rival Sega to bring some of its titles to the newly unveiled Dreamcast, a deal that one publication described as "the most stunning alliance this industry has seen in a long while". As Namco primarily developed games for Sony hardware, and was among the biggest third-party developers for the PlayStation, the announcement surprised news outlets. For its PlayStation-based System 12 arcade board, Namco released the weapon-based fighting game Soulcalibur in 1998. Its 1999 Dreamcast port, which features multiple graphical enhancements and new game modes, is an early instance of a console game being considered superior to its arcade version. Soulcalibur sold over one million units, won multiple awards, and contributed to the early success of the Dreamcast. ### Financial decline and restructuring (1998–2005) Namco began experiencing a decline in its consumer software sales by 1998 as a result of the Japanese recession, which affected the demand for video games as consumers had less time to play them. The company's arcade division had similar struggles, slumping by 21% in the fiscal year ending March 1998. Namco Cybertainment filed for Chapter 11 bankruptcy protection in August and was forced to close several hundred of its under-performing arcades in North America as its parent underwent reorganization. In its 1998 annual report, Namco reported a 26.3% drop in net sales, which it partly blamed on low consumer spending. A further 55% drop was reported in November 1999 when its home console game output decreased. As a means to diversify beyond its arcade and consumer game markets, Namco entered the mobile phone game market with the Namco Station, a marketplace for i-Mode cellular devices that featured ports of its arcade games like Pac-Man and Galaxian. The company also majority-acquired Monolith Soft, an action role-playing game developer best known for creating the Xenosaga series. It continued introducing novel concepts for arcades to help attract players, such as the Cyber Lead II, an arcade cabinet that featured PlayStation and Dreamcast VMU memory card slots. Namco's financial losses worsened in the 2000s. 
In October 2000, the Japanese newspaper Nihon Keizai Shimbun reported that the company projected a loss of ¥2.1 billion (\$19.3M) for the fiscal year ending March 2001. Namco had previously hinted at this during an event with industry analysts, blaming its struggles on the depressed Japanese economy and dwindling arcade game market. The company closed its Wonder Eggs park on December 31, 2000, by which point it had received six million visitors, in addition to shuttering many of its video arcades that returned substandard profits. In February 2001, Namco updated its projections and reported that it now expected a ¥6.5 billion (\$56.3M) net loss and a 95% drop in revenue for the fiscal year ending March 2001, which severely impacted the company's release schedule and corporate structure. The company's earnings forecasts were lowered to accommodate its losses, its development strategy was reorganized to focus largely on established franchises, and 250 of its employees were laid off in what it described as "early retirement". Namco underwent restructuring to increase its income, which included a management reshuffle and the announcement that it would produce games for Nintendo's GameCube and Microsoft's Xbox. Following its financial struggles, Namco's arcade division underwent mass reorganization. This division achieved strong success with Taiko no Tatsujin, a popular drum-based rhythm game where players hit a taiko drum controller to the beat of a song. Taiko no Tatsujin became a best-seller and created one of the company's most popular and prolific franchises. Namco's North American divisions, in the meantime, underwent reorganization and restructuring as a result of decreasing profits. Namco Hometek was stripped of its research and development divisions following Namco's disappointment with the quality of its releases. Its continuing expansion into non-video game businesses, including rehabilitation electronics and travel agency websites, prompted the creation of the Namco Incubation Center, which would control these operations. The Incubation Center also hosted the Namco Digital Hollywood Game Laboratory game school, which designed the sleeper hit Katamari Damacy (2004). Nakamura resigned as company president later in the year, and was replaced by Kyushiro Takagi. Anxious about the company's continuing financial struggles, Nakamura suggested that Namco begin looking into the possibility of merging with another company. Namco first looked to Final Fantasy developer Square and Dragon Quest publisher Enix, offering to combine the three companies into one. Yoichi Wada, the president of Square, disliked Namco's financial showing and declined the offer. Square instead agreed to a business alliance with Namco. Namco then approached Sega, a company struggling to stay afloat after the commercial failure of the Dreamcast. Sega's development teams and extensive catalog of properties caught Namco's interest, and the company believed a merger could allow the two to increase their competitiveness. Sega was already discussing a merger with pachinko manufacturer Sammy Corporation; executives at Sammy were infuriated at Sega's consideration of Namco's offer. A failed attempt to overturn the merger led Namco to withdraw its offer the same day Sega announced it had turned down Sammy's. While Namco stated it was willing to negotiate with Sega on a future deal, Sega turned down the idea. 
Shigeichi Ishimura, the son-in-law of Nakamura, succeeded Takagi as Namco president on April 1, 2005; Nakamura retained his role as the company's executive chairman. This was part of Namco's continuing efforts at reorganizing itself to be in line with changing markets. On July 26, as part of its 50th anniversary event, Namco published NamCollection, a compilation of several of its PlayStation games, for the PlayStation 2 in Japan. Namco also opened the Riraku no Mori, a companion to its Namja Town park that held massage parlors for visitors; Namco believed it would help make relaxation a source of entertainment. The Idolmaster, a rhythm game that incorporated elements of life simulations, was widely successful in Japan and resulted in the creation of a multi-million-grossing franchise. ### Bandai takeover and dissolution (2005–2006) In early 2005, Namco began merger talks with Bandai, a toy and anime company. The two had discussed a possible business alliance a year earlier, after Namco collaborated with Bandai subsidiary Banpresto to create an arcade game based on Mobile Suit Gundam. Bandai showed interest in Namco's game development skills and believed that combining them with its wide library of profitable characters and franchises, such as Sailor Moon and Tamagotchi, could increase their competitiveness in the industry. Nakamura and Namco's content development division advisors pushed against the idea, as they felt Bandai's corporate model would not blend well with Namco's more agricultural work environment. Namco's advisors were also critical of Bandai for focusing on promotion and marketing over quality. As Namco's financial state continued to deteriorate, Ishimura pressured Nakamura into supporting the merger. Bandai's offer was accepted on May 2, with both companies stating in a joint statement that their financial difficulties were the reason for the merger. The business takeover, in which Bandai acquired Namco for ¥175.3 billion (\$1.7B), was finalized on September 29. An entertainment conglomerate named Namco Bandai Holdings was established the same day; while their executive departments merged, Bandai and Namco became independently operating subsidiaries of the new umbrella holding company. Kyushiro Takagi, Namco's vice chairman, was appointed chairman and director of Namco Bandai Holdings. The combined revenues of the new company were estimated to be ¥458 billion (\$4.34B), making Namco Bandai the third-largest Japanese game company after Nintendo and Sega Sammy Holdings. As its parent company was preparing for a full business integration, Namco continued its normal operations, such as releasing Ridge Racer 6 as a launch title for the newly unveiled Xbox 360 in October and collaborating with Nintendo to produce the arcade game Mario Kart Arcade GP. The company honored the 25th anniversary of its Pac-Man series with Pac-Pix, a puzzle game for the Nintendo DS, and entered the massively multiplayer online game market with Tales of Eternia Online, an action role-playing game based on its Tales franchise. On January 4, 2006, Namco Hometek was merged with Bandai Games, Bandai America's consumer game division, to create Namco Bandai Games America, absorbing Namco America's subsidiaries and completing Namco and Bandai's merger in North America. Namco's console game, business program, mobile phone, and research facility divisions were merged with Bandai's console division to create a new company, Namco Bandai Games, on March 31, as Namco was effectively dissolved. 
The Namco name was repurposed for a new Namco Bandai subsidiary the same day, which absorbed its predecessor's amusement facility and theme park operations. Namco's European division was folded into Namco Bandai Networks Europe on January 1, 2007, as it was reorganized into the company's mobile game and website division. Until April 2014, Namco Bandai Games used the Namco logo on its games to represent the brand's legacy. The Namco Cybertainment division was renamed Namco Entertainment in January 2012, and then Namco USA in 2015. A division of Bandai Namco Holdings USA, Namco USA worked with chains such as AMC Theatres to host its video arcades in their respective locations. The second Namco company was renamed Bandai Namco Amusement on April 1, 2018, following a corporate restructuring by its parent. Amusement took over the arcade game development branch of Namco Bandai Games, which had renamed itself Bandai Namco Entertainment in 2015. Namco USA was absorbed into Bandai Namco Amusement's North American branch in 2021 following its parent company's decision to exit the arcade management industry in the United States. This leaves Namco Enterprises Asia and Namco Funscape (Bandai Namco's arcade division in Europe) as the last companies to use the original Namco trademark in their names. Bandai Namco Holdings and its subsidiaries continue to use the Namco name for a variety of products, including mobile phone applications, streaming programs, and eSports-focused arcade centers in Japan. ## Legacy Namco was one of the world's largest producers of video arcade games, having published over 300 titles since 1978. Many of its games are considered among the greatest of all time, including Pac-Man, Galaga, Xevious, Ridge Racer, Tekken 3, and Katamari Damacy. Pac-Man is considered one of the most important video games ever made, having helped encourage originality and creative thinking within the industry. Namco was recognized for the game's worldwide success in 2005 by Guinness World Records; by that time, Pac-Man had sold over 300,000 arcade units and grossed over \$1 billion in quarters globally. In an obituary for Masaya Nakamura in 2017, Nintendo Life's Damien McFerran wrote: "without Namco and Pac-Man, the video game arena would be very different today." Namco's corporate philosophy and innovation have received recognition from publications. In a 1994 retrospective on the company, a writer for Edge described Namco as being "among the true pioneers of the coin-op business", a developer with a catalog of well-received and historically significant titles. The writer believed that Namco's success lay in its forward thinking and insistence on quality, which they argued made it stand out from other developers. A staff member of Edge's sister publication, Next Generation, wrote in 1998: "In a world where today's stars almost always become tomorrow's has-beens, Namco has produced consistently excellent games throughout most of its history." The writer cited the company's connections with its players and its influential releases, namely Pac-Man, Xevious, and Winning Run, as the keys to its success in a rapidly changing industry. Publications and industry journalists have highlighted Namco's importance to the industry. Hirokazu Hamamura, chief editor of Famitsu, credited the company's quality releases with the rise in popularity of video game consoles, and, in turn, the entirety of Japan's video game industry. 
Writers for Ultimate Future Games and Official UK PlayStation Magazine have credited the company and its games with the early success of the PlayStation, one of the most iconic entertainment brands worldwide. In addition, Official UK PlayStation Magazine wrote that Namco serves as "the godfather of game developers" and is one of the most important video game developers in history. Staff for IGN in 1997 claimed that Namco represents the industry as a whole, with titles like Pac-Man and Galaga being associated with and representing video games. They wrote: "Tracing the history of Namco is like tracing the history of the industry itself. From its humble beginnings on the roof of a Yokohama department store, to the impending release of Tekken 3 for the PlayStation, Namco has always stayed ahead of the pack." In 2012, IGN listed Namco among the greatest video game companies of all time, writing that many of its games, including Galaga, Pac-Man, Dig Dug, and Ridge Racer, were of consistent quality and helped define the industry as a whole. ## See also - List of Namco games - Namcot Collection
49,363,541
York City War Memorial
1,159,234,501
Grade II* listed memorial in York, England
[ "1925 establishments in England", "1925 sculptures", "Buildings and structures completed in 1925", "Grade II* listed buildings in York", "Grade II* listed monuments and memorials", "Monuments and memorials in North Yorkshire", "Outdoor sculptures in England", "Stone sculptures in the United Kingdom", "War memorials by Edwin Lutyens", "Works of Edwin Lutyens in England", "World War I memorials in England", "World War II memorials in England" ]
The York City War Memorial is a First World War memorial designed by Sir Edwin Lutyens and located in York in the north of England. Proposals for commemorating York's war dead originated in 1919 but proved controversial. Initial discussions focused on whether a memorial should be a monument or should take on some utilitarian purpose. Several functional proposals were examined until a public meeting in January 1920 opted for a monument. The city engineer produced a cost estimate and the war memorial committee engaged Lutyens, who had recently been commissioned by the North Eastern Railway (NER) to design their own war memorial, also to be sited in York. Lutyens' first design was approved, but controversy enveloped proposals for both the city's and the NER's memorials. Members of the local community became concerned that the memorials as planned were not in keeping with York's existing architecture, especially as both were in close proximity to the ancient city walls, and that the NER's memorial would overshadow the city's. Continued public opposition forced the committee to abandon the proposed site in favour of one on Leeman Road, just outside the walls, and Lutyens submitted a new design of a War Cross and Stone of Remembrance to fit the location. This was scaled back to the cross alone due to lack of funds. Prince Albert, Duke of York (later King George VI), unveiled the memorial on 25 June 1925, six years after the memorial fund was opened. It consists of a stone cross 33 feet (10 metres) high on three stone blocks and a stone base, beneath which are two further blocks and two shallow steps. It sits in a memorial garden, with an entrance designed by Lutyens using the remaining funds for the memorial. The memorial itself is a grade II\* listed building, having been upgraded when Lutyens' war memorials were designated a national collection in 2015. The piers and gate at the entrance to the garden are listed separately at grade II. ## Background In the aftermath of the First World War, which saw over one million British deaths, thousands of war memorials were built across Britain. Amongst the most prominent designers of memorials was architect Sir Edwin Lutyens, described by Historic England as "the leading English architect of his generation". Lutyens designed The Cenotaph in London, which became the focus for the national Remembrance Sunday commemorations; the Thiepval Memorial to the Missing, the largest British war memorial anywhere in the world; and the Stone of Remembrance, which appears in all large Commonwealth War Graves Commission cemeteries and in several of Lutyens' civic memorials. The York City Memorial was the fifteenth and final War Cross designed by Lutyens, all to a broadly similar design. Most were commissioned for villages—the Devon County War Memorial in Exeter is the only other example of a War Cross serving as a civic memorial in a city. Proposals for a war memorial in York were mired in controversy from the outset. A war memorial committee was established after a council meeting in May 1919 and the committee opened a memorial fund for donations in August, but six years elapsed before the City War Memorial was unveiled. The first point of contention was one that arose in many communities when considering a war memorial. Some felt that the war dead should be commemorated through a building with some community purpose rather than a purely decorative monument. 
Multiple ideas were put forward and the council tasked the war memorial committee with considering several proposals, including a new city hall and a convalescent home. The committee generated several ideas of its own including a new bridge over the River Ouse, homes for war widows, a maternity hospital, and several ideas for an educational institution. A series of public meetings produced still further ideas until a meeting on 14 January 1920, where a consensus was established in favour of a monument rather than any utilitarian proposal. The committee requested that the city engineer produce a design for a memorial garden with an archway and a cenotaph. The city engineer reported back with a design which he estimated would cost around £7,000 and the war memorial committee appointed Lutyens to oversee the project. Lutyens had recently been commissioned to design a memorial for the North Eastern Railway Company (NER) which was based in York and planned to erect its own memorial in the city dedicated to those of its staff who fought and died in the war. ## Inception The committee gave Lutyens a budget of £2,000 (1920). The architect visited York on 12 August 1920. Accompanied by the lord mayor and the city engineer, he reviewed nine potential sites for the memorial. His preference was for a former cholera burial ground just outside the city walls, but the committee opted for his second choice of a site inside the walls in the moat by Lendal Bridge, 100 yards (90 metres) from the proposed location for the NER's memorial. The committee asked Lutyens to submit a formal proposal, which they received eleven weeks later. The design consisted of Lutyens' Stone of Remembrance, complete with its characteristic base of three shallow steps, raised on a large podium taking it 18 feet (5.5 metres) off the ground. It was the only design for the Stone of Remembrance to treat it as an object of veneration—in all of Lutyens' other designs for the stone it functioned as an altar, albeit more symbolic than practical—and one of the most ambitious of all his war memorial projects. The committee endorsed the proposal on 24 June 1920, after which it was published in the local newspapers as part of a public consultation. It was eventually approved at a further public meeting on 25 November 1920. Nonetheless, objections were raised after the approval. The York Archaeological Society (YAS) and the Yorkshire Architectural and York Archaeological Society (YAYAS) felt that the scheme was not in keeping with the existing architecture in the area, particularly York's ancient city walls, and that it would obstruct views for pedestrians coming into the city from the railway station. Other members of the community, including a local councillor, were concerned that the city's memorial in its proposed location would be overshadowed by the railway company's, given that the NER had granted Lutyens a budget of £20,000—ten times that allocated by the city—for which he had proposed a 54-foot (16-metre) obelisk and large screen wall. Given the proximity to the city walls (Lutyens' initial proposal for the NER abutted the walls) both the city's scheme and the NER's required the consent of the Ancient Monuments Board (later English Heritage and then Historic England). Charles Reed Peers, the board's chief inspector of ancient monuments, attended a meeting at the NER's offices on 8 July 1922 to hear representations for and against both schemes. 
He requested Lutyens make modification to the NER's memorial, but approved the city's, noting that the proposed site was not part of the walls' rampart and had been created when Lendal Bridge was built in the mid 19th century. Public opposition to the proposed site mounted, even after the Ancient Monuments Board's approval, and the YAYAS continued to apply pressure, calling another public meeting—which it scheduled for 3 May 1923—forcing the war memorial committee to reconsider. The committee revisited a site on Leeman Road, outside the city walls, which had originally been proposed in 1921. Lutyens sent his assistant Albert J Thomas (an architect in his own right) to examine the site on 8 August 1923 and all parties agreed to it. By coincidence, the site was owned by the NER, which donated it to the city in a mark of gratitude for the good relations between the company and the city, the NER having recently been amalgamated into the London and North Eastern Railway. Lutyens submitted a revised design to account for the new location—a War Cross and a Stone of Remembrance—which would have cost almost £2,500. The scheme was scaled back to just the cross and the council undertook to conduct the work using its own staff in order to keep within the £1,100 that had been raised by public subscription. ## History and design The memorial was unveiled a year after the North Eastern Railway War Memorial, at a ceremony on 25 June 1925, which was attended by large crowds. Prince Albert, the Duke of York (later King George VI), performed the unveiling and the Archbishop of York Cosmo Gordon Lang gave a dedication. The Duchess of York had earlier that day unveiled the Five Sisters window in York Minster, dedicated to "women of the Empire" killed in the First World War. Of Portland stone construction, the memorial is in the form of a 33-foot (10-metre) high, lozenge-shaped shaft with short, chamfered arms, moulded where they meet the shaft to form a cross. The cross stands on a base of four uneven rectangular blocks, below which is an undercut square platform, which itself stands on two square blocks. At the very bottom are two wide, shallow steps. The largest block of the base bears the only inscription on the memorial: "TO THE CITIZENS OF YORK 1914 – 1918, 1939 – 1945" on the south face, and "THEIR NAME LIVETH FOR EVERMORE" on the north; the dates of the Second World War were added later. As a memento, a bottle, several coins, and a newspaper were placed inside the structure. The memorial stands in a war memorial garden on the south bank of the River Ouse; it overlooks the river and the ruins of St Mary's Abbey on the opposite bank. When the accounts were reconciled in April 1926, there remained £400 in the memorial fund after Lutyens' fee of £122 and expenses of £20, so the committee commissioned Lutyens to design a set of entrance gates and a pair of supporting piers at the entrance to the memorial garden. The tall, rectangular piers are of limestone construction with cornices and finials in the shape of balls. The gates themselves are iron, painted black and gold, with iron panels linking them to the piers and an overthrow above, in the centre of which is the City of York's coat of arms. The gates open towards the memorial and are aligned with it. To wind up the memorial fund, the committee spent the remaining £17 on three wooden benches for the memorial garden. 
The York City War Memorial was designated a grade II listed building (a status which offers statutory protection from demolition or modification, applied to structures of "special interest, warranting every effort to preserve them") on 10 September 1970 and the gates and piers were separately listed at grade II on 24 June 1983. The nearby NER memorial, just the other side of the city walls, was listed at grade II\* (defined as "particularly important buildings of more than special interest" and applied to about 5.5% of listed buildings) on 10 September 1970. In November 2015, as part of commemorations for the centenary of the First World War, Lutyens' war memorials were recognised as a national collection and all of his free-standing memorials in England were listed or had their listing status reviewed; their National Heritage List for England list entries were also updated and expanded. As part of this process, the York City memorial was upgraded from grade II to grade II\*. ## See also - Grade II\* listed buildings in the City of York - Grade II\* listed war memorials in England
30,879,349
No. 1 Flying Training School RAAF
1,125,800,438
Royal Australian Air Force training unit
[ "1921 establishments in Australia", "1993 disestablishments in Australia", "Flying training schools of the RAAF", "Military units and formations disestablished in 1993", "Military units and formations established in 1921" ]
No. 1 Flying Training School (No. 1 FTS) is a school of the Royal Australian Air Force (RAAF). It is one of the Air Force's original units, dating back to the service's formation in 1921, when it was established at RAAF Point Cook, Victoria. By the early 1930s, the school comprised training, fighter, and seaplane components. It was re-formed several times in the ensuing years, initially as No. 1 Service Flying Training School (No. 1 SFTS) in 1940, under the wartime Empire Air Training Scheme. After graduating nearly 3,000 pilots, No. 1 SFTS was disbanded in late 1944, when there was no further requirement to train Australian aircrew for service in Europe. The school was re-established in 1946 as No. 1 FTS at RAAF Station Uranquinty, New South Wales, and transferred to Point Cook the following year. Under a restructure of flying training to cope with the demands of the Korean War and Malayan Emergency, No. 1 FTS was re-formed in 1952 as No. 1 Applied Flying Training School (No. 1 AFTS); it moved to RAAF Base Pearce, Western Australia, in 1958. For much of this period the school was also responsible for training the RAAF's air traffic controllers. Its pilot trainees included Army, Navy, and foreign students as well as RAAF personnel. The RAAF's reorganisation of aircrew training in the early 1950s had led to the formation at Uranquinty of No. 1 Basic Flying Training School (No. 1 BFTS), which transferred to Point Cook in 1958. In 1969, No. 1 AFTS was re-formed as No. 2 Flying Training School and No. 1 BFTS was re-formed as No. 1 FTS. Rationalisation of RAAF flying training resulted in the disbandment of No. 1 FTS in 1993. The school re-formed at RAAF Base East Sale in 2019, flying the Pilatus PC-21 and conducting ab initio flight training. ## History ### Early years No. 1 Flying Training School (No. 1 FTS) was the first unit to be formally established as part of the new Australian Air Force on 31 March 1921 (the term "Royal" was added in August that year). No. 1 FTS was formed from the remnants of Australia's original military flying unit, Central Flying School, at RAAF Point Cook, Victoria. Squadron Leader William Anderson, who was also in charge of the Point Cook base, was No. 1 FTS's first commanding officer. The school's initial complement of staff was twelve officers and 67 airmen. In December 1921, the Australian Air Board prepared to form its first five squadrons and allocate aircraft to each, as well as to the nascent flying school. The plan was for No. 1 FTS to receive twelve Avro 504Ks and four Sopwith Pups, and the squadrons a total of eight Royal Aircraft Factory S.E.5s, eight Airco DH.9s, and three Fairey IIIs. Funding problems forced the Air Force to disband the newly raised squadrons on 1 July 1922 and re-form them as flights in a composite squadron under No. 1 FTS. The same month, Flight Lieutenant Frank McNamara, VC, took command of the school. The inaugural flying course commenced in January 1923. Basic instruction took place on the Avro 504Ks, and more advanced or specialised training on the school's other aircraft. Fourteen students commenced the year-long course, and twelve graduated. As well as flying, they studied aeronautics, communications, navigation, armament and general military subjects. Squadron Leader Anderson resumed command of No. 1 FTS in 1925; the following year he handed over to Wing Commander Adrian Cole, who led the unit until 1929. 
The first Citizen Air Force (active reserve) pilots' course ran from December 1925 to March 1926, 26 of 30 students completing the training. Although 24 accidents occurred, there were no fatalities, leading Cole to remark at the graduation ceremony that the students were either made of India rubber or had learned how to crash "moderately safely". The 1926 Permanent Air Force (PAF) cadet course was marred by three fatal accidents. The following year, 29 students graduated—thirteen PAF, nine reserve, and seven destined for exchange with the Royal Air Force (RAF). In June 1928, the school's Avro 504Ks were replaced by de Havilland DH.60 Cirrus Moths; these were augmented by Gipsy Moths commencing in 1930. Squadron Leader McNamara resumed command of No. 1 FTS in October 1930. By then, two sub-units had been raised at Point Cook under the school's auspices: "Fighter Squadron", operating Bristol Bulldogs; and "Seaplane Squadron", operating Supermarine Southamptons, among other types. As of February 1934, No. 1 FTS was organised into Training Squadron, operating Moths and Westland Wapitis, Fighter Squadron and Seaplane Squadron. Fighter and Seaplane Squadrons were formally established as units that month, but remained under the control of the flying school and were "really little more than flights", according to the official history of the pre-war RAAF. As well as participating in training exercises, Fighter Squadron was often employed for aerobatic displays and flag-waving duties. One of No. 1 FTS's leading instructors during the early 1930s, Flight Lieutenant Frederick Scherger, was also a flight commander in Fighter Squadron. Seaplane Squadron undertook naval co-operation and survey tasks, as well as seaplane training. Fighter Squadron was dissolved in December 1935 when its Bulldogs were transferred to No. 1 Squadron at RAAF Laverton; Seaplane Squadron continued to function until June 1939, when it was separated to form the nucleus of No. 10 Squadron. In 1932, No. 1 FTS started running two courses each year, the first commencing in January and the second in July; it also ceased graduating non-commissioned officers as pilots, and thus took on a character resembling the other armed services' cadet colleges, the Royal Australian Naval College and the Royal Military College, Duntroon. The roughly 1,200 applications for each flying course competed for around twelve places. Wing Commander Hippolyte De La Rue became commanding officer in early 1933. The following year, No. 1 FTS commenced regular courses in signals, photography, air observation, and aircraft maintenance. In April 1936, the school took delivery of its first Avro Cadets, procured as an intermediate trainer to bridge the gap between the Gipsy Moth employed for elementary flying instruction and the Wapiti used for advanced training. De La Rue was succeeded by Wing Commander Frank Lukis in January 1938. By this time the school was training up to 96 new pilots per year, a small percentage of whom were slated for secondment to the RAF on short-service commissions. Link Trainer simulators were introduced in March 1939. ### World War II RAAF flying training was heavily reorganised soon after the outbreak of World War II, in response to Australia's participation in the Empire Air Training Scheme (EATS). Several elementary flying training schools were formed, to provide basic flight instruction to cadets; more advanced pilot instruction was to take place at service flying training schools. On 1 May 1940, No. 
1 FTS was re-formed at Point Cook as No. 1 Service Flying Training School (No. 1 SFTS). Its inaugural commanding officer was Group Captain John Summers, who led Fighter Squadron in the early 1930s and had taken over No. 1 FTS in December 1939. The school's Instructors' Training Squadron was detached to become the nucleus of a re-formed Central Flying School, which relocated to Camden, New South Wales, in June. Courses at the service flying training schools consisted of two streams, intermediate and advanced; the total duration varied during the war as demand for aircrew fluctuated. Initially running for sixteen weeks, the course was cut to ten weeks (which included 75 hours flying time) in October 1940. A year later it was raised to twelve weeks (including 100 hours flying time), and again to sixteen weeks two months later. It continued to increase after this, peaking at 28 weeks in June 1944. No. 1 SFTS came under the control of Southern Area Command, headquartered in Melbourne. The school's complement of 52 aircraft included Wapitis, Cadets, Avro Ansons, Hawker Demons, and a de Havilland Tiger Moth. Group Captain John McCauley served as commanding officer from October 1940 until July 1941, when he handed over to Wing Commander Roy King, who went on to take charge of Station Headquarters Point Cook in October. As of July, No. 1 SFTS was operating more than 100 aircraft, including Gipsy Moths, de Havilland DH.89 Dragon Rapides, Douglas C-47 Dakotas, CAC Wirraways and Airspeed Oxfords, the last two being the mainstays. In August 1941, control of all training units in Victoria passed from Southern Area Command to the newly formed No. 1 Training Group. By September, the school had an establishment of 100 officers and over 2,000 airmen, including 300 cadets. It was organised into Intermediate Training Squadron, Advanced Training Squadron, Maintenance Wing, Armament School, and Signal School. Wing Commander Charles Read held command of No. 1 SFTS from October 1943 until its disbandment on 15 September 1944, by which time almost 3,000 pilots had graduated. Among these were Nicky Barr, who became one of Australia's leading fighter aces in North Africa, and Bill Newton, awarded the Victoria Cross for bombing raids in New Guinea. The RAAF had ordered the school's closure in August 1944 as part of a general reduction in aircrew training, after being informed by the British Air Ministry that it no longer required EATS graduates for the war in Europe. Significant reserves of trained Commonwealth aircrew had been built up in the UK early in 1944 before the invasion of Normandy, but lower-than-anticipated casualties had resulted in an over-supply that by 30 June numbered 3,000 Australians. ### Cold War On 1 March 1946, No. 5 Service Flying Training School at RAAF Station Uranquinty, New South Wales, was re-formed as No. 1 FTS, under Southern Area Command. Its complement of aircraft included one Anson, two Tiger Moths, and 55 Wirraways, though the unit was mainly responsible for the maintenance of equipment and little flying was undertaken apart from refresher courses for pilots posting to the British Commonwealth Occupation Force in Japan. By 1 September 1947, No. 1 FTS had transferred to Point Cook, initially as "Flying Training School", under Wing Commander Read. The RAAF's first post-war flying training course at the school consisted of 42 students and commenced in February 1948, finishing in August the following year. 
Flight grading took place after six months of general military training, at which point students were selected to be trainee pilots or navigators; the former remained at No. 1 FTS, and the latter transferred to the School of Air Navigation at RAAF Base East Sale, Victoria. Unlike some other air forces, which placed students into specialised aircraft roles after basic training, the RAAF's philosophy was to give all pilots essentially the same training from induction to graduation, so they would be able to convert more easily from one aircraft type to another as operational requirements evolved. In September 1949, Read handed over to Squadron Leader Glen Cooper, who commanded the school until August 1951. In response to demands for more aircrew to fulfil Australia's commitments to the Korean War and Malayan Emergency, flying training underwent major changes in 1951–52, the syllabus at No. 1 FTS being split among three separately located units. No. 1 Initial Flying Training School (No. 1 IFTS) was raised at RAAF Station Archerfield, Queensland, to impart general aeronautical and military knowledge to students, after which they received their flight grading during twelve hours on Tiger Moths. Graduates of No. 1 IFTS went on to the newly formed No. 1 Basic Flying Training School (No. 1 BFTS) at Uranquinty, where they underwent a further 90 hours of aerial instruction that included instrument, formation and night flying, first on Tiger Moths and then on Wirraways. Successful students finally transferred to No. 1 FTS, which was renamed No. 1 Applied Flying Training School (No. 1 AFTS) in March 1952. There they undertook 100 flying hours of advanced weapons and combat training on Wirraways, before graduating as sergeant pilots. RAAF College, formed at Point Cook in 1947, was to be the Air Force's primary source of commissioned officers. The Tiger Moths and Wirraways of No. 1 BFTS were subsequently replaced by the CAC Winjeel, first delivered in 1955. By the time it was re-formed as No. 1 AFTS, the flying school at Point Cook had also been made responsible for training the RAAF's air traffic controllers; this role was transferred to Central Flying School at East Sale in December 1956. Southern Area Command was re-formed as Training Command in September 1953. On 28 May 1958, No. 1 AFTS relocated to RAAF Base Pearce, Western Australia, where its Wirraways were replaced by de Havilland Vampire jet trainers, which required a runway longer than that at Point Cook. The school's place at Point Cook was taken by No. 1 BFTS, which transferred from Uranquinty on 19 December. By this time the RAAF had decided to commission all pilots and navigators, who would be selected for these roles upon induction into the service; navigators therefore went straight to the School of Air Navigation at East Sale, without attending flying training school. On 31 December 1958, the Flying Training Squadron of RAAF College was disbanded, and the flight instruction component of the four-year cadet course became the responsibility of No. 1 BFTS (for basic training) and No. 1 AFTS (for advanced training). Previously, the cadets had used FTS aircraft under RAAF College instructors, but from 1959 their flight training was fully integrated with the FTS system. The demand for trained aircrew, which had lessened in the mid-1950s, rose again the following decade as a result of the RAAF embarking on a major re-equipment program, and Australia's increasing involvement in the Vietnam War. 
The RAAF also had an ongoing commitment to providing flying training to students from the Australian Army and Royal Australian Navy. By adding instructors and increasing the ratio of pupils to instructors, the number of Air Force graduates was progressively raised from 38 in 1963 to 100 in 1968. Also in 1968, Macchi MB-326H jet trainers began replacing the Vampires of No. 1 AFTS. The introduction of the Macchi led to a brief flirtation with "all-through" jet training in the Air Force, consisting of 210 hours on this one type of aircraft. The experiment was dropped after two courses as being, in the words of the official historian of the post-war RAAF, "an expensive way of finding out that some pupils lacked the aptitude to become military pilots"; by 1971 students were receiving 60 hours of basic training on Winjeels at Point Cook, and the Macchi course at Pearce was reduced to 150 hours. On 31 December 1968, No. 1 AFTS was disbanded at Pearce, re-forming on 1 January 1969 as No. 2 Flying Training School. At the same time, No. 1 BFTS was disbanded at Point Cook and re-formed as No. 1 FTS. The Winjeels of No. 1 FTS were replaced by CT-4A Airtrainers in late 1975. The first CT-4 pilots' course of 34 students included six from the Royal Australian Navy and three from Malaysia. By 1977, the school was organised into Air Training, Ground Training and Maintenance Squadrons. As well as maintaining its own aircraft, it was responsible for technical support of other units at Point Cook. The Queen's Colour was presented to No. 1 FTS by the Governor-General, Sir Zelman Cowen, in 1981. In November 1989, one of the school's CT-4s re-created the first trans-Australia flight that had taken place 70 years before, when Captain Henry Wrigley and Sergeant Arthur "Spud" Murphy flew a Royal Aircraft Factory B.E.2 biplane from Point Cook to Darwin, Northern Territory, between 16 November and 12 December 1919. A review of undergraduate flying training, commissioned by the Chief of the Air Staff (CAS), Air Marshal Ray Funnell, and aimed at reducing failure rates and improving cost-effectiveness, saw the retirement of the CT-4s in December 1992, followed by the closure of No. 1 FTS. The last RAAF flying course was completed on 12 June 1992, and the last Army pilots' course in December. The school was disbanded on 31 January 1993, bringing to an end almost 80 years of military flying training at Point Cook, Australia's oldest military air base. The occasion was marked by a parading of the Queen's Colour and a flypast by six CT-4s in front of the new CAS, Air Marshal Barry Gration. This was followed by a service at the RAAF Chapel of the Holy Trinity overflown by four Winjeels and a Tiger Moth, and later an all-ranks dining-in night. Concurrent with the phase-out of training at No. 1 FTS, British Aerospace was contracted to conduct flight grading at its base in Tamworth, New South Wales. Subsequent all-through flight training on the Pilatus PC-9 took place at No. 2 FTS, Pearce. In 1998, British Aerospace was granted a contract to supply tri-service basic flying instruction at the newly formed Australian Defence Force Basic Flying Training School (ADFBFTS) in Tamworth, the first course commencing in January 1999 on CT-4B Airtrainers, and No. 2 FTS again became responsible for advanced flying training only. ADFBFTS thus became, according to the school's head of training, "the No. 1 Flying Training School you have when you don't have a No. 1 Flying Training School". 
### Reactivation Following the disbandment of the ADFBFTS, No. 1 FTS was re-formed in January 2019 at RAAF Base East Sale to conduct basic flying training on the Pilatus PC-21. The school commenced its first course since reactivation on 14 January, and ten students graduated on 12 July. The re-formed No. 1 FTS came under the control of Air Academy, part of Air Force Training Group.
26,197
Radiocarbon dating
1,171,361,144
Method of determining the age of objects
[ "1940s introductions", "American inventions", "Carbon", "Conservation and restoration of cultural heritage", "Isotopes of carbon", "Radioactivity", "Radiocarbon dating", "Radiometric dating" ]
Radiocarbon dating (also referred to as carbon dating or carbon-14 dating) is a method for determining the age of an object containing organic material by using the properties of radiocarbon, a radioactive isotope of carbon. The method was developed in the late 1940s at the University of Chicago by Willard Libby. It is based on the fact that radiocarbon (<sup>14</sup> C) is constantly being created in the Earth's atmosphere by the interaction of cosmic rays with atmospheric nitrogen. The resulting <sup>14</sup> C combines with atmospheric oxygen to form radioactive carbon dioxide, which is incorporated into plants by photosynthesis; animals then acquire <sup>14</sup> C by eating the plants. When the animal or plant dies, it stops exchanging carbon with its environment, and thereafter the amount of <sup>14</sup> C it contains begins to decrease as the <sup>14</sup> C undergoes radioactive decay. Measuring the proportion of <sup>14</sup> C in a sample from a dead plant or animal, such as a piece of wood or a fragment of bone, provides information that can be used to calculate when the animal or plant died. The older a sample is, the less <sup>14</sup> C there is to be detected, and because the half-life of <sup>14</sup> C (the period of time after which half of a given sample will have decayed) is about 5,730 years, the oldest dates that can be reliably measured by this process date to approximately 50,000 years ago (in this interval about 99.8% of the <sup>14</sup> C will have decayed), although special preparation methods occasionally make an accurate analysis of older samples possible. In 1960, Libby received the Nobel Prize in Chemistry for his work. Research has been ongoing since the 1960s to determine what the proportion of <sup>14</sup> C in the atmosphere has been over the past 50,000 years. The resulting data, in the form of a calibration curve, is now used to convert a given measurement of radiocarbon in a sample into an estimate of the sample's calendar age. Other corrections must be made to account for the proportion of <sup>14</sup> C in different types of organisms (fractionation), and the varying levels of <sup>14</sup> C throughout the biosphere (reservoir effects). Additional complications come from the burning of fossil fuels such as coal and oil, and from the above-ground nuclear tests performed in the 1950s and 1960s. Because the time it takes to convert biological materials to fossil fuels is substantially longer than the time it takes for its <sup>14</sup> C to decay below detectable levels, fossil fuels contain almost no <sup>14</sup> C. As a result, beginning in the late 19th century, there was a noticeable drop in the proportion of <sup>14</sup> C in the atmosphere as the carbon dioxide generated from burning fossil fuels began to accumulate. Conversely, nuclear testing increased the amount of <sup>14</sup> C in the atmosphere, which reached a maximum in about 1965 of almost double the amount present in the atmosphere prior to nuclear testing. Measurement of radiocarbon was originally done with beta-counting devices, which counted the amount of beta radiation emitted by decaying <sup>14</sup> C atoms in a sample. More recently, accelerator mass spectrometry has become the method of choice; it counts all the <sup>14</sup> C atoms in the sample and not just the few that happen to decay during the measurements; it can therefore be used with much smaller samples (as small as individual plant seeds), and gives results much more quickly. 
The development of radiocarbon dating has had a profound impact on archaeology. In addition to permitting more accurate dating within archaeological sites than previous methods, it allows comparison of dates of events across great distances. Histories of archaeology often refer to its impact as the "radiocarbon revolution". Radiocarbon dating has allowed key transitions in prehistory to be dated, such as the end of the last ice age, and the beginning of the Neolithic and Bronze Age in different regions. ## Background ### History In 1939, Martin Kamen and Samuel Ruben of the Radiation Laboratory at Berkeley began experiments to determine if any of the elements common in organic matter had isotopes with half-lives long enough to be of value in biomedical research. They synthesized <sup>14</sup> C using the laboratory's cyclotron accelerator and soon discovered that the atom's half-life was far longer than had been previously thought. This was followed by a prediction by Serge A. Korff, then employed at the Franklin Institute in Philadelphia, that the interaction of thermal neutrons with <sup>14</sup> N in the upper atmosphere would create <sup>14</sup> C. It had previously been thought that <sup>14</sup> C would be more likely to be created by deuterons interacting with <sup>13</sup> C. At some time during World War II, Willard Libby, who was then at Berkeley, learned of Korff's research and conceived the idea that it might be possible to use radiocarbon for dating. In 1945, Libby moved to the University of Chicago, where he began his work on radiocarbon dating. He published a paper in 1946 in which he proposed that the carbon in living matter might include <sup>14</sup> C as well as non-radioactive carbon. Libby and several collaborators proceeded to experiment with methane collected from sewage works in Baltimore, and after isotopically enriching their samples they were able to demonstrate that they contained <sup>14</sup> C. By contrast, methane created from petroleum showed no radiocarbon activity because of its age. The results were summarized in a paper in Science in 1947, in which the authors commented that their results implied it would be possible to date materials containing carbon of organic origin. Libby and James Arnold proceeded to test the radiocarbon dating theory by analyzing samples with known ages. For example, two samples taken from the tombs of two Egyptian kings, Zoser and Sneferu, independently dated to 2625 BC plus or minus 75 years, were dated by radiocarbon measurement to an average of 2800 BC plus or minus 250 years. These results were published in Science in December 1949. Within 11 years of their announcement, more than 20 radiocarbon dating laboratories had been set up worldwide. In 1960, Libby was awarded the Nobel Prize in Chemistry for this work. ### Physical and chemical details In nature, carbon exists as three isotopes: two stable, nonradioactive (carbon-12 (<sup>12</sup> C), and carbon-13 (<sup>13</sup> C), and one radioactive carbon-14 (<sup>14</sup> C), also known as "radiocarbon"). The half-life of <sup>14</sup> C (the time it takes for half of a given amount of <sup>14</sup> C to decay) is about 5,730 years, so its concentration in the atmosphere might be expected to decrease over thousands of years, but <sup>14</sup> C is constantly being produced in the lower stratosphere and upper troposphere, primarily by galactic cosmic rays, and to a lesser degree by solar cosmic rays. 
These cosmic rays generate neutrons as they travel through the atmosphere which can strike nitrogen-14 (<sup>14</sup> N) atoms and turn them into <sup>14</sup> C. The following nuclear reaction is the main pathway by which <sup>14</sup> C is created: n + <sup>14</sup> <sub>7</sub>N → <sup>14</sup> <sub>6</sub>C + p where n represents a neutron and p represents a proton. Once produced, the <sup>14</sup> C quickly combines with the oxygen (O) in the atmosphere to form first carbon monoxide (CO), and ultimately carbon dioxide (CO <sub>2</sub>): <sup>14</sup>C + O<sub>2</sub> → <sup>14</sup>CO + O; <sup>14</sup>CO + OH → <sup>14</sup>CO<sub>2</sub> + H. Carbon dioxide produced in this way diffuses in the atmosphere, is dissolved in the ocean, and is taken up by plants via photosynthesis. Animals eat the plants, and ultimately the radiocarbon is distributed throughout the biosphere. The ratio of <sup>14</sup> C to <sup>12</sup> C is approximately 1.25 parts of <sup>14</sup> C to 10<sup>12</sup> parts of <sup>12</sup> C. In addition, about 1% of the carbon atoms are of the stable isotope <sup>13</sup> C. The equation for the radioactive decay of <sup>14</sup> C is: <sup>14</sup> <sub>6</sub>C → <sup>14</sup> <sub>7</sub>N + e<sup>−</sup> + ν̄<sub>e</sub> By emitting a beta particle (an electron, e<sup>−</sup>) and an electron antineutrino (ν̄<sub>e</sub>), one of the neutrons in the <sup>14</sup> C nucleus changes to a proton and the <sup>14</sup> C nucleus reverts to the stable (non-radioactive) isotope <sup>14</sup> N. ### Principles During its life, a plant or animal is in equilibrium with its surroundings by exchanging carbon either with the atmosphere or through its diet. It will, therefore, have the same proportion of <sup>14</sup> C as the atmosphere, or, in the case of marine animals or plants, as the ocean. Once it dies, it ceases to acquire <sup>14</sup> C, but the <sup>14</sup> C within its biological material at that time will continue to decay, and so the ratio of <sup>14</sup> C to <sup>12</sup> C in its remains will gradually decrease. Because <sup>14</sup> C decays at a known rate, the proportion of radiocarbon can be used to determine how long it has been since a given sample stopped exchanging carbon – the older the sample, the less <sup>14</sup> C will be left. The equation governing the decay of a radioactive isotope is: $N = N_0 \, e^{-\lambda t}\,$ where N<sub>0</sub> is the number of atoms of the isotope in the original sample (at time t = 0, when the organism from which the sample was taken died), and N is the number of atoms left after time t. λ is a constant that depends on the particular isotope; for a given isotope it is equal to the reciprocal of the mean-life – i.e. the average or expected time a given atom will survive before undergoing radioactive decay. The mean-life, denoted by τ, of <sup>14</sup> C is 8,267 years, so the equation above can be rewritten as: $t = \ln(N_0/N) \cdot \text{8267 years}$ The sample is assumed to have originally had the same <sup>14</sup> C/<sup>12</sup> C ratio as the ratio in the atmosphere, and since the size of the sample is known, the total number of atoms in the sample can be calculated, yielding N<sub>0</sub>, the number of <sup>14</sup> C atoms in the original sample. Measurement of N, the number of <sup>14</sup> C atoms currently in the sample, allows the calculation of t, the age of the sample, using the equation above. 
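The age calculation described above can be made concrete with a minimal sketch. This is illustrative only, assuming nothing beyond the mean-life quoted in the text; the function name and the example ratio are invented for the illustration.

```python
import math

MEAN_LIFE_YEARS = 8267  # mean-life (tau) of carbon-14, as quoted above


def age_from_ratio(n_remaining: float, n_original: float) -> float:
    """Elapsed time t in years from N = N0 * exp(-t / tau), i.e. t = tau * ln(N0 / N)."""
    return MEAN_LIFE_YEARS * math.log(n_original / n_remaining)


# A sample retaining half of its original carbon-14 gives an age of one
# half-life, roughly 5,730 years.
print(round(age_from_ratio(0.5, 1.0)))  # ~5730
```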
The half-life of a radioactive isotope (usually denoted by t<sub>1/2</sub>) is a more familiar concept than the mean-life, so although the equations above are expressed in terms of the mean-life, it is more usual to quote the value of <sup>14</sup> C's half-life than its mean-life. The currently accepted value for the half-life of <sup>14</sup> C is 5,700 ± 30 years. This means that after 5,700 years, only half of the initial <sup>14</sup> C will remain; a quarter will remain after 11,400 years; an eighth after 17,100 years; and so on. The above calculations make several assumptions, such as that the level of <sup>14</sup> C in the atmosphere has remained constant over time. In fact, the level of <sup>14</sup> C in the atmosphere has varied significantly and as a result, the values provided by the equation above have to be corrected by using data from other sources. This is done by calibration curves (discussed below), which convert a measurement of <sup>14</sup> C in a sample into an estimated calendar age. The calculations involve several steps and include an intermediate value called the "radiocarbon age", which is the age in "radiocarbon years" of the sample: an age quoted in radiocarbon years means that no calibration curve has been used − the calculations for radiocarbon years assume that the atmospheric <sup>14</sup> C/<sup>12</sup> C ratio has not changed over time. Calculating radiocarbon ages also requires the value of the half-life for <sup>14</sup> C. In Libby's 1949 paper he used a value of 5720 ± 47 years, based on research by Engelkemeir et al. This was remarkably close to the modern value, but shortly afterwards the accepted value was revised to 5568 ± 30 years, and this value was in use for more than a decade. It was revised again in the early 1960s to 5,730 ± 40 years, which meant that many calculated dates in papers published prior to this were incorrect (the error in the half-life is about 3%). For consistency with these early papers, it was agreed at the 1962 Radiocarbon Conference in Cambridge (UK) to use the "Libby half-life" of 5568 years. Radiocarbon ages are still calculated using this half-life, and are known as "Conventional Radiocarbon Age". Since the calibration curve (IntCal) also reports past atmospheric <sup>14</sup> C concentration using this conventional age, any conventional ages calibrated against the IntCal curve will produce a correct calibrated age. When a date is quoted, the reader should be aware that if it is an uncalibrated date (a term used for dates given in radiocarbon years) it may differ substantially from the best estimate of the actual calendar date, both because it uses the wrong value for the half-life of <sup>14</sup> C, and because no correction (calibration) has been applied for the historical variation of <sup>14</sup> C in the atmosphere over time. ### Carbon exchange reservoir Carbon is distributed throughout the atmosphere, the biosphere, and the oceans; these are referred to collectively as the carbon exchange reservoir, and each component is also referred to individually as a carbon exchange reservoir. The different elements of the carbon exchange reservoir vary in how much carbon they store, and in how long it takes for the <sup>14</sup> C generated by cosmic rays to fully mix with them. This affects the ratio of <sup>14</sup> C to <sup>12</sup> C in the different reservoirs, and hence the radiocarbon ages of samples that originated in each reservoir. 
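Before turning to the individual reservoirs, the effect of the half-life revision described above can be checked numerically. The sketch below is illustrative only (the constant names and the example ratio are not from the text): it computes an uncalibrated age with both the Libby half-life of 5,568 years and the revised value of 5,730 years, and the two results differ by about 3%, matching the error margin noted above.

```python
import math

LIBBY_HALF_LIFE = 5568    # years, retained for conventional radiocarbon ages
REVISED_HALF_LIFE = 5730  # years, the value adopted in the early 1960s


def uncalibrated_age(ratio_remaining: float, half_life: float) -> float:
    """Radiocarbon age in years for a measured 14C fraction of the assumed
    original (atmospheric) level, using the given half-life."""
    mean_life = half_life / math.log(2)  # tau = half-life / ln 2
    return mean_life * math.log(1.0 / ratio_remaining)


ratio = 0.25  # illustrative: a quarter of the original 14C remains
print(round(uncalibrated_age(ratio, LIBBY_HALF_LIFE)))    # ~11136 radiocarbon years
print(round(uncalibrated_age(ratio, REVISED_HALF_LIFE)))  # ~11460 years, about 3% higher
```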
The atmosphere, which is where <sup>14</sup> C is generated, contains about 1.9% of the total carbon in the reservoirs, and the <sup>14</sup> C it contains mixes in less than seven years. The ratio of <sup>14</sup> C to <sup>12</sup> C in the atmosphere is taken as the baseline for the other reservoirs: if another reservoir has a lower ratio of <sup>14</sup> C to <sup>12</sup> C, it indicates that the carbon is older and hence that either some of the <sup>14</sup> C has decayed, or the reservoir is receiving carbon that is not at the atmospheric baseline. The ocean surface is an example: it contains 2.4% of the carbon in the exchange reservoir, but there is only about 95% as much <sup>14</sup> C as would be expected if the ratio were the same as in the atmosphere. The time it takes for carbon from the atmosphere to mix with the surface ocean is only a few years, but the surface waters also receive water from the deep ocean, which has more than 90% of the carbon in the reservoir. Water in the deep ocean takes about 1,000 years to circulate back through surface waters, and so the surface waters contain a combination of older water, with depleted <sup>14</sup> C, and water recently at the surface, with <sup>14</sup> C in equilibrium with the atmosphere. Creatures living at the ocean surface have the same <sup>14</sup> C ratios as the water they live in, and as a result of the reduced <sup>14</sup> C/<sup>12</sup> C ratio, the radiocarbon age of marine life is typically about 400 years. Organisms on land are in closer equilibrium with the atmosphere and have the same <sup>14</sup> C/<sup>12</sup> C ratio as the atmosphere. These organisms contain about 1.3% of the carbon in the reservoir; sea organisms have a mass of less than 1% of those on land and are not shown in the diagram. Accumulated dead organic matter, of both plants and animals, exceeds the mass of the biosphere by a factor of nearly 3, and since this matter is no longer exchanging carbon with its environment, it has a <sup>14</sup> C/<sup>12</sup> C ratio lower than that of the biosphere. ## Dating considerations The variation in the <sup>14</sup> C/<sup>12</sup> C ratio in different parts of the carbon exchange reservoir means that a straightforward calculation of the age of a sample based on the amount of <sup>14</sup> C it contains will often give an incorrect result. There are several other possible sources of error that need to be considered. The errors are of four general types: - variations in the <sup>14</sup> C/<sup>12</sup> C ratio in the atmosphere, both geographically and over time; - isotopic fractionation; - variations in the <sup>14</sup> C/<sup>12</sup> C ratio in different parts of the reservoir; - contamination. ### Atmospheric variation In the early years of using the technique, it was understood that it depended on the atmospheric <sup>14</sup> C/<sup>12</sup> C ratio having remained the same over the preceding few thousand years. To verify the accuracy of the method, several artefacts that were datable by other techniques were tested; the results of the testing were in reasonable agreement with the true ages of the objects. Over time, however, discrepancies began to appear between the known chronology for the oldest Egyptian dynasties and the radiocarbon dates of Egyptian artefacts. Neither the pre-existing Egyptian chronology nor the new radiocarbon dating method could be assumed to be accurate, but a third possibility was that the <sup>14</sup> C/<sup>12</sup> C ratio had changed over time. 
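As a rough cross-check of the figures in this paragraph (a back-of-the-envelope sketch using the mean-life quoted earlier, not a substitute for a proper reservoir correction), surface water holding about 95% of the atmospheric <sup>14</sup> C level corresponds to an apparent age of roughly 400 years:

```python
import math

MEAN_LIFE_YEARS = 8267  # mean-life of carbon-14


def apparent_age(fraction_of_atmospheric_14c: float) -> float:
    """Apparent radiocarbon age in years of a reservoir whose 14C level is the
    given fraction of the atmospheric baseline."""
    return MEAN_LIFE_YEARS * math.log(1.0 / fraction_of_atmospheric_14c)


print(round(apparent_age(0.95)))  # ~424 years, consistent with "about 400 years"
```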
The question was resolved by the study of tree rings: comparison of overlapping series of tree rings allowed the construction of a continuous sequence of tree-ring data that spanned 8,000 years. (Since that time the tree-ring data series has been extended to 13,900 years.) In the 1960s, Hans Suess was able to use the tree-ring sequence to show that the dates derived from radiocarbon were consistent with the dates assigned by Egyptologists. This was possible because although annual plants, such as corn, have a <sup>14</sup> C/<sup>12</sup> C ratio that reflects the atmospheric ratio at the time they were growing, trees only add material to their outermost tree ring in any given year, while the inner tree rings do not get their <sup>14</sup> C replenished and instead start losing <sup>14</sup> C through decay. Hence each ring preserves a record of the atmospheric <sup>14</sup> C/<sup>12</sup> C ratio of the year it grew in. Carbon-dating the wood from the tree rings themselves provides the check needed on the atmospheric <sup>14</sup> C/<sup>12</sup> C ratio: with a sample of known date, and a measurement of the value of N (the number of atoms of <sup>14</sup> C remaining in the sample), the carbon-dating equation allows the calculation of N<sub>0</sub> – the number of atoms of <sup>14</sup> C in the sample at the time the tree ring was formed – and hence the <sup>14</sup> C/<sup>12</sup> C ratio in the atmosphere at that time. Equipped with the results of carbon-dating the tree rings, it became possible to construct calibration curves designed to correct the errors caused by the variation over time in the <sup>14</sup> C/<sup>12</sup> C ratio. These curves are described in more detail below. Coal and oil began to be burned in large quantities during the 19th century. Both are sufficiently old that they contain little or no detectable <sup>14</sup> C and, as a result, the CO <sub>2</sub> released substantially diluted the atmospheric <sup>14</sup> C/<sup>12</sup> C ratio. Dating an object from the early 20th century hence gives an apparent date older than the true date. For the same reason, <sup>14</sup> C concentrations in the neighbourhood of large cities are lower than the atmospheric average. This fossil fuel effect (also known as the Suess effect, after Hans Suess, who first reported it in 1955) would only amount to a reduction of 0.2% in <sup>14</sup> C activity if the additional carbon from fossil fuels were distributed throughout the carbon exchange reservoir, but because of the long delay in mixing with the deep ocean, the actual effect is a 3% reduction. A much larger effect comes from above-ground nuclear testing, which released large numbers of neutrons into the atmosphere, resulting in the creation of <sup>14</sup> C. From about 1950 until 1963, when atmospheric nuclear testing was banned, it is estimated that several tonnes of <sup>14</sup> C were created. If all this extra <sup>14</sup> C had immediately been spread across the entire carbon exchange reservoir, it would have led to an increase in the <sup>14</sup> C/<sup>12</sup> C ratio of only a few per cent, but the immediate effect was to almost double the amount of <sup>14</sup> C in the atmosphere, with the peak level occurring in 1964 for the northern hemisphere, and in 1966 for the southern hemisphere. The level has since dropped, as this bomb pulse or "bomb carbon" (as it is sometimes called) percolates into the rest of the reservoir. 
### Isotopic fractionation Photosynthesis is the primary process by which carbon moves from the atmosphere into living things. In photosynthetic pathways <sup>12</sup> C is absorbed slightly more easily than <sup>13</sup> C, which in turn is more easily absorbed than <sup>14</sup> C. The differential uptake of the three carbon isotopes leads to <sup>13</sup> C/<sup>12</sup> C and <sup>14</sup> C/<sup>12</sup> C ratios in plants that differ from the ratios in the atmosphere. This effect is known as isotopic fractionation. To determine the degree of fractionation that takes place in a given plant, the amounts of both <sup>12</sup> C and <sup>13</sup> C isotopes are measured, and the resulting <sup>13</sup> C/<sup>12</sup> C ratio is then compared to a standard ratio known as PDB. The <sup>13</sup> C/<sup>12</sup> C ratio is used instead of <sup>14</sup> C/<sup>12</sup> C because the former is much easier to measure, and the latter can be easily derived: the depletion of <sup>13</sup> C relative to <sup>12</sup> C is proportional to the difference in the atomic masses of the two isotopes, so the depletion for <sup>14</sup> C is twice the depletion of <sup>13</sup> C. The fractionation of <sup>13</sup> C, known as δ<sup>13</sup>C, is calculated as follows: $\delta \ce{^{13}C} = \left( \frac{\left( \frac{\ce{^{13}C}}{\ce{^{12}C}} \right)_{\text{sample}}}{\left( \frac{\ce{^{13}C}}{\ce{^{12}C}} \right)_{\text{standard}}} - 1 \right) \times 1000$ ‰ where the ‰ sign indicates parts per thousand. Because the PDB standard contains an unusually high proportion of <sup>13</sup> C, most measured δ<sup>13</sup>C values are negative. For marine organisms, the details of the photosynthesis reactions are less well understood, and the δ<sup>13</sup>C values for marine photosynthetic organisms are dependent on temperature. At higher temperatures, CO <sub>2</sub> has poor solubility in water, which means there is less CO <sub>2</sub> available for the photosynthetic reactions. Under these conditions, fractionation is reduced, and at temperatures above 14 °C (57 °F) the δ<sup>13</sup>C values are correspondingly higher, while at lower temperatures, CO <sub>2</sub> becomes more soluble and hence more available to marine organisms. The δ<sup>13</sup>C value for animals depends on their diet. An animal that eats food with high δ<sup>13</sup>C values will have a higher δ<sup>13</sup>C than one that eats food with lower δ<sup>13</sup>C values. The animal's own biochemical processes can also impact the results: for example, both bone minerals and bone collagen typically have a higher concentration of <sup>13</sup> C than is found in the animal's diet, though for different biochemical reasons. The enrichment of bone <sup>13</sup> C also implies that excreted material is depleted in <sup>13</sup> C relative to the diet. Since <sup>13</sup> C makes up about 1% of the carbon in a sample, the <sup>13</sup> C/<sup>12</sup> C ratio can be accurately measured by mass spectrometry. Typical values of δ<sup>13</sup>C have been found by experiment for many plants, as well as for different parts of animals such as bone collagen, but when dating a given sample it is better to determine the δ<sup>13</sup>C value for that sample directly than to rely on the published values. The carbon exchange between atmospheric CO <sub>2</sub> and carbonate at the ocean surface is also subject to fractionation, with <sup>14</sup> C in the atmosphere more likely than <sup>12</sup> C to dissolve in the ocean. 
The result is an overall increase in the <sup>14</sup> C/<sup>12</sup> C ratio in the ocean of 1.5%, relative to the <sup>14</sup> C/<sup>12</sup> C ratio in the atmosphere. This increase in <sup>14</sup> C concentration almost exactly cancels out the decrease caused by the upwelling of water (containing old, and hence <sup>14</sup> C-depleted, carbon) from the deep ocean, so that direct measurements of <sup>14</sup> C radiation are similar to measurements for the rest of the biosphere. Correcting for isotopic fractionation, as is done for all radiocarbon dates to allow comparison between results from different parts of the biosphere, gives an apparent age of about 400 years for ocean surface water. ### Reservoir effects Libby's original exchange reservoir hypothesis assumed that the <sup>14</sup> C/<sup>12</sup> C ratio in the exchange reservoir is constant all over the world, but it has since been discovered that there are several causes of variation in the ratio across the reservoir. #### Marine effect The CO <sub>2</sub> in the atmosphere transfers to the ocean by dissolving in the surface water as carbonate and bicarbonate ions; at the same time the carbonate ions in the water are returning to the air as CO <sub>2</sub>. This exchange process brings <sup>14</sup> C from the atmosphere into the surface waters of the ocean, but the <sup>14</sup> C thus introduced takes a long time to percolate through the entire volume of the ocean. The deepest parts of the ocean mix very slowly with the surface waters, and the mixing is uneven. The main mechanism that brings deep water to the surface is upwelling, which is more common in regions closer to the equator. Upwelling is also influenced by factors such as the topography of the local ocean bottom and coastlines, the climate, and wind patterns. Overall, the mixing of deep and surface waters takes far longer than the mixing of atmospheric CO <sub>2</sub> with the surface waters, and as a result water from some deep ocean areas has an apparent radiocarbon age of several thousand years. Upwelling mixes this "old" water with the surface water, giving the surface water an apparent age of about several hundred years (after correcting for fractionation). This effect is not uniform – the average effect is about 400 years, but there are local deviations of several hundred years for areas that are geographically close to each other. These deviations can be accounted for in calibration, and users of software such as CALIB can provide as an input the appropriate correction for the location of their samples. The effect also applies to marine organisms such as shells, and marine mammals such as whales and seals, which have radiocarbon ages that appear to be hundreds of years old. #### Hemisphere effect The northern and southern hemispheres have atmospheric circulation systems that are sufficiently independent of each other that there is a noticeable time lag in mixing between the two. The atmospheric <sup>14</sup> C/<sup>12</sup> C ratio is lower in the southern hemisphere, with an apparent additional age of about 40 years for radiocarbon results from the south as compared to the north. This is because the greater surface area of ocean in the southern hemisphere means that there is more carbon exchanged between the ocean and the atmosphere than in the north. Since the surface ocean is depleted in <sup>14</sup> C because of the marine effect, <sup>14</sup> C is removed from the southern atmosphere more quickly than in the north. 
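The fractionation correction discussed above can be sketched as follows, using the δ<sup>13</sup>C definition and the rule that <sup>14</sup> C fractionation is roughly twice that of <sup>13</sup> C. This is a simplified linear approximation, not the exact procedure used by dating laboratories, and the ocean-carbonate δ<sup>13</sup>C value is an assumed round number; it does, however, reproduce the roughly 400-year apparent age of surface water quoted above.

```python
import math

def delta13c(r13_sample: float, r13_standard: float) -> float:
    """delta-13C in per mil relative to the PDB standard (formula above)."""
    return (r13_sample / r13_standard - 1.0) * 1000.0

def normalise_14c(r14_measured: float, d13c_sample: float,
                  d13c_reference: float = -25.0) -> float:
    """Normalise a measured 14C/12C ratio to the reference delta-13C of
    -25 per mil (wood), using the rule that 14C fractionation is roughly
    twice the 13C fractionation.  A linear approximation only."""
    return r14_measured * (1.0 - 2.0 * (d13c_sample - d13c_reference) / 1000.0)

# Ocean-surface carbonate with an assumed delta-13C near 0 per mil loses
# about 5% of its apparent 14C on normalisation, i.e. roughly 400
# radiocarbon years, consistent with the apparent age quoted above.
r_norm = normalise_14c(1.0, d13c_sample=0.0)       # 0.95
print(round(-8033 * math.log(r_norm)))             # ~410 years
```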
The effect is strengthened by strong upwelling around Antarctica. #### Other effects If the carbon in freshwater is partly acquired from aged carbon, such as rocks, then the result will be a reduction in the <sup>14</sup> C/<sup>12</sup> C ratio in the water. For example, rivers that pass over limestone, which is mostly composed of calcium carbonate, will acquire carbonate ions. Similarly, groundwater can contain carbon derived from the rocks through which it has passed. These rocks are usually so old that they no longer contain any measurable <sup>14</sup> C, so this carbon lowers the <sup>14</sup> C/<sup>12</sup> C ratio of the water it enters, which can lead to apparent ages of thousands of years for both the affected water and the plants and freshwater organisms that live in it. This is known as the hard water effect because it is often associated with calcium ions, which are characteristic of hard water; other sources of carbon such as humus can produce similar results, and can also reduce the apparent age if they are of more recent origin than the sample. The effect varies greatly and there is no general offset that can be applied; additional research is usually needed to determine the size of the offset, for example by comparing the radiocarbon age of deposited freshwater shells with associated organic material. Volcanic eruptions eject large amounts of carbon into the air. The carbon is of geological origin and has no detectable <sup>14</sup> C, so the <sup>14</sup> C/<sup>12</sup> C ratio in the vicinity of the volcano is depressed relative to surrounding areas. Dormant volcanoes can also emit aged carbon. Plants that photosynthesize this carbon also have lower <sup>14</sup> C/<sup>12</sup> C ratios: for example, plants in the neighbourhood of the Furnas caldera in the Azores were found to have apparent ages that ranged from 250 years to 3320 years. ### Contamination Any addition of carbon to a sample of a different age will cause the measured date to be inaccurate. Contamination with modern carbon causes a sample to appear to be younger than it really is: the effect is greater for older samples. If a sample that is 17,000 years old is contaminated so that 1% of the sample is modern carbon, it will appear to be 600 years younger; for a sample that is 34,000 years old, the same amount of contamination would cause an error of 4,000 years. Contamination with old carbon, with no remaining <sup>14</sup> C, causes an error in the other direction independent of age – a sample contaminated with 1% old carbon will appear to be about 80 years older than it truly is, regardless of the date of the sample. ## Samples Samples for dating need to be converted into a form suitable for measuring the <sup>14</sup> C content; this can mean conversion to gaseous, liquid, or solid form, depending on the measurement technique to be used. Before this can be done, the sample must be treated to remove any contamination and any unwanted constituents. This includes removing visible contaminants, such as rootlets that may have penetrated the sample since its burial. Alkali and acid washes can be used to remove humic acid and carbonate contamination, but care has to be taken to avoid removing the part of the sample that contains the carbon to be tested. ### Material considerations - It is common to reduce a wood sample to just the cellulose component before testing, but since this can reduce the volume of the sample to 20% of its original size, testing of the whole wood is often performed as well. 
Charcoal is often tested but is likely to need treatment to remove contaminants. - Unburnt bone can be tested; it is usual to date it using collagen, the protein fraction that remains after washing away the bone's structural material. Hydroxyproline, one of the constituent amino acids in bone, was once thought to be a reliable indicator as it was not known to occur except in bone, but it has since been detected in groundwater. - For burnt bone, testability depends on the conditions under which the bone was burnt. If the bone was heated under reducing conditions, it (and associated organic matter) may have been carbonized. In this case, the sample is often usable. - Shells from both marine and land organisms consist almost entirely of calcium carbonate, either as aragonite or as calcite, or some mixture of the two. Calcium carbonate is very susceptible to dissolving and recrystallizing; the recrystallized material will contain carbon from the sample's environment, which may be of geological origin. If testing recrystallized shell is unavoidable, it is sometimes possible to identify the original shell material from a sequence of tests. It is also possible to test conchiolin, an organic protein found in shell, but it constitutes only 1–2% of shell material. - The three major components of peat are humic acid, humins, and fulvic acid. Of these, humins give the most reliable date as they are insoluble in alkali and less likely to contain contaminants from the sample's environment. A particular difficulty with dried peat is the removal of rootlets, which are likely to be hard to distinguish from the sample material. - Soil contains organic material, but because of the likelihood of contamination by humic acid of more recent origin, it is very difficult to get satisfactory radiocarbon dates. It is preferable to sieve the soil for fragments of organic origin, and date the fragments with methods that are tolerant of small sample sizes. - Other materials that have been successfully dated include ivory, paper, textiles, individual seeds and grains, straw from within mud bricks, and charred food remains found in pottery. ### Preparation and size Particularly for older samples, it may be useful to enrich the amount of <sup>14</sup> C in the sample before testing. This can be done with a thermal diffusion column. The process takes about a month and requires a sample about ten times as large as would be needed otherwise, but it allows more precise measurement of the <sup>14</sup> C/<sup>12</sup> C ratio in old material and extends the maximum age that can be reliably reported. Once contamination has been removed, samples must be converted to a form suitable for the measuring technology to be used. Where gas is required, CO <sub>2</sub> is widely used. For samples to be used in liquid scintillation counters, the carbon must be in liquid form; the sample is typically converted to benzene. For accelerator mass spectrometry, solid graphite targets are the most common, although gaseous CO <sub>2</sub> can also be used. The quantity of material needed for testing depends on the sample type and the technology being used. There are two types of testing technology: detectors that record radioactivity, known as beta counters, and accelerator mass spectrometers. For beta counters, a sample weighing at least 10 grams (0.35 ounces) is typically required. Accelerator mass spectrometry is much more sensitive, and samples containing as little as 0.5 milligrams of carbon can be used. 
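Returning briefly to the contamination figures quoted in the Contamination subsection above: they follow directly from the age relation used throughout this article. A minimal sketch, assuming the conventional 8,033-year mean-life and treating modern carbon as having a fraction modern of exactly 1:

```python
import math

MEAN_LIFE = 8033.0  # conventional (Libby-derived) mean-life, in years

def fm_from_age(age: float) -> float:
    """Fraction modern of a sample of the given true radiocarbon age."""
    return math.exp(-age / MEAN_LIFE)

def apparent_age(true_age: float, modern_fraction: float = 0.0,
                 dead_fraction: float = 0.0) -> float:
    """Apparent radiocarbon age after mixing in contaminant carbon.
    modern_fraction: proportion of modern carbon (fraction modern = 1);
    dead_fraction:   proportion of 14C-free ('old') carbon."""
    fm = fm_from_age(true_age)
    mixed = (1.0 - modern_fraction - dead_fraction) * fm + modern_fraction
    return -MEAN_LIFE * math.log(mixed)

print(round(apparent_age(17000, modern_fraction=0.01)))  # ~16,400: several hundred years too young
print(round(apparent_age(34000, modern_fraction=0.01)))  # ~29,800: roughly 4,000 years too young
print(round(apparent_age(17000, dead_fraction=0.01) - 17000))  # ~+80 years, whatever the true age
```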
## Measurement and results For decades after Libby performed the first radiocarbon dating experiments, the only way to measure the <sup>14</sup> C in a sample was to detect the radioactive decay of individual carbon atoms. In this approach, what is measured is the activity, in number of decay events per unit mass per time period, of the sample. This method is also known as "beta counting", because it is the beta particles emitted by the decaying <sup>14</sup> C atoms that are detected. In the late 1970s an alternative approach became available: directly counting the number of <sup>14</sup> C and <sup>12</sup> C atoms in a given sample, via accelerator mass spectrometry, usually referred to as AMS. AMS counts the <sup>14</sup> C/<sup>12</sup> C ratio directly, instead of the activity of the sample, but measurements of activity and <sup>14</sup> C/<sup>12</sup> C ratio can be converted into each other exactly. For some time, beta counting methods were more accurate than AMS, but AMS is now more accurate and has become the method of choice for radiocarbon measurements. In addition to improved accuracy, AMS has two further significant advantages over beta counting: it can perform accurate testing on samples much too small for beta counting, and it is much faster – an accuracy of 1% can be achieved in minutes with AMS, which is far quicker than would be achievable with the older technology. ### Beta counting Libby's first detector was a Geiger counter of his own design. He converted the carbon in his sample to lamp black (soot) and coated the inner surface of a cylinder with it. This cylinder was inserted into the counter in such a way that the counting wire was inside the sample cylinder, in order that there should be no material between the sample and the wire. Any interposing material would have interfered with the detection of radioactivity, since the beta particles emitted by decaying <sup>14</sup> C are so weak that half are stopped by a 0.01 mm (0.00039 in) thickness of aluminium. Libby's method was soon superseded by gas proportional counters, which were less affected by bomb carbon (the additional <sup>14</sup> C created by nuclear weapons testing). These counters record bursts of ionization caused by the beta particles emitted by the decaying <sup>14</sup> C atoms; the bursts are proportional to the energy of the particle, so other sources of ionization, such as background radiation, can be identified and ignored. The counters are surrounded by lead or steel shielding, to eliminate background radiation and to reduce the incidence of cosmic rays. In addition, anticoincidence detectors are used; these record events outside the counter and any event recorded simultaneously both inside and outside the counter is regarded as an extraneous event and ignored. The other common technology used for measuring <sup>14</sup> C activity is liquid scintillation counting, which was invented in 1950, but which had to wait until the early 1960s, when efficient methods of benzene synthesis were developed, to become competitive with gas counting; after 1970 liquid counters became the more common technology choice for newly constructed dating laboratories. The counters work by detecting flashes of light caused by the beta particles emitted by <sup>14</sup> C as they interact with a fluorescing agent added to the benzene. Like gas counters, liquid scintillation counters require shielding and anticoincidence counters. 
For both the gas proportional counter and liquid scintillation counter, what is measured is the number of beta particles detected in a given time period. Since the mass of the sample is known, this can be converted to a standard measure of activity in units of either counts per minute per gram of carbon (cpm/g C), or becquerels per kg (Bq/kg C, in SI units). Each measuring device is also used to measure the activity of a blank sample – a sample prepared from carbon old enough to have no activity. This provides a value for the background radiation, which must be subtracted from the measured activity of the sample being dated to get the activity attributable solely to that sample's <sup>14</sup> C. In addition, a sample with a standard activity is measured, to provide a baseline for comparison. ### Accelerator mass spectrometry AMS counts the atoms of <sup>14</sup> C and <sup>12</sup> C in a given sample, determining the <sup>14</sup> C/<sup>12</sup> C ratio directly. The sample, often in the form of graphite, is made to emit C<sup>−</sup> ions (carbon atoms with a single negative charge), which are injected into an accelerator. The ions are accelerated and passed through a stripper, which removes several electrons so that the ions emerge with a positive charge. The ions, which may have from 1 to 4 positive charges (C<sup>+</sup> to C<sup>4+</sup>), depending on the accelerator design, are then passed through a magnet that curves their path; the heavier ions are curved less than the lighter ones, so the different isotopes emerge as separate streams of ions. A particle detector then records the number of ions detected in the <sup>14</sup> C stream, but since the volume of <sup>12</sup> C (and <sup>13</sup> C, needed for calibration) is too great for individual ion detection, counts are determined by measuring the electric current created in a Faraday cup. The large positive charge induced by the stripper forces molecules such as <sup>13</sup> CH, which has a weight close enough to <sup>14</sup> C to interfere with the measurements, to dissociate, so they are not detected. Most AMS machines also measure the sample's δ<sup>13</sup>C, for use in calculating the sample's radiocarbon age. The use of AMS, as opposed to simpler forms of mass spectrometry, is necessary because of the need to distinguish the carbon isotopes from other atoms or molecules that are very close in mass, such as <sup>14</sup> N and <sup>13</sup> CH. As with beta counting, both blank samples and standard samples are used. Two different kinds of blank may be measured: a sample of dead carbon that has undergone no chemical processing, to detect any machine background, and a sample known as a process blank made from dead carbon that is processed into target material in exactly the same way as the sample which is being dated. Any <sup>14</sup> C signal from the machine background blank is likely to be caused either by beams of ions that have not followed the expected path inside the detector or by carbon hydrides such as <sup>12</sup> CH <sub>2</sub> or <sup>13</sup> CH. A <sup>14</sup> C signal from the process blank measures the amount of contamination introduced during the preparation of the sample. These measurements are used in the subsequent calculation of the age of the sample. ### Calculations The calculations to be performed on the measurements taken depend on the technology used, since beta counters measure the sample's radioactivity whereas AMS determines the ratio of the three different carbon isotopes in the sample. 
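Before any age is computed, the blank and standard measurements described above reduce to straightforward bookkeeping. The following is a minimal sketch with invented count rates; real laboratories apply many further corrections.

```python
def net_activity(counts_per_minute: float, blank_cpm: float,
                 grams_carbon: float) -> float:
    """Background-corrected specific activity, in counts per minute per
    gram of carbon; the blank (14C-free) sample supplies the background."""
    return (counts_per_minute - blank_cpm) / grams_carbon

def activity_ratio(sample_cpm: float, sample_g: float,
                   standard_cpm: float, standard_g: float,
                   blank_cpm: float) -> float:
    """Ratio of the sample's net specific activity to the standard's;
    this ratio feeds the age calculation described next."""
    return (net_activity(sample_cpm, blank_cpm, sample_g)
            / net_activity(standard_cpm, blank_cpm, standard_g))

# Invented count rates, purely to show the bookkeeping:
print(activity_ratio(sample_cpm=9.0, sample_g=1.0,
                     standard_cpm=14.5, standard_g=1.0,
                     blank_cpm=0.5))   # ~0.61
```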
To determine the age of a sample whose activity has been measured by beta counting, the ratio of its activity to the activity of the standard must be found. To determine this, a blank sample (of old, or dead, carbon) is measured, and a sample of known activity is measured. The additional samples allow errors such as background radiation and systematic errors in the laboratory setup to be detected and corrected for. The most common standard sample material is oxalic acid, such as the HOxII standard, 1,000 lb (450 kg) of which was prepared by the National Institute of Standards and Technology (NIST) in 1977 from French beet harvests. The results from AMS testing are in the form of ratios of <sup>12</sup> C, <sup>13</sup> C, and <sup>14</sup> C, which are used to calculate Fm, the "fraction modern". This is defined as the ratio between the <sup>14</sup> C/<sup>12</sup> C ratio in the sample and the <sup>14</sup> C/<sup>12</sup> C ratio in modern carbon, which is in turn defined as the <sup>14</sup> C/<sup>12</sup> C ratio that would have been measured in 1950 had there been no fossil fuel effect. Both beta counting and AMS results have to be corrected for fractionation. This is necessary because different materials of the same age, which because of fractionation have naturally different <sup>14</sup> C/<sup>12</sup> C ratios, will appear to be of different ages because the <sup>14</sup> C/<sup>12</sup> C ratio is taken as the indicator of age. To avoid this, all radiocarbon measurements are converted to the measurement that would have been seen had the sample been made of wood, which has a known δ<sup>13</sup> C value of −25‰. Once the corrected <sup>14</sup> C/<sup>12</sup> C ratio is known, a "radiocarbon age" is calculated using: $\text{Age} = - \ln (\text{Fm})\cdot 8033\text{ years}$ The calculation uses 8,033 years, the mean-life derived from Libby's half-life of 5,568 years, not 8,267 years, the mean-life derived from the more accurate modern value of 5,730 years. Libby's value for the half-life is used to maintain consistency with early radiocarbon testing results; calibration curves include a correction for this, so the accuracy of final reported calendar ages is assured. ### Errors and reliability The reliability of the results can be improved by lengthening the testing time. For example, if counting beta decays for 250 minutes is enough to give an error of ± 80 years, with 68% confidence, then doubling the counting time to 500 minutes will allow a sample with only half as much <sup>14</sup> C to be measured with the same error term of 80 years. Radiocarbon dating is generally limited to dating samples no more than 50,000 years old, as samples older than that have insufficient <sup>14</sup> C to be measurable. Older dates have been obtained by using special sample preparation techniques, large samples, and very long measurement times. These techniques can allow measurement of dates up to 60,000 and in some cases up to 75,000 years before the present. Radiocarbon dates are generally presented with a range of one standard deviation (usually represented by the Greek letter sigma as 1σ) on either side of the mean. However, a date range of 1σ represents only a 68% confidence level, so the true age of the object being measured may lie outside the range of dates quoted. This was demonstrated in 1970 by an experiment run by the British Museum radiocarbon laboratory, in which weekly measurements were taken on the same sample for six months. 
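The age relation given above is a one-line calculation; the sketch below also shows the difference the choice of mean-life makes, which is the discrepancy absorbed by the calibration curves. The constants are the ones quoted in the text; the fraction modern value is invented for illustration.

```python
import math

LIBBY_MEAN_LIFE = 8033.0    # from the 5,568-year Libby half-life (conventional)
MODERN_MEAN_LIFE = 8267.0   # from the more accurate 5,730-year half-life

def radiocarbon_age(fraction_modern: float,
                    mean_life: float = LIBBY_MEAN_LIFE) -> float:
    """Conventional radiocarbon age: Age = -ln(Fm) * 8,033 years."""
    return -math.log(fraction_modern) * mean_life

# A sample retaining half its modern 14C content:
print(round(radiocarbon_age(0.5)))                       # 5568, the conventional age
print(round(radiocarbon_age(0.5, MODERN_MEAN_LIFE)))     # 5730; the difference is
                                                         # absorbed by the calibration curves
```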
The results varied widely (though consistently with a normal distribution of errors in the measurements), and included multiple date ranges (of 1σ confidence) that did not overlap with each other. The measurements included one with a range from about 4,250 to about 4,390 years ago, and another with a range from about 4,520 to about 4,690. Errors in procedure can also lead to errors in the results. If 1% of the benzene in a modern reference sample accidentally evaporates, scintillation counting will give a radiocarbon age that is too young by about 80 years. ### Calibration The calculations given above produce dates in radiocarbon years: i.e. dates that represent the age the sample would be if the <sup>14</sup> C/<sup>12</sup> C ratio had been constant historically. Although Libby had pointed out as early as 1955 the possibility that this assumption was incorrect, it was not until discrepancies began to accumulate between measured ages and known historical dates for artefacts that it became clear that a correction would need to be applied to radiocarbon ages to obtain calendar dates. To produce a curve that can be used to relate calendar years to radiocarbon years, a sequence of securely dated samples is needed which can be tested to determine their radiocarbon age. The study of tree rings led to the first such sequence: individual pieces of wood show characteristic sequences of rings that vary in thickness because of environmental factors such as the amount of rainfall in a given year. These factors affect all trees in an area, so examining tree-ring sequences from old wood allows the identification of overlapping sequences. In this way, an uninterrupted sequence of tree rings can be extended far into the past. The first such published sequence, based on bristlecone pine tree rings, was created by Wesley Ferguson. Hans Suess used this data to publish the first calibration curve for radiocarbon dating in 1967. The curve showed two types of variation from the straight line: a long term fluctuation with a period of about 9,000 years, and a shorter-term variation, often referred to as "wiggles", with a period of decades. Suess said he drew the line showing the wiggles by "cosmic schwung", by which he meant that the variations were caused by extraterrestrial forces. It was unclear for some time whether the wiggles were real or not, but they are now well-established. These short term fluctuations in the calibration curve are now known as de Vries effects, after Hessel de Vries. A calibration curve is used by taking the radiocarbon date reported by a laboratory and reading across from that date on the vertical axis of the graph. The point where this horizontal line intersects the curve will give the calendar age of the sample on the horizontal axis. This is the reverse of the way the curve is constructed: a point on the graph is derived from a sample of known age, such as a tree ring; when it is tested, the resulting radiocarbon age gives a data point for the graph. Over the next thirty years many calibration curves were published using a variety of methods and statistical approaches. These were superseded by the IntCal series of curves, beginning with IntCal98, published in 1998, and updated in 2004, 2009, 2013, and 2020. The improvements to these curves are based on new data gathered from tree rings, varves, coral, plant macrofossils, speleothems, and foraminifera. 
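Reading a calendar age from a calibration curve, as described above, amounts to finding where a horizontal line at the measured radiocarbon age crosses the curve. The toy curve below is invented purely for illustration; real work uses the IntCal curves and calibration software such as OxCal or CALIB.

```python
# Toy calibration table of (calendar age BP, radiocarbon age BP) pairs.
# The numbers are invented solely to illustrate reading the curve.
CURVE = [(2300, 2150), (2350, 2240), (2400, 2210), (2450, 2260),
         (2500, 2350), (2550, 2430)]

def calendar_intercepts(radiocarbon_age: float) -> list:
    """Calendar ages at which a horizontal line drawn at the measured
    radiocarbon age crosses the piecewise-linear curve; a wiggly curve
    can produce more than one intercept."""
    hits = []
    for (cal0, rc0), (cal1, rc1) in zip(CURVE, CURVE[1:]):
        if rc0 != rc1 and min(rc0, rc1) <= radiocarbon_age <= max(rc0, rc1):
            frac = (radiocarbon_age - rc0) / (rc1 - rc0)
            hits.append(round(cal0 + frac * (cal1 - cal0)))
    return hits

print(calendar_intercepts(2230))   # three intercepts here, thanks to the wiggle
```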
The IntCal20 data includes separate curves for the northern and southern hemispheres, as they differ systematically because of the hemisphere effect. The southern curve (SHCal20) is based on independent data where possible and derived from the northern curve by adding the average offset for the southern hemisphere where no direct data was available. There is also a separate marine calibration curve, Marine20. For a set of samples forming a sequence with a known separation in time, the radiocarbon ages of the samples trace out a short segment of the calibration curve. The sequence can be compared to the calibration curve and the best match to the sequence established. This "wiggle-matching" technique can lead to more precise dating than is possible with individual radiocarbon dates. Wiggle-matching can be used in places where there is a plateau on the calibration curve, and hence can provide a much more accurate date than the intercept or probability methods are able to produce. The technique is not restricted to tree rings; for example, a stratified tephra sequence in New Zealand, believed to predate human colonization of the islands, has been dated to 1314 AD ± 12 years by wiggle-matching. The wiggles also mean that reading a date from a calibration curve can give more than one answer: this occurs when the curve wiggles up and down enough that the radiocarbon age intercepts the curve in more than one place, which may lead to a radiocarbon result being reported as two separate age ranges, corresponding to the two parts of the curve that the radiocarbon age intercepted. Bayesian statistical techniques can be applied when there are several radiocarbon dates to be calibrated. For example, if a series of radiocarbon dates is taken from different levels in a stratigraphic sequence, Bayesian analysis can be used to evaluate dates which are outliers and can calculate improved probability distributions, based on the prior information that the sequence should be ordered in time. When Bayesian analysis was introduced, its use was limited by the need to use mainframe computers to perform the calculations, but the technique has since been implemented on programs available for personal computers, such as OxCal. ### Reporting dates Several formats for citing radiocarbon results have been used since the first samples were dated. As of 2019, the standard format required by the journal Radiocarbon is as follows. Uncalibrated dates should be reported as "lab ID: age ± error BP", where: - the lab ID identifies the laboratory that tested the sample, and gives the sample ID - the age is the laboratory's determination of the age of the sample, in radiocarbon years - the error is the laboratory's estimate of the error in the age, at 1σ confidence. - 'BP' stands for "before present", referring to a reference date of 1950, so that "500 BP" means the year AD 1450. For example, the uncalibrated date "UtC-2020: 3510 ± 60 BP" indicates that the sample was tested by the Utrecht van der Graaff Laboratorium ("UtC"), where it has a sample number of "2020", and that the uncalibrated age is 3510 years before present, ± 60 years. Related forms are sometimes used: for example, "2.3 ka BP" means 2,300 radiocarbon years before present (i.e. 350 BC), and "<sup>14</sup> C yr BP" might be used to distinguish the uncalibrated date from a date derived from another dating method such as thermoluminescence. Calibrated <sup>14</sup> C dates are frequently reported as "cal BP", "cal BC", or "cal AD", again with 'BP' referring to the year 1950 as the zero date. 
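The "before present" convention above is easy to trip over, so a small helper may be useful. It uses the 1950 reference year stated above and follows the article's own examples (500 BP is AD 1450, 2,300 BP is 350 BC), glossing over the absence of a year zero in the BC/AD scheme.

```python
def bp_to_calendar(years_bp: int) -> str:
    """Convert a 'before present' age to a calendar year, with 1950 as
    the reference year; simplified, ignoring the lack of a year zero."""
    return f"AD {1950 - years_bp}" if years_bp < 1950 else f"{years_bp - 1950} BC"

print(bp_to_calendar(500))    # AD 1450
print(bp_to_calendar(2300))   # 350 BC
```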
The journal Radiocarbon gives two options for reporting calibrated dates. A common format is "cal date-range confidence", where: - the date-range is the range of dates corresponding to the given confidence level - the confidence indicates the confidence level for the given date range. For example, "cal 1220–1281 AD (1σ)" means a calibrated date for which the true date lies between AD 1220 and AD 1281, with a confidence level of '1 sigma', or approximately 68%. Calibrated dates can also be expressed as "BP" instead of using "BC" and "AD". The curve used to calibrate the results should be the latest available IntCal curve. Calibrated dates should also identify any programs, such as OxCal, used to perform the calibration. In addition, an article in Radiocarbon in 2014 about radiocarbon date reporting conventions recommends that information should be provided about sample treatment, including the sample material, pretreatment methods, and quality control measurements; that the citation to the software used for calibration should specify the version number and any options or models used; and that the calibrated date should be given with the associated probabilities for each range. ## Use in archaeology ### Interpretation A key concept in interpreting radiocarbon dates is archaeological association: what is the true relationship between two or more objects at an archaeological site? It frequently happens that a sample for radiocarbon dating can be taken directly from the object of interest, but there are also many cases where this is not possible. Metal grave goods, for example, cannot be radiocarbon dated, but they may be found in a grave with a coffin, charcoal, or other material which can be assumed to have been deposited at the same time. In these cases, a date for the coffin or charcoal is indicative of the date of deposition of the grave goods, because of the direct functional relationship between the two. There are also cases where there is no functional relationship, but the association is reasonably strong: for example, a layer of charcoal in a rubbish pit provides a date which has a relationship to the rubbish pit. Contamination is of particular concern when dating very old material obtained from archaeological excavations, and great care is needed in the specimen selection and preparation. In 2014, Thomas Higham and co-workers suggested that many of the dates published for Neanderthal artifacts are too recent because of contamination by "young carbon". As a tree grows, only the outermost tree ring exchanges carbon with its environment, so the age measured for a wood sample depends on where the sample is taken from. This means that radiocarbon dates on wood samples can be older than the date at which the tree was felled. In addition, if a piece of wood is used for multiple purposes, there may be a significant delay between the felling of the tree and the final use in the context in which it is found. This is often referred to as the old wood problem. One example is the Bronze Age trackway at Withy Bed Copse, in England; the trackway was built from wood that had clearly been worked for other purposes before being re-used in the trackway. Another example is driftwood, which may be used as construction material. It is not always possible to recognize re-use. 
Other materials can present the same problem: for example, bitumen is known to have been used by some Neolithic communities to waterproof baskets; the bitumen's radiocarbon age will be greater than is measurable by the laboratory, regardless of the actual age of the context, so testing the basket material will give a misleading age if care is not taken. A separate issue, related to re-use, is that of lengthy use, or delayed deposition. For example, a wooden object that remains in use for a lengthy period will have an apparent age greater than the actual age of the context in which it is deposited. ### Use outside archaeology Archaeology is not the only field to make use of radiocarbon dating. Radiocarbon dates can also be used in geology, sedimentology, and lake studies, for example. The ability to date minute samples using AMS has meant that palaeobotanists and palaeoclimatologists can use radiocarbon dating directly on pollen purified from sediment sequences, or on small quantities of plant material or charcoal. Dates on organic material recovered from strata of interest can be used to correlate strata in different locations that appear to be similar on geological grounds. Dating material from one location gives date information about the other location, and the dates are also used to place strata in the overall geological timeline. Radiocarbon is also used to date carbon released from ecosystems, particularly to monitor the release of old carbon that was previously stored in soils as a result of human disturbance or climate change. Recent advances in field collection techniques also allow the radiocarbon dating of methane and carbon dioxide, which are important greenhouse gases. ### Notable applications #### Pleistocene/Holocene boundary in Two Creeks Fossil Forest The Pleistocene is a geological epoch that began about 2.6 million years ago. The Holocene, the current geological epoch, begins about 11,700 years ago when the Pleistocene ends. Establishing the date of this boundary − which is defined by sharp climatic warming − as accurately as possible has been a goal of geologists for much of the 20th century. At Two Creeks, in Wisconsin, a fossil forest was discovered (Two Creeks Buried Forest State Natural Area), and subsequent research determined that the destruction of the forest was caused by the Valders ice readvance, the last southward movement of ice before the end of the Pleistocene in that area. Before the advent of radiocarbon dating, the fossilized trees had been dated by correlating sequences of annually deposited layers of sediment at Two Creeks with sequences in Scandinavia. This led to estimates that the trees were between 24,000 and 19,000 years old, and hence this was taken to be the date of the last advance of the Wisconsin glaciation before its final retreat marked the end of the Pleistocene in North America. In 1952 Libby published radiocarbon dates for several samples from the Two Creeks site and two similar sites nearby; the dates were averaged to 11,404 BP with a standard error of 350 years. This result was uncalibrated, as the need for calibration of radiocarbon ages was not yet understood. Further results over the next decade supported an average date of 11,350 BP, with the results thought to be the most accurate averaging 11,600 BP. There was initial resistance to these results on the part of Ernst Antevs, the palaeobotanist who had worked on the Scandinavian varve series, but his objections were eventually discounted by other geologists. 
In the 1990s samples were tested with AMS, yielding (uncalibrated) dates ranging from 11,640 BP to 11,800 BP, both with a standard error of 160 years. Subsequently, a sample from the fossil forest was used in an interlaboratory test, with results provided by over 70 laboratories. These tests produced a median age of 11,788 ± 8 BP (2σ confidence) which when calibrated gives a date range of 13,730 to 13,550 cal BP. The Two Creeks radiocarbon dates are now regarded as a key result in developing the modern understanding of North American glaciation at the end of the Pleistocene. #### Dead Sea Scrolls In 1947, scrolls were discovered in caves near the Dead Sea that proved to contain writing in Hebrew and Aramaic, most of which are thought to have been produced by the Essenes, a small Jewish sect. These scrolls are of great significance in the study of Biblical texts because many of them contain the earliest known version of books of the Hebrew bible. A sample of the linen wrapping from one of these scrolls, the Great Isaiah Scroll, was included in a 1955 analysis by Libby, with an estimated age of 1,917 ± 200 years. Based on an analysis of the writing style, palaeographic estimates were made of the age of 21 of the scrolls, and samples from most of these, along with other scrolls which had not been palaeographically dated, were tested by two AMS laboratories in the 1990s. The results ranged in age from the early 4th century BC to the mid 4th century AD. In all but two cases the scrolls were determined to be within 100 years of the palaeographically determined age. The Isaiah scroll was included in the testing and was found to have two possible date ranges at a 2σ confidence level, because of the shape of the calibration curve at that point: there is a 15% chance that it dates from 355 to 295 BC, and an 84% chance that it dates from 210 to 45 BC. Subsequently, these dates were criticized on the grounds that before the scrolls were tested, they had been treated with modern castor oil in order to make the writing easier to read; it was argued that failure to remove the castor oil sufficiently would have caused the dates to be too young. Multiple papers have been published both supporting and opposing the criticism. ### Impact Soon after the publication of Libby's 1949 paper in Science, universities around the world began establishing radiocarbon-dating laboratories, and by the end of the 1950s there were more than 20 active <sup>14</sup> C research laboratories. It quickly became apparent that the principles of radiocarbon dating were valid, despite certain discrepancies, the causes of which then remained unknown. The development of radiocarbon dating has had a profound impact on archaeology – often described as the "radiocarbon revolution". In the words of anthropologist R. E. Taylor, "<sup>14</sup> C data made a world prehistory possible by contributing a time scale that transcends local, regional and continental boundaries". It provides more accurate dating within sites than previous methods, which usually derived either from stratigraphy or from typologies (e.g. of stone tools or pottery); it also allows comparison and synchronization of events across great distances. The advent of radiocarbon dating may even have led to better field methods in archaeology since better data recording leads to a firmer association of objects with the samples to be tested. These improved field methods were sometimes motivated by attempts to prove that a <sup>14</sup> C date was incorrect. 
Taylor also suggests that the availability of definite date information freed archaeologists from the need to focus so much of their energy on determining the dates of their finds, and led to an expansion of the questions archaeologists were willing to research. For example, from the 1970s questions about the evolution of human behaviour were much more frequently seen in archaeology. The dating framework provided by radiocarbon led to a change in the prevailing view of how innovations spread through prehistoric Europe. Researchers had previously thought that many ideas spread by diffusion through the continent, or by invasions of peoples bringing new cultural ideas with them. As radiocarbon dates began to prove these ideas wrong in many instances, it became apparent that these innovations must sometimes have arisen locally. This has been described as a "second radiocarbon revolution", and with regard to British prehistory, archaeologist Richard Atkinson has characterized the impact of radiocarbon dating as "radical [...] therapy" for the "progressive disease of invasionism". More broadly, the success of radiocarbon dating stimulated interest in analytical and statistical approaches to archaeological data. Taylor has also described the impact of AMS, and the ability to obtain accurate measurements from very small samples, as ushering in a third radiocarbon revolution. Occasionally, radiocarbon dating techniques date an object of popular interest, for example, the Shroud of Turin, a piece of linen cloth thought by some to bear an image of Jesus Christ after his crucifixion. Three separate laboratories dated samples of linen from the Shroud in 1988; the results pointed to 14th-century origins, raising doubts about the shroud's authenticity as an alleged 1st-century relic. Researchers have studied other isotopes created by cosmic rays to determine if they could also be used to assist in dating objects of archaeological interest; such isotopes include <sup>3</sup> He, <sup>10</sup> Be, <sup>21</sup> Ne, <sup>26</sup> Al, and <sup>36</sup> Cl. With the development of AMS in the 1980s it became possible to measure these isotopes precisely enough for them to be the basis of useful dating techniques, which have been primarily applied to dating rocks. Naturally occurring radioactive isotopes can also form the basis of dating methods, as with potassium–argon dating, argon–argon dating, and uranium series dating. Other dating techniques of interest to archaeologists include thermoluminescence, optically stimulated luminescence, electron spin resonance, and fission track dating, as well as techniques that depend on annual bands or layers, such as dendrochronology, tephrochronology, and varve chronology. ## See also - 774–775 carbon-14 spike - Chronological dating, archaeological chronology - Absolute dating - Relative dating - Geochronology - Radiometric dating
16,425
Justus
1,165,057,143
7th-century missionary, Archbishop of Canterbury, and saint
[ "7th-century Christian clergy", "7th-century Christian saints", "7th-century English bishops", "7th-century archbishops", "7th-century deaths", "Archbishops of Canterbury", "Bishops of Rochester", "Clergy from Rome", "Gregorian mission", "Kentish saints", "Year of birth unknown", "Year of death uncertain" ]
Justus (died on 10 November between 627 and 631) was the fourth Archbishop of Canterbury. He was sent from Italy to England by Pope Gregory the Great, on a mission to Christianize the Anglo-Saxons from their native paganism, probably arriving with the second group of missionaries despatched in 601. Justus became the first Bishop of Rochester in 604, and attended a church council in Paris in 614. Following the death of King Æthelberht of Kent in 616, Justus was forced to flee to Gaul, but was reinstated in his diocese the following year. In 624 Justus became Archbishop of Canterbury, overseeing the despatch of missionaries to Northumbria. After his death he was revered as a saint, and had a shrine in St Augustine's Abbey, Canterbury. ## Arrival in Britain Justus was a member of the Gregorian mission sent to England by Pope Gregory I. Almost everything known about Justus and his career is derived from the early 8th-century Historia ecclesiastica gentis Anglorum of Bede. As Bede does not describe Justus' origins, nothing is known about him prior to his arrival in England. He probably arrived in England with the second group of missionaries, sent at the request of Augustine of Canterbury in 601. Some modern writers describe Justus as one of the original missionaries who arrived with Augustine in 597, but Bede believed that Justus came in the second group. The second group included Mellitus, who later became Bishop of London and Archbishop of Canterbury. If Justus was a member of the second group of missionaries, then he arrived with a gift of books and "all things which were needed for worship and the ministry of the Church". A 15th-century Canterbury chronicler, Thomas of Elmham, claimed that there were a number of books brought to England by that second group still at Canterbury in his day, although he did not identify them. An investigation of extant Canterbury manuscripts shows that one possible survivor is the St. Augustine Gospels, now in Cambridge, Corpus Christi College, Manuscript (MS) 286. ## Bishop of Rochester Augustine consecrated Justus as a bishop in 604, over a province including the Kentish town of Rochester. The historian Nicholas Brooks argues that the choice of Rochester was probably not because it had been a Roman-era bishopric, but rather because of its importance in the politics of the time. Although the town was small, with just one street, it was at the junction of Watling Street and the estuary of the Medway, and was thus a fortified town. Because Justus was probably not a monk (he was not called that by Bede), his cathedral clergy was very likely non-monastic too. A charter purporting to be from King Æthelberht, dated 28 April 604, survives in the Textus Roffensis, as well as a copy based on the Textus in the 14th-century Liber Temporalium. Written mostly in Latin but using an Old English boundary clause, the charter records a grant of land near the city of Rochester to Justus' church. Among the witnesses is Laurence, Augustine's future successor, but not Augustine himself. The text turns to two different addressees. First, Æthelberht is made to admonish his son Eadbald, who had been established as a sub-ruler in the region of Rochester. The grant itself is addressed directly to Saint Andrew, the patron saint of the church, a usage parallelled by other charters in the same archive. Historian Wilhelm Levison, writing in 1946, was sceptical about the authenticity of this charter. 
In particular, he felt that the two separate addresses were incongruous and suggested that the first address, occurring before the preamble, may have been inserted by someone familiar with Bede to echo Eadbald's future conversion (see below). A more recent and more positive appraisal by John Morris argues that the charter and its witness list are authentic because it incorporates titles and phraseology that had fallen out of use by 800. Æthelberht built Justus a cathedral church in Rochester; the foundations of a nave and chancel partly underneath the present-day Rochester Cathedral may date from that time. What remains of the foundations of an early rectangular building near the southern part of the current cathedral might also be contemporary with Justus or may be part of a Roman building. Together with Mellitus, the Bishop of London, Justus signed a letter written by Archbishop Laurence of Canterbury to the Irish bishops urging the native church to adopt the Roman method of calculating the date of Easter. This letter also mentioned the fact that Irish missionaries, such as Dagan, had refused to share meals with the missionaries. Although the letter has not survived, Bede quoted from parts of it. In 614, Justus attended the Council of Paris, held by the Frankish king, Chlothar II. It is unclear why Justus and Peter, the abbot of Sts Peter and Paul in Canterbury, were present. It may have been just chance, but historian James Campbell has suggested that Chlothar summoned clergy from Britain to attend in an attempt to assert overlordship over Kent. The historian N. J. Higham offers another explanation for their attendance, arguing that Æthelberht sent the pair to the council because of shifts in Frankish policy towards the Kentish kingdom, which threatened Kentish independence, and that the two clergymen were sent to negotiate a compromise with Chlothar. A pagan backlash against Christianity followed Æthelberht's death in 616, forcing Justus and Mellitus to flee to Gaul. The pair probably took refuge with Chlothar, hoping that the Frankish king would intervene and restore them to their sees, and by 617 Justus had been reinstalled in his bishopric by the new king. Mellitus also returned to England, but the prevailing pagan mood did not allow him to return to London; after Laurence's death, Mellitus became Archbishop of Canterbury. According to Bede, Justus received letters of encouragement from Pope Boniface V (619–625), as did Mellitus, although Bede does not record the actual letters. The historian J. M. Wallace-Hadrill assumes that both letters were general statements of encouragement to the missionaries. ## Archbishop Justus became Archbishop of Canterbury in 624, receiving his pallium—the symbol of the jurisdiction entrusted to archbishops—from Pope Boniface V, following which Justus consecrated Romanus as his successor at Rochester. Boniface also gave Justus a letter congratulating him on the conversion of King "Aduluald" (probably King Eadbald of Kent), a letter which is included in Bede's Historia ecclesiastica gentis Anglorum. Bede's account of Eadbald's conversion states that it was Laurence, Justus' predecessor at Canterbury, who converted the King to Christianity, but the historian D. P. Kirby argues that the letter's reference to Eadbald makes it likely that it was Justus. Other historians, including Barbara Yorke and Henry Mayr-Harting, conclude that Bede's account is correct, and that Eadbald was converted by Laurence. 
Yorke argues that there were two kings of Kent during Eadbald's reign, Eadbald and Æthelwald, and that Æthelwald was the "Aduluald" referred to by Boniface. She further argues that Justus converted Æthelwald back to Christianity after Æthelberht's death. Justus consecrated Paulinus as the first Bishop of York, before the latter accompanied Æthelburg of Kent to Northumbria for her marriage to King Edwin of Northumbria. Bede records Justus as having died on 10 November, but does not give a year, although it is likely to have been between 627 and 631. After his death, Justus was regarded as a saint, and was given a feast day of 10 November. The ninth-century Stowe Missal commemorates his feast day, along with Mellitus and Laurence. In the 1090s, his remains were translated, or ritually moved, to a shrine beside the high altar of St Augustine's Abbey in Canterbury. At about the same time, a Life was written about him by Goscelin of Saint-Bertin, as well as a poem by Reginald of Canterbury. Other material from Thomas of Elmham, Gervase of Canterbury, and William of Malmesbury, later medieval chroniclers, adds little to Bede's account of Justus' life. ## See also - List of members of the Gregorian mission
1,237,273
Battle of Khafji
1,173,585,020
1991 battle of the Gulf War
[ "1991 in Saudi Arabia", "Battles in 1991", "Battles involving Iraq", "Battles involving Kuwait", "Battles involving Morocco", "Battles involving Qatar", "Battles involving Saudi Arabia", "Battles involving Senegal", "Battles involving the United Kingdom", "Battles involving the United States", "Battles of the Gulf War", "Eastern Province, Saudi Arabia", "February 1991 events in Asia", "Friendly fire incidents", "Invasions by Iraq", "Invasions of Saudi Arabia", "Iraq–Saudi Arabia relations", "January 1991 events in Asia", "Tank battles involving the United States", "United States Marine Corps in the 20th century", "Urban warfare" ]
The Battle of Khafji was the first major ground engagement of the Gulf War. It took place in and around the Saudi Arabian city of Khafji, from 29 January to 1 February 1991. Iraqi leader Saddam Hussein, who had already tried and failed to draw Coalition forces into costly ground engagements by shelling Saudi Arabian positions and oil storage tanks and firing Scud surface-to-surface missiles at Israel, ordered the invasion of Saudi Arabia from southern Kuwait. The 1st and 5th Mechanized Divisions and 3rd Armored Division were ordered to conduct a multi-pronged invasion toward Khafji, engaging Saudi Arabian, Kuwaiti, and U.S. forces along the coastline, with a supporting Iraqi commando force ordered to infiltrate further south by sea and harass the Coalition's rear. These three divisions, which had been heavily damaged by Coalition aircraft in the preceding days, attacked on 29 January. Most of their attacks were repulsed by U.S. Marine Corps and U.S. Army forces, but one of the Iraqi columns occupied Khafji on the night of 29–30 January. Between 30 January and 1 February, two Saudi Arabian National Guard battalions and two Qatari tank companies attempted to retake control of the city, aided by Coalition aircraft and U.S. artillery. By 1 February, the city had been recaptured at the cost of 43 Coalition servicemen dead and 52 wounded. Iraqi Army fatalities numbered between 60 and 300, while an estimated 400 were captured as prisoners of war. Although the capture of Khafji was initially a propaganda victory for the Ba'athist Iraqi regime, the city was swiftly retaken by Coalition forces. The battle demonstrated the ability of air power to support ground forces. ## Background On 2 August 1990, the Iraqi Army invaded and occupied the neighboring state of Kuwait. The invasion, which followed the inconclusive Iran–Iraq War and three decades of political conflict with Kuwait, offered Saddam Hussein the opportunity to deflect political dissent at home and add Kuwait's oil resources to Iraq's own, a boon in a time of declining petroleum prices. In response, the United Nations began to pass a series of resolutions demanding the withdrawal of Iraqi forces from Kuwait. Afraid that Saudi Arabia would be invaded next, the Saudi Arabian government requested immediate military aid. As a result, the United States began marshalling forces from a variety of nations, styled the Coalition, on the Arabian peninsula. Initially, Saddam Hussein attempted to deter Coalition military action by threatening Kuwait's and Iraq's petroleum production and export. In December 1990, Iraq experimented with the use of explosives to destroy wellheads in the area of the Ahmadi loading complex, developing its capability to destroy Kuwait's petroleum infrastructure on a large scale. On 16 January, Iraqi artillery destroyed an oil storage tank in Khafji, Saudi Arabia, and on 19 January the pumps at the Ahmadi loading complex were opened, pouring crude oil into the Persian Gulf. The oil flowed into the sea at a rate of 200,000 barrels a day, becoming one of the worst ecological disasters to that date. Despite these Iraqi threats, the Coalition launched a 38-day aerial campaign on 17 January 1991. Flying an estimated 2,000 sorties a day, Coalition aircraft rapidly crippled the Iraqi air defense systems and effectively destroyed the Iraqi Air Force, whose daily sortie rate plummeted from a prewar level of an estimated 200 per day to almost none within days of the campaign's start. 
On the third day of the campaign, many Iraqi pilots fled across the Iranian border in their aircraft rather than have them destroyed. The air campaign also targeted command-and-control sites, bridges, railroads, and petroleum storage facilities. Saddam Hussein, who is believed to have said, "The air force has never decided a war," nevertheless worried that the air campaign would erode Iraq's national morale. The Iraqi leader also believed that the United States would not be willing to lose many troops in action, and therefore sought to draw Coalition ground troops into a decisive battle. In an attempt to provoke a ground battle, he directed Iraqi forces to launch Scud missiles against Israel, while continuing to threaten the destruction of oilfields in Kuwait. These efforts were unsuccessful in provoking a large ground battle, so Saddam Hussein decided to launch a limited offensive into Saudi Arabia with the aim of inflicting heavy losses on the Coalition armies. As the air campaign continued, the Coalition's expectations of an Iraqi offensive decreased. As a result, the United States redeployed the XVIII Airborne Corps and the VII Corps 480 kilometers (300 mi) to the west. The Coalition's leadership believed that should an Iraqi force go on the offensive, it would be launched from the al-Wafra oil fields in southern Kuwait.

## Order of battle

The Iraqi Army had between 350,000 and 500,000 soldiers in theater, organized into 51 divisions, including eight Republican Guard divisions. Republican Guard units normally received the newest equipment; for example, most of the estimated 1,000 T-72 tanks in the Iraqi Army on the eve of the war were in Republican Guard divisions. The Iraqi Army in the Kuwaiti Theater of Operations (KTO) also included nine heavy divisions, composed mostly of professional soldiers, but with weapons of a generally lesser grade than those issued to the Republican Guard. Most non-Republican Guard armored units had older tank designs, mainly the T-55 or its Chinese equivalents, the Type 59 and Type 69. The remaining 34 divisions were composed of poorly trained conscripts. These divisions were deployed to channel the Coalition's forces through a number of break points along the front, allowing the Iraqi Army's heavy divisions and the Republican Guard units to isolate them and counterattack. However, the Iraqis left their western flank open, failing to account for tactics made possible by the Global Positioning System and other new technologies. In Saudi Arabia, the Coalition originally deployed over 200,000 soldiers, 750 aircraft and 1,200 tanks. This quickly grew to 3,600 tanks and over 600,000 personnel, of whom over 500,000 were from the United States.

### Iraqi forces

Earmarked for the offensive into Saudi Arabia were the Iraqi Third Corps, the 1st Mechanized Division from Fourth Corps and a number of commando units. Third Corps, commanded by Major General Salah Aboud Mahmoud (who would also command the overall offensive), had the 3rd Armored Division and 5th Mechanized Division, as well as a number of infantry divisions. Fourth Corps' commander was Major General Ayad Khalil Zaki. The 3rd Armored Division had a number of T-72 tanks, the only non-Republican Guard force to have them, while the other armored battalions had T-62s and T-55s, a few of which had been fitted with an Iraqi appliqué armor similar to the Soviet BDD "brow" laminate armor. During the Battle of Khafji, these upgraded T-55s survived impacts from MILAN anti-tank missiles.
These divisions also had infantry fighting vehicles such as the BMP-1, scout vehicles such as the BRDM-2, and several types of artillery. Also deployed along this portion of the front, though not chosen to participate in the invasion, were five infantry divisions that were under orders to remain in their defensive positions along the border. U.S. Marine Corps reconnaissance estimated that the Iraqi Army had amassed around 60,000 troops across the border, near the Kuwaiti town of Wafra, in as many as 5 or 6 divisions. Infantry divisions normally consisted of three brigades with an attached commando unit, although some infantry divisions could have up to eight brigades; however, most infantry divisions along the border were understrength, primarily due to desertion. Armored and mechanized divisions normally made use of three brigades, with each brigade having up to four combat battalions; depending on the division type, these were generally a three-to-one mix, with either three mechanized battalions and one armored battalion, or vice versa. Given the size of the forces deployed across the border, it is thought that the Iraqi Army planned to continue the offensive after the successful capture of Khafji in order to seize the valuable oil fields at Dammam.

The attack was planned as a four-pronged offensive. The 1st Mechanized Division would pass through the 7th and 14th Infantry Divisions to protect the flank of the 3rd Armored Division, which would provide a blocking force west of Khafji while the 5th Mechanized Division took the town. The 1st Mechanized and 3rd Armored divisions would then retire to Kuwait, while the 5th Mechanized Division would wait until the Coalition launched a counteroffensive. The principal objectives were to inflict heavy casualties on the attacking Coalition soldiers and to take prisoners of war, who Saddam Hussein theorized would be an excellent bargaining tool with the Coalition. As the units moved to the Saudi Arabian border, many were attacked by Coalition aircraft. Around the Al-Wafrah forest, about 1,000 Iraqi armored fighting vehicles were attacked by Harrier aircraft with Rockeye cluster bombs. Another Iraqi convoy of armored vehicles was hit by A-10s, which destroyed the first and last vehicles before systematically attacking the stranded remainder. Such air raids prevented the majority of the Iraqi troops deployed for the offensive from taking part in it.

### Coalition forces

During the buildup of forces, the U.S. had built observation posts along the Kuwaiti-Saudi Arabian border to gather intelligence on Iraqi forces. These were manned by U.S. Navy SEALs, U.S. Marine Corps Force Reconnaissance and Army Special Forces personnel. Observation post 8 was farthest to the east, on the coast, and another seven observation posts were positioned every 20 km (12 mi) up to the "heel", the geographic panhandle of southernmost Kuwait. Observation posts 8 and 7 overlooked the coastal highway that ran to Khafji, considered the most likely invasion route toward the city. The 1st Marine Division had three companies positioned at observation posts 4, 5 and 6 (Task Force Shepard), while the 2nd Marine Division's 2nd Light Armored Infantry Battalion set up a screen between observation post 1 and the Al-Wafrah oil fields. The U.S. Army's 2nd Armored Division provided its 1st Brigade to give the Marines some much-needed armored support.
The Saudi Arabians gave responsibility for the defense of Khafji to the 2nd Saudi Arabian National Guard Brigade, attached to Task Force Abu Bakr. The 5th Battalion of the 2nd Saudi Arabian National Guard Brigade set up a screen north and west of Khafji, under observation post 7. At the time, a Saudi Arabian National Guard brigade could have up to four motorized battalions, each with three line companies. The brigade had a nominal strength of an estimated 5,000 soldiers. The Saudi Arabians also deployed the Tariq Task Force, composed of Saudi Arabian marines, a Moroccan mechanized infantry battalion, and two Senegalese infantry companies. Two further task forces, Othman and Omar, consisted of two Mechanized Ministry of Defense and Aviation Brigades, providing screens about 3 km (1.9 mi) south of the border. The road south of Khafji was covered by one Saudi Arabian National Guard battalion supported by a battalion of Qatari tanks. The country's main defenses were placed 20 km (12 mi) south of the screen. The majority of the Arab contingent was led by General Khaled bin Sultan. The forces around Khafji were organized into Joint Forces Command-East, while Joint Forces Command-North defended the border between observation post 1 and the Kuwaiti-Iraqi border.

## Battle

On 27 January 1991, Iraqi President Saddam Hussein met in Basra with the two Iraqi army corps commanders who were to lead the operation, and Major General Salah Mahmoud told him that Khafji would be his by 30 January. During his return trip to Baghdad, Saddam Hussein's convoy was attacked by Coalition aircraft; the Iraqi leader escaped unscathed. Throughout 28 January, the Coalition received a number of warnings suggesting an impending Iraqi offensive. The Coalition was flying two brand-new E-8A Joint Surveillance Target Attack Radar System (Joint STARS) aircraft, which picked up the deployment and movement of Iraqi forces to the area opposite Khafji. Observation posts 2, 7 and 8 also detected heavy Iraqi reconnoitering along the border, and their small teams of air-naval gunfire liaison Marines called in air and artillery strikes throughout the day. Lieutenant Colonel Richard Barry, commander of the forward headquarters of the 1st Surveillance, Reconnaissance and Intelligence Group, sent warnings about an impending attack to Central Command. CentCom leaders, however, were too preoccupied with the air campaign to heed them, and the Iraqi operation came as a surprise.

### Beginning of Iraqi offensive: 29 January

The Iraqi offensive began on the night of 29 January, when approximately 2,000 soldiers in several hundred armored fighting vehicles moved south. Post-war analysis by the US Air Force's Air University suggests Iraq planned to use the 3rd Armored Division and 5th Mechanized Division to make the actual attack on Khafji, with the 1st Mechanized Division assigned to protect the attacking force's western flank. The Iraqi incursion into Saudi Arabia consisted of three columns, mostly made up of T-62 tanks and armored personnel carriers (APCs). The Gulf War's first ground engagement occurred near observation post 4 (OP-4), which was based at the Al-Zabr police building. Elements of the Iraqi 6th Armored Brigade, ordered to take the heights above Al-Zabr, engaged the Coalition units there. At 20:00 hours, U.S. Marines at the observation post, who had noticed large groups of armored vehicles through their night vision devices, attempted to contact battalion headquarters but received no response.
Since earlier contact had presented no problems, there was a strong presumption that the reconnaissance platoon's radios were being jammed. Using runners, Lieutenant Ross alerted his platoon and continued trying to get through to inform higher headquarters and Company D of the oncoming Iraqi force. Contact was not established until 20:30 hours, which prompted Task Force Shepard to respond to the threat. Coalition soldiers at observation post 4 were lightly armed, and could only respond with TOW anti-tank missiles before calling in air support. Air support arrived by 21:30 in the form of several F-15Es and F-16Cs, four A-10 tank killers and three AC-130 gunships, which intervened in a heavy firefight between Iraqi and Coalition ground forces at OP-4. The reconnaissance platoon stationed at OP-4 was the first to come under attack; its withdrawal from the engagement was covered by fire from another company. The attempt by the soldiers stationed at OP-4 to fend off or delay the Iraqi advance cost them several casualties, and in the face of a heavy Iraqi response the platoon was forced to retire south by order of its commanding officer.

To cover the withdrawal, the company's platoon of LAV-25s and LAV-ATs (anti-tank variants) moved to engage the Iraqi force. After receiving permission, one of the anti-tank vehicles opened fire on what it believed was an Iraqi tank. Instead, the missile destroyed a friendly LAV-AT a few hundred meters in front of it. Despite this loss, the platoon continued forward and soon opened fire on the Iraqi tanks with the LAV-25s' autocannons. The fire could not penetrate the tanks' armor, but did damage their optics and prevented the tanks from fighting back effectively. Soon thereafter, a number of A-10 ground-attack aircraft arrived but found it difficult to pinpoint enemy targets and began dropping flares to illuminate the zone. One of these flares landed on a friendly vehicle, and although the vehicle radioed in its position, it was hit by an AGM-65 Maverick air-to-ground missile that killed the entire crew except for the driver. Following the incident, the company was withdrawn and the remaining vehicles reorganized into another nearby company. With observation post 4 cleared, the Iraqi 6th Armored Brigade withdrew over the border to Al-Wafrah under heavy fire from Coalition aircraft. Coalition forces had lost 11 troops to friendly fire and none to enemy action.

While the events at observation post 4 were unfolding, the Iraqi 5th Mechanized Division crossed the Saudi Arabian border near observation post 1. A Company of the 2nd Light Armored Infantry Battalion, which was screening the Iraqi unit, reported a column of 60–100 BMPs. The column was engaged by Coalition A-10s and Harrier jump jets. This was then followed by another column with an estimated 29 tanks. One of the column's T-62 tanks was engaged by an anti-tank missile and destroyed. Coalition air support, provided by A-10s and F-16s, engaged the Iraqi drive through observation post 1 and ultimately repulsed the attack back over the Kuwaiti border. Aircraft continued to engage the columns throughout the night and into the next morning. Another column of Iraqi tanks, approaching observation post 2, was engaged by aircraft and also repulsed that night. An additional Iraqi column crossed the Saudi Arabian border farther to the east, although still along the coast, towards the city of Khafji. These Iraqi tanks were screened by the 5th Mechanized Battalion of the 2nd Saudi Arabian National Guard Brigade.
This battalion withdrew when it came under heavy fire, as it had been ordered not to engage the Iraqi column. Elements of the 8th and 10th Saudi Arabian National Guard Brigades also conducted similar screening operations. Because of the order not to engage, the road to Khafji was left open. At one point, Iraqi T-55s of another column rolled up to the Saudi Arabian border, signaling that they intended to surrender. As they were approached by Saudi Arabian troops, they reversed their turrets and opened fire. This prompted fire from a nearby AC-130, which destroyed 13 vehicles. Nevertheless, the Iraqi advance towards Khafji continued in this sector, despite repeated attacks from an AC-130. Attempts by the Saudi Arabian commanders to call in additional air strikes on the advancing Iraqi column failed when the requested heavy air support never arrived. Khafji was occupied by approximately 00:30 on 30 January, trapping two six-man reconnaissance teams from the 1st Marine Division in the city. The teams occupied two apartment buildings in the southern sector of the city and called artillery fire on their position to persuade the Iraqis to call off a search of the area. Throughout the night, Coalition air support composed of helicopters and fixed-wing aircraft continued to engage Iraqi tanks and artillery.

### Initial response: 30 January

Distressed by the occupation of Khafji, Saudi Arabian commander General Khaled bin Sultan appealed to U.S. General Norman Schwarzkopf for an immediate air campaign against Iraqi forces in and around the city. However, the request was turned down because the buildings would make it difficult for aircraft to spot targets without getting too close. It was instead decided that the city would be retaken by Arab ground forces. The task fell to the 2nd Saudi Arabian National Guard Brigade's 7th Battalion, composed of Saudi Arabian infantry with V-150 armored cars, with two Qatari tank companies attached to the task force. These were supported by U.S. Army Special Forces and Marine Reconnaissance personnel. The force was put under the command of Saudi Arabian Lieutenant Colonel Matar and moved out by 17:00 hours. It met up with elements of the U.S. 3rd Marine Regiment south of Khafji and was ordered to attack the city directly. A platoon of Iraqi T-55s attacked south of the city, leading to the destruction of three T-55s by Qatari AMX-30s and the capture of a fourth Iraqi tank. Because the task force lacked coordinated artillery support of its own, artillery fire was provided by the 10th Marine Regiment. An initial attack on the city was called off after the Iraqi occupants opened up with heavy fire, prompting the Saudi Arabians to reinforce the 7th Battalion with two more companies from adjacent Saudi Arabian units. The attempt to retake the city had been preceded by 15 minutes of preparatory fire from U.S. Marine artillery; Iraqi fire nevertheless destroyed one Saudi Arabian V-150. Meanwhile, the 2nd Saudi Arabian National Guard Brigade's 5th Battalion moved north of Khafji to block Iraqi reinforcements attempting to reach the city. This unit was further bolstered by the 8th Ministry of Defense and Aviation Brigade, and heavily aided by Coalition air support. Although fear of friendly fire forced the 8th Ministry of Defense and Aviation Brigade to pull back the following morning, Coalition aircraft successfully hindered Iraqi attempts to move more soldiers down to Khafji and caused large numbers of Iraqi troops to surrender to Saudi Arabian forces.

That night, two U.S. Army heavy equipment transporters entered the city of Khafji, apparently lost, and were fired upon by Iraqi troops. Although one truck managed to turn around and escape, the two drivers of the second truck were wounded and captured. This led to a rescue mission organized by 3rd Battalion, 3rd Marine Regiment, which sent a force of 30 men to extract the two wounded drivers. Although encountering no major opposition, they did not find the two drivers, who had by this time been taken prisoner. The Marines did find a burnt-out Qatari AMX-30 with its dead crew. In the early morning hours, an AC-130 providing overwatch remained on station past sunrise despite the significant risk to its crew. It was shot down by an Iraqi surface-to-air missile (SAM), killing the aircraft's crew of 14. The interdiction by Coalition aircraft and Saudi Arabian and Qatari ground forces was having an effect on the occupying Iraqi troops. Referring to Saddam Hussein's naming of the ground engagement as the "mother of all battles", Iraqi General Salah Mahmoud radioed a request to withdraw, stating, "The mother was killing her children." Since the beginning of the battle, Coalition aircraft had flown at least 350 sorties against Iraqi units in the area, and on the night of 30–31 January Coalition air support also began to attack units of the Iraqi Third Corps assembled on the Saudi Arabian border.

### Recapture of Khafji: 31 January – 1 February

On 31 January, the effort to retake the city began anew. The attack was launched at 08:30 hours and was met by heavy but mostly inaccurate Iraqi fire; however, three Saudi Arabian V-150 armored cars were knocked out by RPG-7s at close range. The 8th Battalion of the Saudi Arabian brigade was ordered to deploy to the city by 10:00 hours, while the 5th Battalion to the north engaged another column of Iraqi tanks attempting to reach the city. The latter engagement led to the destruction of around 13 Iraqi tanks and armored personnel carriers and the capture of 6 more vehicles and 116 Iraqi soldiers, costing the Saudi Arabian battalion two dead and two wounded. The 8th Battalion engaged the city from the northeast, linking up with the 7th Battalion. These units cleared the southern portion of the city until the 7th Battalion withdrew south to rest and rearm at 18:30 hours, while the 8th remained in Khafji. The two Qatari tank companies, with U.S. Marine artillery and air support, moved north of the city to block Iraqi reinforcements. The 8th continued clearing buildings, and by the time the 7th had withdrawn to the south the Saudi Arabians had lost approximately 18 dead and 50 wounded, as well as seven V-150 vehicles. Coalition aircraft continued to provide heavy support throughout the day and night. An Iraqi veteran of the Iran–Iraq War later remarked that Coalition airpower "imposed more damage on his brigade in half an hour than it had sustained in eight years of fighting against the Iranians." During the battle, an Iraqi amphibious force was sent to land on the coast and move into Khafji. As the boats made their way through the Persian Gulf towards Khafji, U.S. and British aircraft caught them in the open and destroyed over 90% of the Iraqi amphibious force. The Saudi Arabian and Kuwaiti units renewed operations the following day. Two Iraqi companies, with about 20 armored vehicles, remained in the city and had not attempted to break out during the night.
While the Saudi Arabian 8th Battalion continued operations in the southern portion of the city, the 7th Battalion began to clear the northern sector. Iraqi resistance was sporadic and most Iraqi soldiers surrendered on sight; as a result, the city was recaptured on 1 February 1991.

## Aftermath

During the battle, Coalition forces incurred 43 fatalities and 52 wounded. These included 25 Americans killed: 11 by friendly fire and 14 airmen who died when their AC-130 was shot down by an Iraqi SAM. The U.S. also had two soldiers wounded, and another two soldiers were captured in Khafji. Saudi Arabian casualties totaled 18 killed and 50 wounded. Two Saudi main battle tanks and ten lightly armored V-150s were knocked out. Most of the V-150s were knocked out by RPG-7 fire in close-range fighting inside the town of Khafji; one of the two catastrophic kills was caused by a 100 mm main-gun round from a T-55. Iraq listed its casualties as 71 dead, 148 wounded and 702 missing. U.S. sources present at the battle claim that 300 Iraqis lost their lives and at least 90 vehicles were destroyed. Another source suggests that 60 Iraqi soldiers were killed and at least 400 taken prisoner, while no fewer than 80 armored vehicles were knocked out; however, these casualties are attributed to the fighting both inside and directly north of Khafji. Whatever the exact casualties, the majority of the three Iraqi mechanized and armored divisions involved had been destroyed.

The Iraqi capture of Khafji was a major propaganda victory for Iraq: on 30 January Iraqi radio claimed that Iraq had "expelled Americans from the Arab territory". For many in the Arab world, the Battle of Khafji was seen as an Iraqi victory, and Hussein made every possible effort to turn the battle into a political victory. On the other side, confidence within the United States Armed Forces in the abilities of the Saudi Arabian and Kuwaiti armies increased as the battle progressed. After Khafji, the Coalition's leadership began to sense that the Iraqi Army was a "hollow force", and the battle gave them an impression of the degree of resistance they would face during the Coalition's ground offensive, which would begin later that month. The Saudi Arabian government, having successfully defended its territory, also regarded the battle as a major propaganda victory. Despite the success of the engagements between 29 January and 1 February, the Coalition did not launch its main offensive into Kuwait and Iraq until the night of 24–25 February; the ground campaign that followed lasted roughly 100 hours. The Battle of Khafji served as a modern example of the ability of air power to support ground forces. It offered the Coalition an indication of the manner in which Operation Desert Storm would be fought, but also foreshadowed the friendly-fire losses that accounted for nearly half of the U.S. dead in the battle.
623,013
Tom Simpson
1,166,399,072
British cyclist (1937–1967)
[ "1937 births", "1967 deaths", "BBC Sports Personality of the Year winners", "British Vuelta a España stage winners", "British male cyclists", "Burials in Nottinghamshire", "Commonwealth Games medallists in cycling", "Commonwealth Games silver medallists for England", "Cyclists at the 1956 Summer Olympics", "Cyclists at the 1958 British Empire and Commonwealth Games", "Cyclists who died while racing", "Doping cases in cycling", "Drug-related deaths in France", "English male cyclists", "English sportspeople in doping cases", "Filmed deaths in sports", "Medalists at the 1956 Summer Olympics", "Medallists at the 1958 British Empire and Commonwealth Games", "Olympic bronze medallists for Great Britain", "Olympic cyclists for Great Britain", "Olympic medalists in cycling", "People from Haswell, County Durham", "Sport deaths in France", "Sportspeople from County Durham", "Sportspeople from Nottinghamshire", "UCI Road World Champions (elite men)" ]
Thomas Simpson (30 November 1937 – 13 July 1967) was one of Britain's most successful professional cyclists. He was born in Haswell, County Durham, and later moved to Harworth, Nottinghamshire. Simpson began road cycling as a teenager before taking up track cycling, specialising in pursuit races. He won a bronze medal for track cycling at the 1956 Summer Olympics and a silver at the 1958 British Empire and Commonwealth Games. In 1959, at age 21, Simpson was signed by the French professional road-racing team . He advanced to their first team () the following year, and won the 1961 Tour of Flanders. Simpson then joined ; in the 1962 Tour de France he became the first British rider to wear the yellow jersey, finishing sixth overall. In 1963 Simpson moved to , winning Bordeaux–Paris that year and the 1964 Milan–San Remo. In 1965 he became Britain's first professional world road race champion and won the Giro di Lombardia; these successes made him the BBC Sports Personality of the Year, the first cyclist to win the award. Injuries hampered much of Simpson's 1966 season. In 1967 he won the general classification of Paris–Nice and two stages of that year's Vuelta a España. In the thirteenth stage of the 1967 Tour de France, Simpson collapsed and died during the ascent of Mont Ventoux. He was 29 years old. The post-mortem examination found that he had mixed amphetamines and alcohol; this diuretic combination proved fatal when combined with the heat, the hard climb of the Ventoux and a stomach complaint. A memorial near where he died has become a place of pilgrimage for many cyclists. Simpson was known to have taken performance-enhancing drugs during his career, when no doping controls existed. He is held in high esteem by many fans for his character and will to win.

## Early life and amateur career

### Childhood and club racing

Simpson was born on 30 November 1937 in Haswell, County Durham, the youngest of six children of coal miner Tom Simpson and his wife Alice (née Cheetham). His father had been a semi-professional sprinter in athletics. The family lived modestly in a small terraced house until 1943, when his parents took charge of the village's working men's club and lived above it. In 1950 the Simpsons moved to Harworth on the Nottinghamshire–Yorkshire border, where young Simpson's maternal aunt lived; new coalfields were opening, with employment opportunities for him and his older brother Harry, by now the only children left at home. Simpson rode his first bike, his brother-in-law's, at age 12, sharing it with Harry and two cousins for time trials around Harworth. Following Harry, Tom joined Harworth & District CC (Cycling Club) aged 13. He delivered groceries in the Bassetlaw district by bicycle and traded with a customer for a better road bike. He was often left behind in club races; members of his cycling club nicknamed him "four-stone Coppi", after Italian rider Fausto Coppi, due to his slim physique. Simpson began winning club time trials, but sensed resentment of his boasting from senior members. He left Harworth & District and joined Rotherham's Scala Wheelers at the end of 1954. Simpson's first road race was as a junior at the Forest Recreation Ground in Nottingham. After leaving school he was an apprentice draughtsman at an engineering company in Retford, using the 10 mi (16.1 km) commute by bike as training. He placed well in half-mile races on grass and cement, but decided to concentrate on road racing.
In May 1955 Simpson won the National Cyclists' Union South Yorkshire individual pursuit track event as a junior; the same year, he won the British League of Racing Cyclists (BLRC) junior hill climb championship and placed third in the senior event. Simpson immersed himself in the world of cycling, writing letters asking for advice. Naturalised Austrian rider George Berger responded, travelling from London to Harworth to help him with his riding position. In late 1955, Simpson ran a red light in a race and was suspended from racing for six months by the BLRC. During his suspension he dabbled in motorcycle trials, nearly quitting cycling, but he could not afford the new motorcycle necessary to progress in the sport.

### Track years

Berger told Simpson that if he wanted to be a successful road cyclist, he needed experience in track cycling, particularly in the pursuit discipline. Simpson competed regularly at Fallowfield Stadium in Manchester, where in early 1956 he met amateur world pursuit silver medallist Cyril Cartwright, who helped him develop his technique. At the national championships at Fallowfield the 18-year-old Simpson won a silver medal in the individual pursuit, defeating amateur world champion Norman Sheil before losing to Mike Gambrill. Simpson began working with his father as a draughtsman at the glass factory in Harworth. He was riding well; although not selected by Great Britain for the amateur world championships, he made the 4,000-metre team pursuit squad for the 1956 Olympics. In mid-September, Simpson competed for two weeks in Eastern Europe against Russian and Italian teams to prepare for the Olympics. The seven-rider contingent began with races in Leningrad, continuing to Moscow before finishing in Sofia. He was nicknamed "the Sparrow" by the Soviet press because of his slender build. The following month he was in Melbourne for the Olympics, where the team qualified for the team-pursuit semi-finals against Italy; they were confident of defeating South Africa and France but lost to Italy, taking the bronze medal. Simpson blamed himself for the loss, having pushed too hard on one turn and been unable to recover for the next.

After the Olympics, Simpson trained throughout his winter break into 1957. In May, he rode in the national 25-mile championships; although he was the favourite, he lost to Sheil in the final. In a points race at an international event at Fallowfield a week later Simpson crashed badly, almost breaking his leg; he stopped working for a month and struggled to regain his form. At the national pursuit championships, he was beaten in the quarter-finals. After this defeat Simpson returned to road racing, winning the BLRC national hill climb championship in October before taking a short break from racing. In spring 1958 he travelled to Sofia with Sheil for two weeks' racing. On his return he won the national individual pursuit championship at Herne Hill Velodrome. In July, Simpson won a silver medal for England in the individual pursuit at the British Empire and Commonwealth Games in Cardiff, losing to Sheil by one-hundredth of a second in the final. A medical exam taken with the Royal Air Force (RAF) revealed Simpson to be colour blind. In September 1958, Simpson competed at the amateur world championships in Paris. Against reigning champion Carlo Simonigh of Italy in the opening round of the individual pursuit, he crashed on the concrete track at the end of the race.
Simpson was briefly knocked unconscious and sustained a dislocated jaw; however, he won the race, since he had crashed after the finish line. Although Simpson was in pain, team manager Benny Foster made him race in the quarter-final against New Zealand's Warwick Dalton, hoping to unsettle Dalton ahead of a possible meeting with Simpson's teammate Sheil. Simpson wanted to turn professional, but needed to prove himself first, setting his sights on the world amateur indoor hour record. Reg Harris arranged for an attempt at Zürich's Hallenstadion velodrome on Simpson's birthday in November. He failed by 320 metres, covering a distance of 43.995 km (27.337 mi) and blaming his failure on the low temperature generated by an ice rink in the centre of the velodrome. The following week he travelled to Ghent, in the Flanders region of Belgium, to ride amateur track races. He stayed at the Café Den Engel, run by Albert Beurick, who arranged for him to ride at Ghent's Kuipke velodrome in the Sportpaleis (English: Sport Palace). Simpson decided to move to the continent for a better chance at success, and contacted French brothers Robert and Yvon Murphy, whom he had met while racing. They agreed that he could stay with them in the Breton fishing port of Saint-Brieuc. His final event in Britain was at Herne Hill, riding motor-paced races. Simpson won the event and was invited to Germany to train for the 1959 motor-paced world championships, but declined the opportunity in favour of a career on the road. Bicycle manufacturer Elswick Hopper invited him to join their British-based team, but Benny Foster advised him to continue with his plans to move to France.

### Move to Brittany

In April 1959, Simpson left for France with £100 savings and two Carlton bikes, one road and one track, given in appreciation of his help promoting the company. His last words to his mother before the move were, "I don't want to be sitting here in twenty years' time, wondering what would have happened if I hadn't gone to France". The next day, his National Service papers were delivered; although willing to serve before his move, he feared the call-up would put his potential career at risk. His mother returned them, hoping the authorities would understand. He applied to local cycling clubs, and joined Club Olympique Briochin, racing with an independent (semi-professional) licence from the British Cycling Federation. Once settled with the Murphy family, 21-year-old Simpson met 19-year-old Helen Sherburn, an au pair from Sutton, Yorkshire. Simpson began attracting attention, winning races and criteriums. He was invited to race in the eight-day stage race Route de France by the Saint-Raphaël VC 12e, the amateur club below the professional team . Simpson won the final stage, breaking away from the peloton and holding on for victory. After this win, he declined an offer to ride in the Tour de France for the professional team. Simpson had contract offers from two professional teams, and , which had a British cyclist, Brian Robinson; opting for the latter team, on 29 June he signed a contract for 80,000 francs (£80 a month). On Simpson's return to Harworth for Christmas, the RAF were notified and the press ran stories on his apparent draft avoidance. He passed a medical in Sheffield, but history repeated itself and the papers arrived the day after his departure for his team's training camp in Narbonne in southern France. The French press, unlike the British, found the situation amusing.
## Professional career

### 1959: Foundations

In July, four months after leaving England, Simpson rode his first race as a professional, the Tour de l'Ouest in western France. He won the fourth stage and took the overall race leader's jersey. He won the next stage's individual time trial, increasing his lead. On the next stage he lost the lead after puncturing a tyre, and he finished the race in fourteenth place overall. In August Simpson competed at the world championships, in the 5000 m individual pursuit at Amsterdam's large, open-air velodrome and in the road race on the nearby Circuit Park Zandvoort motor-racing track. He placed fourth in the individual pursuit, losing by 0.3 seconds in the quarter-finals. He prepared for the 180 mi (290 km) road race, eight laps of the track. After 45 mi (72 km) a ten-rider breakaway formed; Simpson bridged the gap. As the peloton began to close in, he tried to attack. Although he was brought back each time, Simpson placed fourth in a sprint, the best finish to date by a British rider. He was praised by the winner, André Darrigade of France, who thought that without Simpson's work on the front the breakaway would have been caught. Darrigade helped him enter criteriums for extra money. His fourth place earned Simpson his nickname, "Major Simpson", from French sports newspaper L'Équipe. They ran the headline: "Les carnets du Major Simpson" ("The notes of Major Simpson"), referencing the 1950s series of books, Les carnets du Major Thompson by Pierre Daninos. Simpson moved up to 's first team, , for the end-of-season one-day classic races. In his first appearance in the Giro di Lombardia, one of the five "monuments" of cycling, he retired with a tyre puncture while in the lead group of riders. In Simpson's last race of the season, he finished fourth in the Trofeo Baracchi, a two-man team time trial, partnered with Gérard Saint and racing against his boyhood idol, Fausto Coppi; it was Coppi's final race before his death. Simpson finished the season with twenty-eight wins.

### 1960: Tour de France debut

His first major race of the 1960 season was the one-day "monument" Milan–San Remo in March, in which the organisers introduced the Poggio climb (the final climb) to keep the race from finishing with a bunch sprint. Simpson broke clear from a breakaway group over the first climb, the Turchino, leading the race for 45 km (28 mi) before being caught. He lost contact over the Poggio, finishing in 38th place. In April he moved to the Porte de Clichy district of Paris, sharing a small apartment with his teammate Robinson. Days after his move, Simpson rode in Paris–Roubaix, known as "The Hell of the North", the first cycling race to be shown live on Eurovision. He attacked in an early breakaway, riding alone at the front for 40 km (24.9 mi), but was caught around a mile from the finish at Roubaix Velodrome, coming in ninth. Simpson rode a lap of honour after the race at the request of the emotional crowd. His televised effort gained him attention throughout Europe. He then won the Mont Faron hill climb and the overall general classification of the Tour du Sud-Est, his first overall win in a professional stage race. He planned to ride in the Isle of Man International road race, excited to see his home fans. There were rumours, which proved correct, that the Royal Military Police were waiting for him at the airport, so he decided not to travel. This was the last he heard from the authorities regarding his call-up. The British Cycling Federation fined him £25 for his absence.
In June, Simpson made his Grand Tour debut in the Tour de France aged 22. Rapha directeur sportif (team manager) Raymond Louviot opposed his participation, but since the race was contested by national teams Simpson accepted the invitation from the British squad. During the first stage, he was part of a thirteen-rider breakaway which finished over two minutes in front of the field; he crashed on the cinder track at Heysel Stadium in Brussels, finishing thirteenth, but received the same time as the winner. Later that day he finished ninth in the time trial, moving up to fifth place overall. During the third stage Simpson was part of a breakaway with two French riders who repeatedly attacked him, forcing him to chase and use energy needed for the finish; he finished third, missing the thirty-second bonus for a first-place finish, which would have put him in the overall race leader's yellow jersey. He dropped to ninth overall by the end of the first week. During stage ten, Simpson crashed descending the Col d'Aubisque in the Pyrenees but finished the stage in fourteenth place. In the following stage he was dropped, exhausted, from a chasing group and failed to recover. He finished the Tour in twenty-ninth place overall, losing 2 st (13 kg; 28 lb) in weight over the three weeks.

After the Tour, Simpson rode criteriums around Europe until crashing in central France; he returned home to Paris and checked himself into a hospital. Following a week's bed-rest, he rode in the road world championships at the Sachsenring in East Germany. During the race Simpson stopped to adjust his shoes on the right side of the road and was hit from behind by a car, sustaining a cut to his head which required five stitches. In the last of the classics, the Giro di Lombardia, he struggled, finishing eighty-fourth. Simpson had been in constant contact with Helen, who was now working in Stuttgart, Germany, meeting with her between races. They became engaged on Christmas Day, and originally planned to marry at the end of 1961, but in fact wed on 3 January 1961 in Doncaster, Yorkshire.

### 1961: Tour of Flanders and injury

Simpson's first major event of the 1961 season was the Paris–Nice stage race in March. In stage three he helped his team win the team time trial and took the general classification lead by three seconds; however, he lost it in the next stage. In the final stages of the race Simpson's attacks were thwarted, and he finished fifth overall. On 26 March, Simpson rode in the one-day Tour of Flanders. With 's Nino Defilippis, he chased down an early breakaway. Simpson worked with the group; with about 8 km (5 mi) to go he attacked, followed by Defilippis. The finish, three circuits around the town of Wetteren, was flat; Defilippis, unlike Simpson, was a sprinter and was expected to win. One kilometre from the finish, Simpson launched a sprint; he eased off with 300 m to go, tricking Defilippis into thinking he was exhausted. As Defilippis passed, Simpson jumped again to take victory, becoming the first Briton to win a "monument" classic. Defilippis protested that the finishing banner had been blown down and that he did not know where the finish was; however, the judges noted that the finish line was clearly marked on the road itself. Defilippis' team asked Simpson to agree to a tie, saying no Italian had won a classic since 1953. He replied: "An Englishman had not won one since 1896!" A week later, Simpson rode in Paris–Roubaix in the hope of bettering his previous year's ninth place.
As the race reached the cobbled sections he went on a solo attack, at which point he was told that rider Raymond Poulidor was chasing him down. Simpson increased his speed, catching the publicity and press vehicles ahead (known as the caravane). A press car swerved to avoid a pothole; this forced him into a roadside ditch. Simpson fell, damaging his front wheel and injuring his knee. He found his team car and collected a replacement wheel, but by then the front of the race had passed. Back in the race he crashed twice more, finishing 88th. At Simpson's next race, the four-day Grand Prix d'Eibar, his first in Spain, his knee injury still bothered him. He won the second stage, but was forced to quit during the following stage. His injury had not healed, even after treatment by various specialists, but for financial reasons he was forced to enter the Tour de France with the British team. He abandoned on stage three, which started in Roubaix, struggling to pedal on the cobbles. Three months after his fall at Paris–Roubaix he saw a doctor at St. Michael's Hospital in Paris, who gave him injections in his knee, which reduced the inflammation. Once healed, he competed in the world championships in Berne, Switzerland. On the track he qualified for the individual pursuit with the fourth-fastest time, losing in the quarter-finals to Peter Post of the Netherlands. In the road race, Simpson was part of a seventeen-rider breakaway that finished together in a sprint; he crossed the line in ninth place. Helen became pregnant; Simpson's apartment in Paris was now unsuitable and a larger home in France was not within their means. In October, with help from his friend Albert Beurick, they moved into a small cottage in Ghent. Low on funds, Simpson earned money in one-day track races in Belgium.

### 1962: Yellow jersey

Simpson's contract with Rapha-Gitane-Dunlop had ended with the 1961 season. Tour de France winner Jacques Anquetil signed with them for 1962, but Simpson wanted to lead a team, and signed with for the 1962 season. After a training camp at Lodève in southern France, he rode in Paris–Nice. He helped his team win the stage-3a team time trial and finished second overall, behind 's Jef Planckaert. He was unable to ride in Milan–San Remo when its organisers limited the race to Italian-based teams; instead he rode in Gent–Wevelgem, finishing sixth, then defended his Tour of Flanders title. At the end of the latter, Simpson was in a select group of riders at the head of the race. Although he led over each of the final climbs, he placed fifth at the finish and won the King of the Mountains prize. A week later Simpson finished thirty-seventh in Paris–Roubaix, delayed by a crash.

Coming into the Tour de France, Simpson was the leader of his team; it was the first time since 1929 that company teams were allowed to compete. He finished ninth in the first stage, in a group of twenty-two riders who finished over eight minutes ahead of the rest. Simpson's team finished second to in the stage-2b team time trial; he was in seventh place in the general classification, remaining in the top ten for the rest of the first week. During stage 8a he was in a thirty-rider group which gained about six minutes, moving him to second overall behind teammate André Darrigade. At the end of the eleventh stage Simpson was third overall, over a minute behind race leader Willy Schroeders () and fifty-one seconds behind Darrigade.
Stage twelve, from Pau to Saint-Gaudens, the hardest stage of the 1962 Tour (known as the "Circle of Death"), was the Tour's first mountain stage. Simpson saw an opportunity to lead the race. The team now concentrated solely on his interests, since Darrigade was a sprinter and would no longer be involved in the general classification. As the peloton reached the Col du Tourmalet, Simpson attacked with a small group of select riders, finishing in eighteenth place in a bunch sprint. As he finished ahead of all the other leaders in the general classification, he became the new overall leader of the race, and the first British rider to wear the leader's yellow jersey. Simpson lost the lead on the following stage, a short time trial ending with a steep uphill finish at Superbagnères. He finished thirty-first and dropped to sixth overall. On stage nineteen he descended the Col de Porte in the Alps recklessly, crashing on a bend; he was saved from falling over the edge only by a tree, and was left with a broken left middle finger. He lost almost eleven minutes in the next stage's time trial, finishing the Tour at Paris' Parc des Princes stadium in sixth place, 17 minutes and 9 seconds behind the winner. After the Tour Simpson rode criteriums before the road world championships in Salò, Italy, where he retired after missing a large breakaway. He began riding six-day track races into his winter break. In December he made an appearance at the Champions' Concert cycling awards held at the Royal Albert Hall in London. Separately, he won the British Cycling Federation's Personality of the Year. Simpson and Helen were expecting their second child and moved to a larger house in Sint-Amandsberg, a sub-municipality of Ghent.

### 1963: Bordeaux–Paris

Leroux withdrew its sponsorship of the Gitane team for the 1963 season. Simpson was contracted to their manager, Raymond Louviot; Louviot was rejoining and Simpson could follow, but he saw that as a step backwards. bought the contract from Louviot, which ran until the end of the season. Simpson's season opened with Paris–Nice; he fell out of contention after a series of tyre punctures in the opening stages, using the rest of the race as training. He withdrew from the race on the final stage to rest for his next race, Milan–San Remo; after breaking away by himself he stopped beside the road, which annoyed his fellow riders. At Milan–San Remo, Simpson was in a four-rider breakaway; his tyre punctured, and although he got back to the front, he finished nineteenth. He placed third in the Tour of Flanders in a three-rider sprint. In Paris–Roubaix Simpson worked for his teammate, and eventual winner, Emile Daems, finishing ninth. In the one-day Paris–Brussels he was in a breakaway near the Belgian border; with 50 km (31.1 mi) remaining he was left with world road race champion Jean Stablinski of , who attacked on a cobbled climb in Alsemberg outside Brussels. Simpson's bike slipped a gear, and Stablinski stayed away for the victory. After his second-place finish, Simpson led the Super Prestige Pernod International, the season-long competition for the world's best cyclist. The following week he raced in the Ardennes classics, placing thirty-third in Liège–Bastogne–Liège after he rode alone for about 100 km (62 mi) before being caught in the closing kilometres. On 26 May, Simpson rode in the one-day, 557 km (346 mi) Bordeaux–Paris. Also known as the "Derby of the Road", it was the longest race he had ever ridden.
The race began at 1:58 am; the initial 161 km (100 mi) were unpaced until the town of Châtellerault, where dernys (motorised bicycles) paced each rider to the finish. Simpson broke away in a group of three riders. Simpson's pacer, Fernand Wambst, increased his speed, and Simpson dropped the other two. He caught the lead group, thirteen minutes ahead, over a distance of 161 km (100 mi). Simpson attacked with 36 km (22.4 mi) remaining, opening a margin of two minutes. His lead steadily increased, and he finished in the Parc des Princes over five minutes ahead of teammate Piet Rentmeester. Simpson announced that he would not ride the Tour de France, concentrating on the world road championships instead. Beforehand, he had won the Isle of Man International in treacherous conditions in which only sixteen of seventy riders finished. At the road world championships in Ronse, Belgium, the Belgians controlled the race until Simpson broke free, catching two riders ahead: Henry Anglade (France) and Shay Elliott (Ireland). Anglade was dropped, and Elliott refused to work with Simpson. They were caught; the race finished in a bunch sprint, with Simpson crossing the line in 29th. Simpson's season ended with six-day races across Europe and an invitation-only race on the Pacific island of New Caledonia, along with other European riders. He skipped his usual winter training schedule for his first skiing holiday at Saint-Gervais-les-Bains in the Alps, taking Helen and his two young daughters, Jane and Joanne.

### 1964: Milan–San Remo

After a training camp near Nice in southern France Simpson rode in the one-day Kuurne–Brussels–Kuurne in Belgium, finishing second to 's Arthur Decabooter. The conditions were so cold that he completed the race only to keep warm. Albert Beurick started Simpson's supporters club at the Café Den Engel, raising £250 for him in the first nine months. In Paris–Nice, his tyre punctured during stage four; he lost five minutes and used the rest of the race for training. On 19 March, two days later, Simpson rode in Milan–San Remo. Before the race, French journalist René de Latour advised Simpson not to attack early: "If you feel good then keep it for the last hour of the race." In the final 32 km (19.9 mi), Simpson escaped in a group of four riders, which included the 1961 winner, Poulidor of . On the final climb, the Poggio, Poulidor launched a series of attacks on the group; only Simpson managed to stay with him, and they crossed the summit and descended towards San Remo. With 500 m to go, Simpson began his sprint; Poulidor could not respond, leaving Simpson to take the victory with a record average speed of 27.1 mph (43.6 km/h).

Simpson spent the next two months training for the Tour de France at the end of June. After the first week of the Tour, Simpson was in tenth place overall. On the ninth stage, he was part of a 22-rider breakaway which finished together at Monaco's Stade Louis II; he placed second to Anquetil, moving up to eighth overall. The next day, he finished 20th in the 20.8 km (12.9 mi) time trial. During the 16th stage, which crossed four cols, Simpson finished 33rd, 25 minutes and 10 seconds behind the stage winner, and dropped to 17th overall. He finished the Tour in 14th place overall. Simpson later discovered that he had ridden the Tour suffering from tapeworms. After the race, Simpson prepared for the world road championships with distance training and criteriums.
At the world championships on 3 September, the 290 km (180 mi) road race consisted of twenty-four laps of a varying circuit at Sallanches in the French Alps. Simpson crashed on the third lap while descending in wet conditions, damaging a pedal. He got back to the peloton, launching a solo attack on a descent; he then chased down the group of four leaders with two laps to go. On the last lap he was dropped by three riders, finishing six seconds behind. On 17 October, Simpson rode in the Giro di Lombardia. Halfway through the race he was given the wrong musette (bag) by his team in the feed zone, and threw it away. With the head of the race reduced to five riders, Molteni's Gianni Motta attacked. Simpson was the only one who could follow, but he began to feel the effects of not eating. Motta gave him part of his food, which sustained him for a while. On the final climb Simpson led Motta, but was exhausted. Over the remaining 10 km (6.2 mi) of flat terrain, Motta dropped him; Simpson cracked, and was repeatedly overtaken, finishing twenty-first. He closed the year riding track races.

### 1965: World championship and Lombardia

The Simpson family spent Christmas in England before a trip to Saint-Gervais-les-Bains, where Simpson injured himself skiing, suffering a broken foot and a sprained ankle. Once recovered, he rode six-day races. At the Antwerp six-day, he dropped out on the fourth day with a cold. His cold worsened and he missed most of March. He abandoned Milan–San Remo at the foot of the Poggio. Still struggling to walk on his injured foot, he missed the Tour of Flanders; on 11 April, he finished seventh in Paris–Roubaix after crashing in the lead group. In Liège–Bastogne–Liège he attacked with 's Felice Gimondi, catching an early break. They worked together for 25 km (15.5 mi), until Gimondi gave up. Simpson rode alone before slipping on oil mixed with water; he stayed with the front group, finishing tenth. On 29 May, Simpson rode in the London–Holyhead race, the longest unpaced one-day race, with a distance of 265 mi (426 km); he won in a bunch sprint, setting a record of ten hours and twenty-nine minutes. He followed with an appearance at Bordeaux–Paris. François Mahé () went on a lone break; Simpson attacked in pursuit, followed by Jean Stablinski. Simpson's derny broke down, and he was delayed changing motorbikes. He caught Stablinski, and was joined by Anquetil. Outside Paris Mahé was caught and dropped, after 200 km (124 mi) on his own. Anquetil won the race by fifty-seven seconds ahead of Stablinski, who beat Simpson in a sprint. Peugeot manager Gaston Plaud ordered Simpson to ride the Midi Libre stage race to earn a place in the Tour de France, and he finished third overall.

The 1965 Tour was considered open due to Anquetil's absence, and Simpson was among the riders favoured by L'Équipe. During stage nine he injured his hand crashing on the descent of the Col d'Aubisque in the Pyrenees, finishing tenth in the stage and seventh in the general classification. Simpson developed bronchitis after stage fifteen and cracked on the next stage, losing nearly nineteen minutes. His hand became infected, but he rode the next three stages before the Tour doctor stopped him from racing. He was taken to hospital, where they operated on his hand and treated him for blood poisoning, bronchitis and a kidney infection. After ten days off his bike, Simpson was only contracted to three post-Tour criteriums.
His training for the road world championships included kermesse circuit races in Flanders. Simpson's last race before the world championships was the Paris–Luxembourg stage race, in which he rode as a super-domestique (lieutenant). On 5 September, Simpson rode in the road race at the world championships in San Sebastián, Spain. The race covered fourteen laps of a hilly circuit, a total of 267.4 km (166 mi). The British team had no support; Simpson and his friend Albert Beurick obtained food and drink by stealing from other teams. During the first lap, a strong break was begun by British rider Barry Hoban. As his lead stretched to one minute, Simpson and teammates Vin Denson and Alan Ramsbottom bridged the gap, followed by Germany's Rudi Altig. Hoban kept the pace high enough to prevent any of the favourites from joining. Simpson and Altig broke clear with two-and-a-half laps remaining, staying together until the final kilometre, when Simpson launched his sprint; he held off Altig for victory by three bike lengths, becoming the first British professional world road race champion. On 16 October, Simpson rode in the Giro di Lombardia, which featured five mountain passes. He escaped with Motta, and dropped him before the finish in Como to win his third "monument" classic, over three minutes ahead of the rest. Simpson was the second rider to win the race as reigning world champion; the first was Alfredo Binda in 1927.

Simpson was offered lucrative contracts by teams, including one prepared to pay him the year's salary in advance. He could not escape his contract with Peugeot, which ran until the end of the 1967 season. For the next three weeks he rode contract races, travelling an estimated 12,000 mi (19,000 km); he rode 18 races, each earning him £300–£350. Simpson ended the year second to Anquetil in the Super Prestige Pernod International, and won the Daily Express Sportsman of the Year, the Sports Journalists' Association Sportsman of the Year, presented by Prime Minister Harold Wilson, and the BBC Sports Personality of the Year. In British cycling Simpson won the British Cycling Federation Personality of the Year and the Bidlake Memorial Prize. He was given the freedom of Sint-Amandsberg; his family, including his parents, were driven in an open-top car along the crowd-lined route from the Café Den Engel to the Town Hall.

### 1966: An injury-ridden season

As in the previous winter, Simpson went on a skiing holiday. On 25 January he fell, breaking his right tibia, and his leg was in a plaster cast until the end of February. He missed contract races, crucial training and most of the spring classics. Simpson began riding again in March, and in late April started, but did not finish, Liège–Bastogne–Liège. Simpson's injury did not stop the press from naming him a favourite for the Tour de France. He was subdued in the race until stage twelve, when he forced a breakaway with Altig (Molteni), finishing second. Simpson again finished second in the next stage, jumping clear of the peloton in a three-rider group in the final kilometres. After the stage he was eighteenth overall, over seven minutes down. Simpson moved up to 16th after finishing 5th in stage 14b, a short time trial. As the race reached the Alps, he decided to make his move. During stage sixteen he attacked on the descent of the first of three cols, the Croix de Fer. He crashed but continued, attacking again. Simpson was joined by 's Julio Jiménez on the climb of the Télégraphe to the Galibier.
Simpson was caught by a chase group descending the Galibier before he crashed again, knocked off his bike by a press motorcycle. The crash required five stitches in his arm. The next day he struggled to hold the handlebars and could not use the brake lever with his injured arm, forcing him to abandon. His answer to journalists asking about his future was, "I don't know. I'm heartbroken. My season is ruined."

After recovering from his injury Simpson rode 40 criteriums in 40 days, capitalising on his world championship and his attacks in the Tour. He retired from the road world championships at the Nürburgring with cramp. His road season ended with retirements from the autumn classics Paris–Tours and the Giro di Lombardia. He rode six-day races, finishing fourteenth in the winter rankings. The misfortune he endured during the season made him the first rider named as a victim of the "curse of the rainbow jersey". For the winter Simpson took his family to the island of Corsica, planning the construction of his retirement home.

### 1967: Paris–Nice and Vuelta stages

Simpson's primary objective for 1967 was overall victory in the Tour de France; in preparation, he planned to ride stage races instead of one-day classics. Simpson felt his chances were good because this Tour was contested by national rather than trade teams. He would lead the British team, which – although one of the weakest – would support him totally, unlike Peugeot. During Simpson's previous three years with Peugeot, he was only guaranteed a place on their Tour team if he signed with them for the following year. Free to join a new team for the 1968 season, he was offered at least ten contracts; Simpson had a verbal agreement with the Italian team Salvarani, and would share its leadership with Felice Gimondi. In an April interview with Cycling (now Cycling Weekly) journalist Ken Evans, Simpson revealed his intention to attempt the hour record in the 1967 season. He also said he wanted to retire from road racing aged 33, to ride on the track and spend more time with his family.

In March he rode Paris–Nice. After stage two his teammate, Eddy Merckx, took the overall lead. Simpson moved into the lead the next day as part of a breakaway, missed by Merckx, which finished nearly twenty minutes ahead. Merckx thought Simpson had double-crossed him, but Simpson was a passive member of the break. At the start of stage six, Simpson was in second place behind Rolf Wolfshohl. Merckx drew clear as the race approached Mont Faron, with Simpson following. They stayed together until the finish in Hyères, with Simpson allowing Merckx to take first place. Simpson finished over a minute ahead of Wolfshohl, putting him in the race leader's white jersey. He held the lead in the next two stages to win the race. Three days later Simpson and Merckx both raced in Milan–San Remo. Simpson escaped early in a five-rider breakaway lasting about 220 km (137 mi), before Merckx won in a bunch sprint with assistance from Simpson, who finished in seventieth place. After 110 mi (177 km) of Paris–Roubaix, Simpson's bike was unrideable and he retired from the race.

In late April Simpson rode his first Vuelta a España, using the eighteen-stage race to prepare for the Tour. During stage two a breakaway group gained over thirteen minutes, dashing his hopes for a high placing. Simpson nearly quit the race before the fifth stage, from Salamanca to Madrid, but rode it because it was easier to get home by air from Madrid.
He won the stage, attacking from a breakaway, and finished second in stage seven. On the eleventh stage, concluding in Andorra, Simpson rode away from the peloton on his own. With 30 km (18.6 mi) remaining, he began to lose control of his bike and was halted by Peugeot manager Gaston Plaud until he had recovered, by which time the race had passed. In an interview with L'Équipe's Philippe Brunel in February 2000, Tour de France physician Pierre Dumas revealed that Simpson had told him he was taken to hospital during the Vuelta. Simpson won stage sixteen, which ended in San Sebastián, and finished the Vuelta thirty-third overall.

Simpson was determined to make an impact in the Tour de France; in his eighth year as a professional cyclist, he hoped for larger appearance fees in post-Tour criteriums to help secure his financial future after retirement. His plan was to finish in the top three, or to wear the yellow jersey at some point in the race. He targeted three key stages, one of which was the thirteenth, over Mont Ventoux, and planned to ride conservatively until the race reached the mountains. In the prologue, Simpson finished thirteenth. After the first week he was in sixth place overall, leading the favourites. As the race crossed the Alps over the Col du Galibier, Simpson fell ill with diarrhoea and stomach pains. Unable to eat, he finished stage ten in sixteenth place and dropped to seventh overall as his rivals passed him. Teammate Vin Denson advised Simpson to limit his losses and accept what he had. He placed thirty-ninth on stage eleven and seventh on stage twelve. In Marseille, on the evening before stage thirteen, Simpson's manager, Daniel Dousset, pressured him for good results. Plaud begged Simpson to quit the race.

## Death

The thirteenth stage (13 July) of the 1967 Tour de France measured 211.5 km (131.4 mi); it started in Marseille, crossing Mont Ventoux (the "Giant of Provence") before finishing in Carpentras. At dawn, Tour doctor Pierre Dumas met journalist Pierre Chany near his hotel. Dumas noted the warm temperature, saying: "If the boys stick their nose in a 'topette' [bag of drugs] today, we could have a death on our hands." At the start line, a journalist noticed Simpson looked tired and asked him if the heat was the problem. Simpson replied, "No, it's not the heat, it's the Tour."

As the race reached the lower slopes of Ventoux, Simpson's team mechanic Harry Hall witnessed Simpson, still ill, putting the lid back on his water bottle as he exited a building. Race commissaire (official) Jacques Lohmuller later confirmed to Hall that he also saw the incident and that Simpson was putting brandy in his bottle. Near the summit of Ventoux, the peloton began to fracture. Simpson was in the front group before slipping back to a group of chasers about a minute behind. He then began losing control of his bike, zig-zagging across the road. A kilometre from the summit, Simpson fell off his bike. Team manager Alec Taylor and Hall arrived in the team car to help him. Hall tried to persuade Simpson to stop, saying: "Come on Tom, that's it, that's your Tour finished", but Simpson said he wanted to continue. Taylor said, "If Tom wants to go on, he goes". Noticing his toe straps were still undone, Simpson said, "Me straps, Harry, me straps!" They got him on his bike and pushed him off. Simpson's last words, as remembered by Hall, were "On, on, on."
Hall estimated Simpson rode a further 500 yd (457 m) before he began to wobble, and was held upright by spectators; he was unconscious, with his hands locked on the handlebars. Hall and a nurse from the Tour's medical team took turns giving Simpson mouth-to-mouth resuscitation, before Dumas arrived with an oxygen mask. Approximately forty minutes after his collapse, a police helicopter took Simpson to nearby Avignon hospital, where he was pronounced dead at 5:40 p.m. Two empty tubes and a half-full one of amphetamines, one of which was labelled "Tonedron", were found in the rear pocket of his jersey. The official cause of death was "heart failure caused by exhaustion."

On the next racing day, the other riders were reluctant to continue racing and asked the organisers for a postponement. France's Stablinski suggested that the race continue, with a British rider, whose team would wear black armbands, allowed to win the stage. Hoban won the stage, although many thought the stage winner should have been Denson, Simpson's close friend. Media reports suggested that his death was caused by heat exhaustion, until, on 31 July 1967, British journalist J. L. Manning of the Daily Mail broke the news of a formal connection between drugs and Simpson's death. French authorities confirmed that Simpson had traces of amphetamine in his body, impairing his judgement and allowing him to push himself beyond his limits. His death contributed to the introduction of mandatory testing for performance-enhancing drugs in cycling, leading to tests in 1968 at the Giro d'Italia, Tour de France and Summer Olympics.

Simpson was buried in Harworth Cemetery after a service at the 12th-century village church attended by an estimated 5,000 mourners, including Peugeot teammate Eddy Merckx, the only continental rider in attendance. The epitaph on Simpson's gravestone reads, "His body ached, his legs grew tired, but still he would not give in", taken from a card left by his brother, Harry, following his death.

## Doping

Unlike the majority of his contemporaries, Simpson was open about the use of drugs in professional cycling. In 1960, interviewed by Chris Brasher for The Observer newspaper, Simpson spoke about his understanding of how riders could beat him, saying: "I know from the way they ride the next day they are taking dope. I don't want to have to take it – I have too much respect for my body." Two years before his death, Simpson hinted in the newspaper The People at drug-taking in races, although he implied that he himself was not involved. Asked about drugs by Eamonn Andrews on the BBC Home Service radio network, Simpson did not deny taking them; however, he said that a rider who frequently took drugs might get to the top but would not stay there.

In his biography of Simpson, Put Me Back on My Bike, William Fotheringham quoted Alan Ramsbottom as saying, "Tom went on the [1967] Tour de France with one suitcase for his kit and another with his stuff, drugs and recovery things", which Fotheringham said was confirmed by Simpson's roommate Colin Lewis. Ramsbottom added, "Tom took a lot of chances. He took a lot of it [drugs]. I remember him taking a course of strychnine to build up to some big event. He showed me the box, and had to take one every few days." Lewis recalled Simpson acquiring a small box at their hotel. Simpson explained to him: "That's my year's supply of Mickey Finns. That lot cost me £800."
Commentator and Simpson's close friend David Saunders stated in his 1971 book, Cycling in the Sixties, that although he did not condone Simpson's use of drugs, he thought it was not the reason for his death. He said: "I am quite convinced that Simpson killed himself because he just did not know when to stop. All his racing life he had punished his frail body, pushing it to the limits of endurance with his tremendous will-power and single-mindedness and, on Mont Ventoux, he pushed it too far, perhaps the drug easing the pain of it all." Saunders went on to say that Simpson was not alone in the taking of drugs in professional cycling and that the authorities ignored their use. His opinion was that Simpson did not take drugs to gain an unfair advantage, but because "he was not going to be beaten by a pill".

## Riding style and legacy

Simpson in his adolescence was described as fearsome in descent by fellow Scala Wheelers club member George Shaw, who explained that if Simpson dropped behind on a climb, he would come back on the descent. Simpson's risk-taking on descents was evident throughout his career; he crashed in four of the seven Tours de France he competed in. Track rider Norman Sheil recalled: "When racing on a banked velodrome, Simpson would sometimes ride up the advertising boards at the top of the bankings, Wall of Death-style, to please the crowds." Simpson's death was attributed to his unwillingness to admit defeat ascending Mont Ventoux. He described a near-death experience during a race in 1964, the Trofeo Baracchi two-man time trial, to Vin Denson, who recalled: "He said he felt peace of mind and wasn't afraid to die. He said he would have been happy dying."

Simpson looked for any advantage over his opponents. He made his own saddle, a design which is now standard. During his time with Peugeot, he rode bikes made by Italian manufacturer Masi that resembled Peugeots. Simpson had been obsessed with dieting since 1956, when he was mentored by Cyril Cartwright. Simpson understood the value of fruit and vegetables after reading Les Cures de jus by nutritionist Raymond Dextreit; during the winter, he would consume 10 lb (4.5 kg) of carrots a day. Other unusual food preferences included pigeons, duck and trout skin, raspberry leaves and garlic in large quantities.

In the 1968 Tour de France, a special prize was given in his honour, the Souvenir Tom Simpson, a sprint on stage 15 in the small town of Mirepoix, won by the soloing Roger Pingeon. Jan Janssen, the winner of that Tour, said of him, "Occasionally Tommy could be annoying. When it was rolling along at 30 km/h and – paf! ... he'd attack. Oh, leave us alone! There's still 150 km to go, pipe down. But often, he wanted war." Janssen went on to say, "Even in the feed zones. It's not the law, but it's not polite. Musettes (lunch bags) were up in the air; there was panic and crashes. It was Simpson acting like a jerk. It didn't happen often. Occasionally I was angry at him. I'd say to him in his native English: You f\*\*\*\*\*g c\*\*t... There were often many teams, five or six, in the same hotel together every evening. Each had their own table. And at a certain moment, Tommy walked into the restaurant like a gentleman, with a cane, bowler hat and in costume... He was like a Lord in England and the rest of us were in tracksuits. Everyone saw that, laughed, and the things he had done during the race were forgotten."
A granite memorial to Simpson, with the words "Olympic medallist, world champion, British sporting ambassador", stands on the spot where he collapsed and died on Ventoux, one kilometre east of the summit. Cycling began a fund for a monument a week after Simpson's death, raising about £1,500. The memorial was unveiled in 1968. It has become a site of pilgrimage for cyclists, who frequently leave cycling-related objects, such as water bottles and caps, in tribute. In nearby Bédoin, a plaque was installed in the town square by journalists following the 1967 Tour. The Harworth and Bircotes Sports and Social Club has a small museum dedicated to Simpson, opened by Belgian cyclist Lucien Van Impe in August 2001. In 1997, to commemorate the 30th anniversary of his death, a small plaque was added to the Mont Ventoux memorial, with the words "There is no mountain too high. Your daughters Jane and Joanne, July 13, 1997", and a replica of the memorial was erected outside the museum. In his adopted hometown of Ghent, there is a bust of Simpson at the entrance to the Kuipke velodrome. Every year since his death, the Tom Simpson Memorial Race has taken place in Harworth.

Ray Pascoe, a fan, made the 1995 film Something To Aim At, a project he began in the years following Simpson's death; the film includes interviews with those closest to Simpson. The 2005 documentary Wheels Within Wheels follows actor Simon Dutton as he searches for people and places in Simpson's life; the four-year project also chronicles the midlife crisis that sparked Dutton's quest to rediscover Simpson. British rider David Millar won stage twelve of the 2012 Tour de France on the 45th anniversary of Simpson's death; previously banned from cycling for using performance-enhancing drugs, he paid tribute to Simpson and reinforced the importance of learning from his – and Simpson's – mistakes. Millar wrote the introduction for a reissue of Simpson's autobiography, Cycling Is My Life, published in 2009. In 2010, Simpson was inducted into the British Cycling Hall of Fame. He inspired Simpson Magazine, which began in March 2013. According to the magazine's creators, "It was Simpson's spirit and style, his legendary tenacity and his ability to suffer that endeared him to cycling fans everywhere as much as the trophies he won."

## Family and interests

Soon after moving to France in 1959, Simpson met Helen Sherburn. They married in 1961, before moving to Ghent, Belgium, the following year. They had two daughters, Jane (born April 1962) and Joanne (born May 1963), who were brought up, and live, in Belgium. After his death, Helen Simpson married Barry Hoban in December 1969. Simpson is the maternal uncle of retired Belgian-Australian cyclist Matthew Gilmore, whose father, Graeme, was also a cyclist. The 2000 book Mr. Tom: The True Story of Tom Simpson, written by Simpson's nephew, Chris Sidwells, focuses on his career and family life.

Simpson spoke fluent French, and was also competent in Flemish and Italian. He was interested in vintage cars, and his driving and riding styles were similar; Helen remembered, "Driving through the West End of London at 60 mph (97 km/h) was nothing." In January 1966, Simpson was a guest castaway on BBC Radio 4's Desert Island Discs; his favourite musical piece was "Ari's Theme" from Exodus by the London Festival Orchestra, his book choice was The Pickwick Papers and his luxury item was golf equipment. Helen said that she chose his records for the show, since he was not interested in music.
Simpson's autobiography, Cycling Is My Life, was first published in 1966.

## Career achievements

### Major results

Sources:

1955
- 1st BLRC National Junior Hill Climb Championship

1956
- 2nd Individual pursuit, Amateur National Track Championships
- 3rd Team pursuit, Olympic Games

1957
- 1st BLRC National Hill Climb Championship
- 1st Individual pursuit, Amateur National Track Championships

1958
- 1st Individual pursuit, Amateur National Track Championships
- 2nd Individual pursuit, British Empire and Commonwealth Games

1959
- 1st Stage 8 Route de France
- Tour de l'Ouest
  - 1st Stages 4 & 5b (ITT)
- 2nd Overall Essor Breton
- 4th Road race, UCI Road World Championships
- 4th Trofeo Baracchi (with Gérard Saint)

1960
- 1st Overall Tour du Sud-Est
- 1st Stage 1b (TTT) Four Days of Dunkirk
- 1st Mont Faron hill climb
- 3rd Overall Genoa–Rome
  - 1st Mountains classification
- 7th La Flèche Wallonne
- 9th Paris–Roubaix

1961
- 1st Tour of Flanders
- 1st Stage 2 Euskal Bizikleta
- 2nd Overall Menton–Rome
- 5th Overall Paris–Nice
  - 1st Stage 3 (TTT)
- 9th Road race, UCI Road World Championships

1962
- 2nd Overall Paris–Nice
  - 1st Stage 3a (TTT)
- 3rd Critérium des As
- 3rd Six Days of Madrid (with John Tresidder)
- 5th Tour of Flanders
  - 1st Mountains classification
- 6th Overall Tour de France
  - Held after Stage 12
- 6th Gent–Wevelgem

1963
- 1st Bordeaux–Paris
- 2nd Overall Tour du Var
  - 1st Stage 1
- 1st Isle of Man International
- 1st Grand Prix du Parisien
- 2nd Critérium des As
- 2nd Gent–Wevelgem
- 2nd Paris–Brussels
- 2nd Overall Super Prestige Pernod International
- 2nd Paris–Tours
- 3rd Tour of Flanders
- 8th Paris–Roubaix
- 10th La Flèche Wallonne
- 10th Giro di Lombardia

1964
- 1st Milan–San Remo
- 1st Stage 5 Circuit de Provençal
- 2nd Kuurne–Brussels–Kuurne
- 3rd Trofeo Baracchi (with Rudi Altig)
- 4th Road race, UCI Road World Championships
- 4th Mont Faron hill climb
- 10th Paris–Roubaix

1965
- 1st Road race, UCI Road World Championships
- 1st Giro di Lombardia
- 1st London–Holyhead
- 1st Six Days of Brussels (with Peter Post)
- 2nd Six Days of Ghent (with Peter Post)
- 2nd Overall Super Prestige Pernod International
- 3rd Overall Midi Libre
- 3rd La Flèche Wallonne
  - 1st Mountains classification
- 3rd Overall Circuit de Provençal
- 3rd Bordeaux–Paris
- 5th Harelbeke–Antwerp–Harelbeke
- 6th Paris–Roubaix
- 10th Liège–Bastogne–Liège

1966
- 1st Stage 2b (TTT) Four Days of Dunkirk
- 2nd Six Days of Münster (with Klaus Bugdahl)
- 2nd Grand Prix of Aargau Canton

1967
- 1st Overall Paris–Nice
- Vuelta a España
  - 1st Stages 5 & 16
- 1st Isle of Man International
- 1st Stage 5 Giro di Sardegna
- 3rd Six Days of Antwerp (with Leo Proost and Emile Severeyns)
- 4th Polymultipliée

### Grand Tour general classification results timeline

Sources:

### Monuments results timeline

Sources:

### Awards and honours

- British Cycling Federation Personality of the Year: 1962, 1965
- BBC Sports Personality of the Year: 1965
- Bidlake Memorial Prize: 1965
- Daily Express Sportsman of the Year: 1965
- Freedom of Sint-Amandsberg: 1965
- Sports Journalists' Association Sportsman of the Year: 1965
- British Cycling Hall of Fame: 2010

## See also

- List of British cyclists
- List of British cyclists who have led the Tour de France general classification
- List of Desert Island Discs episodes (1961–70)
- List of doping cases in cycling
- List of Olympic medalists in cycling (men)
- List of cyclists with a cycling-related death
- Yellow jersey statistics
41,507,141
Kona Lanes
1,133,988,277
Former bowling center in Costa Mesa, California
[ "1958 establishments in California", "2003 disestablishments in California", "Bowling alleys", "Buildings and structures completed in 1958", "Buildings and structures demolished in 2003", "Defunct entertainment venues", "Demolished buildings and structures in California", "Futurist architecture", "Googie architecture in California", "Modernist architecture in California", "Roadside attractions in California", "Sports venues in Costa Mesa, California", "Ten-pin bowling in the United States" ]
Kona Lanes was a bowling center in Costa Mesa, California, that opened in 1958 and closed in 2003 after 45 years in business. Known for its futuristic design, it featured 40 wood-floor bowling lanes, a game room, a lounge, and a coffee shop that eventually became a Mexican diner. Built during the advent of Googie architecture, its Polynesian-inspired Tiki styling extended from the large roadside sign to the building's neon lights and exaggerated rooflines.

When Kona Lanes was demolished in 2003, it was one of the last remaining examples of the Googie style in the region; its sister center, Java Lanes in Long Beach, was razed in 2004. Much of Kona's equipment was sold prior to the demolition; the distinctive sign was saved and sent to Cincinnati, where a portion is on permanent display in the American Sign Museum. Costa Mesa's planning commission approved a proposal to build a department store on the site; following public outcry, those plans were scrapped. In 2010, the still-vacant land was rezoned for senior citizens' apartments and commercial development. Construction on the apartments began ten years after Kona Lanes was demolished.

## History

### Early years

Kona Lanes opened in 1958, featuring the Tiki-inspired signage and architecture that became popular following World War II, including what the Los Angeles Times called its "flamboyant neon lights and ostentatious rooflines meant to attract motorists like moths". The building on Harbor Boulevard near Adams Avenue was one of three in the Googie style that architects Powers, Daly, & DeRosa designed at around the same time; Kona Lanes and its sister center, Java Lanes, used names that suggested South Pacific island locales. Author Andrew Hurley called them "expensive and attractive buildings that screamed, 'Have fun here'", and Kona retained much of that atmosphere over the years. Its massive neon-lit street sign remained for the life of the building, and Kona was the only bowling establishment in the area to reject automatic scoring equipment throughout its existence.

Kona Lanes hosted the Southern California PBA Open twice in 1964; Billy Hardwick won in April and Jerry Hale in December. Longtime general manager Dick Stoeffler, known at the time as the producer and host of TV Bowling Tournament on KTLA, finished third during the televised finals in his own building in December, behind Hale and Hardwick. When Stoeffler rolled back-to-back 300 games in one league session at Kona in 1968, he was one of only four men in the United States to have managed the feat.

### Peak years

Champions who bowled at Kona Lanes during its 45-year history include three-time Professional Bowlers Association Tour winner Jack Biondolillo; six-time male Bowling Writers Association of America Bowler of the Year and future PBA Hall of Famer Don Carter; John Haveles, an Orange County Bowling Hall of Fame inductee who began a stint as Kona's manager in 1974; future Michigan Women's Bowling Association Hall of Famer Cora Fiebig; two-time female BWAA Bowler of the Year Aleta Sill; repeat bowler of the year and WIBC Hall of Famer Donna Adamek; and Barry Asher, the multiple PBA Tour champion and Hall of Fame inductee who later ran the pro shop at Fountain Bowl in nearby Fountain Valley. Kona Lanes and Tustin Lanes hosted nearly 10,000 teams of five players each taking part in the United States Bowling Congress Women's Championships in 1986.
Under Dick Stoeffler's management, Kona Lanes stayed busy 24 hours a day, making Stoeffler one of the most successful proprietors in the country. Stoeffler met his future wife there; other couples had similar experiences, including at least one wedding. The center was often so busy that customers had to make reservations to get a lane during open bowling hours. At its peak, Kona Lanes averaged more than 80 lines on each of its 40 lanes.

Bowling as a participation sport flourished in the early 1960s, but its popularity was diluted due to overbuilding; the number of bowling alleys sanctioned by the then-American Bowling Congress peaked at about 11,000 by mid-decade, and Kona was one of more than 30 in southern California alone. A decline in league bowling starting in the 1980s was also blamed for the downturn, but an AMF Bowling official argued that the customer base remained steady because an increase in open bowling made up for fewer league bowlers.

Jack Mann bought Kona Lanes in 1980 and re-branded it New Kona Lanes the following year. Mann's family owned several bowling centers in the region; he was behind the creation of Fountain Bowl in 1973 and the short-lived Regal Lanes in Orange in 1974. He also owned Tustin Lanes before selling it to his youngest son, Alex. Mann bought Kona Lanes not because he loved bowling, but because it would continue to pay dividends if he was no longer able to work. He later sold it to his son Jack Jr.

### Music

The center's lounge, the Outrigger Room, hosted many local artists over the years. Jazz quintet The Redd Foxx Bbq released four songs recorded there, and Roscoe Holland recorded a set of eight live performances for his album Beyond the Reef. In later years, much of the bowlers' area was taped off for rock concerts and weekend promotions like Club Crush, which proved popular among teenagers and also led to album recordings. A planned event featuring a local punk rock group was shut down by the Costa Mesa Police Department, leading to negative publicity.

### Decline and demolition

Kona Lanes continued to lose business to newer centers, despite efforts to appeal to a more diverse customer base by hosting local music acts, supporting a Polynesian-themed restaurant called Kona Korral, and promoting gimmicks like "nude bowling". Eventually, the property became more valuable than the business. The landowners, C.J. Segerstrom & Sons, gave Jack Mann Jr. a choice: spend \$10–20 million to update the center, or give it up. Mann chose the latter rather than spend such a sum on a site without a long-term lease.

Plans to build a Kohl's department store on the site occupied by Kona Lanes and the already-closed Edwards Cinema Center and Ice Capades Chalet were approved by the city's planning commissioners, but the plans met resistance from neighbors who did not believe the store was a good fit for the area. In February 2003, Mayor Karen Robinson complained to commissioners that Costa Mesa's policy-makers were discarding recreation as part of the quality of residents' lives, and appealed their decision. The city council later rejected the proposal. Meanwhile, the efforts to save Kona Lanes failed; it closed for good in May 2003 and was demolished soon after.

### Rezoning and new use

The 7.5-acre parcel was rezoned in 2010 for senior housing that was expected to provide a new customer base for the restaurants and retailers already in the area and for commercial developments still to come.
The lot had sat unused for about ten years before construction on the 215-unit complex began; Azulón at Mesa Verde opened in 2014. Several dozen palm and eucalyptus trees were saved and replanted on the site.

## Legacy

### Closure and community response

Activity at Kona Lanes increased in its final days, due to the nostalgic value of potential keepsakes. Manager Juanita Johnson said people were asking to buy furniture, office equipment, and more: "Some of that is older than I am." The more substantial items, including the original wood lanes, had been sold off prior to demolition, while dumpster divers hit the parking lot each day, looking for any items of interest.

The loss of Kona Lanes was a repeated topic at political events. One Costa Mesa city council candidate said he made a commitment to public service when the building was torn down with little resistance. Another would-be council member agreed that the demolition should never have happened. In 2012, Costa Mesa planners began to upgrade The Triangle, a retail space along Harbor Boulevard two miles south of the Kona Lanes site. Those plans featured a 10-lane bowling alley that opened in 2014. One city official said the center answered residents' longtime call for a new, upscale bowling facility.

### Historic roadside sign

The huge, neon-lit KONA LANES BOWL sign was featured in publications by the Costa Mesa Historical Society, along with The Book of Tiki and Tiki Road Trip; it also inspired professional paintings, and was one of several Costa Mesa landmarks memorialized in a mural painted on the local Floyd's 99 Barbershop. In 2003, Costa Mesa Planning Commissioner Katrina Foley led an effort to save the sign from the scrap heap. Thanks in part to a private donation, the marquee was trucked 2,500 miles to Cincinnati, one of the first 20 signs accepted by the American Sign Museum. The KONA LANES cabinet was refurbished and put on permanent display; the larger, badly rusted BOWL section tore and collapsed as it was offloaded in Cincinnati, and was not saved.

## See also

- Ten-pin bowling
27,949,474
Walden–Wallkill Rail Trail
1,002,375,761
Rail trail in New York, US
[ "Parks in Orange County, New York", "Parks in Ulster County, New York", "Wallkill Valley Railroad" ]
The Walden–Wallkill Rail Trail, also known as the Jesse McHugh Rail Trail, is a 3.22-mile (5.18 km) rail trail between the village of Walden, New York and the neighboring hamlet of Wallkill. The two communities are located in Orange and Ulster counties, respectively, in upstate New York. The trail, like the Wallkill Valley Rail Trail to the north, is part of the former Wallkill Valley Railroad's rail corridor. The land was purchased by the towns of Montgomery and Shawangunk in 1985 and converted to a public trail. The portion of the trail in Shawangunk was formally opened in 1993 and named after former town supervisor Jesse McHugh. Plans to pave the trail between Walden and Wallkill had been discussed since 2001, and the route was finally paved between 2008 and 2009. The trail includes an unofficial, unimproved section to the north of Wallkill, and is bounded by NY 52 and NY 208.

## History

Stretching 33 miles (53 km) from Montgomery to Kingston, the Wallkill Valley Railroad operated from 1866 until its last regular freight run on December 31, 1977. In the 1980s, Conrail, then the owner of the Wallkill Valley line, attempted to sell the former rail corridor. The towns of Montgomery and Shawangunk – in Orange and Ulster counties, respectively – purchased their sections of the rail line to allow "development of a commercial corridor [as well as] utility easements and access" to a local reservoir. The Montgomery section consisted of 2 miles (3.2 km) from the village of Walden to the town line with Shawangunk, and the Shawangunk section ran 2.3 miles (3.7 km) north from the town line to Birch Road. The purchases were completed in August and October 1985, respectively. In November of that year, the New York State Department of Correctional Services bought 1.4 miles (2.3 km) of the former corridor in Shawangunk's hamlet of Wallkill, near the Wallkill Correctional Facility. This portion extends from Birch Road to the town line with Gardiner. The Shawangunk Correctional Facility was built at that location.

South of Walden, the corridor remains an active rail line operated by the Norfolk Southern Railway. North of the prisons, the former corridor continues as the separate Wallkill Valley Rail Trail. Rail trail enthusiasts have been trying to find a way to combine the two rail trails since the 1990s, and in 2004 the town of Shawangunk commissioned an open space study that identified possible ways to accomplish such a connection. A 2008 Ulster County transportation plan included projects to connect the trails, and the town of Shawangunk is currently considering plans to connect the trails by diverting the corridor along Birch Road. The original route of the corridor is 40 feet (12 m) within the prisons' perimeter fence. The portion of the former corridor running through the center of Wallkill was converted to a road, Railroad Avenue.

The southern part of the route, running from Wallkill to the Montgomery–Shawangunk town line, was officially opened as the Jesse McHugh Rail Trail on June 5, 1993. Jesse McHugh was a former Shawangunk town supervisor. The northern portion of the Shawangunk section, which stretches to the border of the prison grounds, is maintained by the town but not officially part of the trail. In 2001, Shawangunk, Montgomery and the village of Walden began applying for over \$600,000 in TEA-21 grants to create a paved trail between Walden and Wallkill that would be accessible under the Americans with Disabilities Act of 1990 (ADA).
The total cost of paving the trail was expected to be \$750,000, though it eventually ballooned to \$1.5 million. The decision to pave the trail was vehemently opposed by horseback riders, who felt it would endanger them, and was protested at several public meetings by the Mid-Hudson Horse Trails Association. The decision was also opposed by nearby homeowners who believed an increase in trail use would threaten their privacy. In October 2003, Walden, Shawangunk and Montgomery acquired the \$600,000 grant needed to begin paving the trail. Two months later, Bob and Doris Kimball, a couple in Montgomery, donated 20 acres (8.1 ha) of their land to create a park by the trail near Lake Osiris Road. The park is expected to be developed once funds are available to do so.

Nearly \$200,000 in funding to complete pavement of the trail was lost when the outgoing 109th Congress did not approve a 2006 budget bill. In February 2008, Congressman Maurice Hinchey announced the appropriation of \$351,000 to complete the project. Construction began on September 22, 2008, and the paved 3.22-mile (5.18 km) trail opened on May 2, 2009. Flooding from hurricanes in 2011 caused a cave-in along the Montgomery section of the trail. The storms eroded much of the ground beneath the trail, causing the ground to sink. As of July 2012, no repairs had been completed; the cost of fully repairing the trail was estimated to be \$214,000.

## Route

The trail begins at the 9.4-acre (3.8 ha) Wooster Grove Park in the village of Walden, near NY 52. There is a visitor center for rail trail users at the park. The park also contains Walden's former train station, which has since been renovated as a recreational facility. The trail continues 1 mile (1.6 km) north from the trailhead before reaching Lake Osiris Road, continuing another 1+1⁄4 miles (2.0 km) to the Montgomery–Shawangunk town line. Once in Shawangunk, the trail passes by the Borden Estate, a mansion built in 1906 by the granddaughter of Gail Borden. In 1854, Gail Borden patented the process for creating condensed milk; the Borden family subsequently owned a series of milk companies. The mansion is now used by the School of Practical Philosophy for philosophy classes. About 3⁄4 mile (1.2 km) from the town line, the trail reaches its Wallkill trailhead bordering NY 208, directly across the street from the Shawangunk police station. The paved section between Walden and Wallkill is relatively flat, with only a 3% grade. A portion of the former corridor in central Wallkill has since been converted to a road.

An unimproved northern section in Wallkill extends 1+1⁄2 miles (2.4 km) from the intersection of Railroad Avenue and C. E. Penny Drive to Birch Road. Birch Road marks the border between the former corridor and two state prisons. This section passes through private hunting grounds and is "unmarked and has no signs" but is "arguably ... the most scenic" portion of the former Wallkill Valley rail corridor, featuring "splendid" views of the Shawangunk Ridge to the west. While the total length of the trail is officially only about 3 miles (4.8 km), the inclusion of the northern section increases its length to about 4+1⁄2 miles (7.2 km). The trail is used for walking, jogging, bicycling and dog walking.

## See also

- Wallkill Valley Rail Trail – the northern continuation of the former rail corridor
71,434,174
Ælfwynn, wife of Æthelstan Half-King
1,145,367,121
Member of a wealthy Anglo-Saxon family in Huntingdonshire, spouse of Æthelstan Half-King, died 983
[ "10th-century English women" ]
Ælfwynn or Ælfwyn (died 8 July 983) was a member of a wealthy Anglo-Saxon family in Huntingdonshire who married Æthelstan Half-King, the powerful ealdorman of East Anglia, in about 932. She is chiefly known for having been foster-mother to the future King Edgar the Peaceful following his mother's death in 944, when he was an infant. She had four sons, and the youngest, Æthelwine, became the chief secular magnate and leading supporter of the monastic reform movement. Ælfwynn donated her estates for his foundation of Ramsey Abbey in 966 and was probably buried there.

## Life and family

Ælfwynn was the wife of Æthelstan Half-King, Ealdorman of East Anglia, who was called the Half-King because it was believed that he was so powerful that King Edmund I (r. 940–946) and his brother King Eadred (r. 946–955) depended on his advice. He was a strong supporter of the monastic reform movement and a close friend of Dunstan, who was one of its leaders and a future Archbishop of Canterbury and saint. Æthelstan married Ælfwynn soon after he became an ealdorman in 932. Her parents are not known, but she came from a wealthy Huntingdonshire family. The late tenth-century writer Byrhtferth of Ramsey wrote that her son Æthelwine "had a distinguished lineage on his mother's side. In praising her, Archbishop Dunstan said that she and her kindred were blessed." She had a brother, Æthelsige, who acted as a surety when estates in Huntingdonshire were sold to Peterborough Abbey.

Ælfwynn had four sons: Æthelwold, Ælfwold, Æthelsige (his uncle's namesake) and Æthelwine. Æthelwold was appointed an ealdorman for part of his father's territory of East Anglia by Edmund's elder son King Eadwig (r. 955–959) in 956, perhaps in preparation for Æthelstan's retirement shortly afterwards to become a monk at Glastonbury Abbey. In the same year, Æthelwold married Ælfthryth, and after his death in 962 she became the wife of King Edgar the Peaceful (r. 959–975) and the mother of King Æthelred the Unready (r. 978–1016). Ælfwold witnessed Edgar's charters as a thegn from 958 to 972. Ælfwynn's third son, Æthelsige, also witnessed charters as a thegn from 958. He was part of Edgar's inner circle, serving as his camerarius (chamberlain) until 963.

King Edmund's younger son, the future King Edgar, was born around 943 and his mother Ælfgifu died in 944. Edgar was sent to be fostered by Ælfwynn, which the Medieval Latin expert Michael Lapidge sees as a "token of her power and influence". It enabled Æthelstan's family to strengthen its ties with the royal family. Edgar was probably brought up in Huntingdonshire, which was the location of Ælfwynn's estates and later of Æthelwine's home. In about 958 Edgar gave Ælfwynn a ten-hide estate at Old Weston in Huntingdonshire as thanks. The historian Robin Fleming comments that the ætheling (prince) was profoundly influenced by his upbringing:

> Thus, the ætheling was reared in the household of one of his father's closest allies and raised among the Half-King's own brothers and sons, five of whom at one time or another were ealdormen. Since Half-King was an intimate of the reform circle, in particular with St Dunstan, Edgar came of age in an atmosphere dominated by the ideals of monastic reform. Some of Edgar's affection for monks and his determination to revive Benedictine monasticism must have been acquired in this household of his youth.

Ælfwynn's youngest son, Æthelwine, was a few years older than Edgar and probably brought up with him.
Æthelwine was appointed ealdorman of East Anglia when Æthelwold died in 962, and he became the dominant lay figure in government, attesting charters in first place among the ealdormen, following the death of his chief rival, Ælfhere, Ealdorman of Mercia, in 983. After Æthelsige left Edgar's service, he was active in his brother's administration of East Anglia until he died on 13 October 987.

Æthelwine was called Dei Amicus (friend of God) because he was the leading lay patron (after Edgar) of the monastic reform movement, and in 966 he founded Ramsey Abbey, together with Oswald, the Bishop of Worcester and later Archbishop of York. Ælfwynn supported Ramsey in preference to the religious houses favoured by her husband, and her estates, including the property donated by Edgar, formed part of the endowment for Ramsey. She may have played a crucial role in its establishment. Her second son, Ælfwold, was a strong supporter of monastic reform who ordered the killing of a man who illegally claimed property belonging to Peterborough Abbey. He and his wife were also benefactors of Ramsey and he was buried there following his death on 14 April 990. Æthelwine ceased attesting charters in 990 and he died on 24 April 992 after a long illness. He was also buried at Ramsey.

## Death

Ælfwynn died on 8 July 983. Her husband was buried at Glastonbury Abbey, whereas Ælfwynn was probably buried at Ramsey. She was recorded in the Ramsey necrology as "our sister", the donor of Old Weston, and her death was commemorated each year on 8 July, the same day as King Edgar.
4,179,247
Dave Gallaher
1,171,859,532
New Zealand rugby union footballer
[ "1873 births", "1917 deaths", "Irish emigrants to New Zealand", "New Zealand international rugby union players", "New Zealand military personnel killed in World War I", "New Zealand military personnel of the Second Boer War", "People from Ramelton", "Ponsonby RFC players", "Rugby union hookers", "Rugby union players from Auckland", "Rugby union players from County Donegal", "Rugby union wing-forwards", "World Rugby Hall of Fame inductees" ]
David Gallaher (30 October 1873 – 4 October 1917) was an Irish-born New Zealand rugby union footballer best remembered as the captain of the "Original All Blacks"—the 1905–06 New Zealand national team, the first representative New Zealand side to tour the British Isles. Under Gallaher's leadership the Originals won 34 out of 35 matches over the course of the tour, including legs in France and North America; the New Zealanders scored 976 points and conceded only 59. Before returning home he co-wrote the classic rugby text The Complete Rugby Footballer with his vice-captain Billy Stead. Gallaher retired as a player after the 1905–06 tour and took up coaching and selecting; he was a selector for both Auckland and New Zealand for most of the following decade.

Born in Ramelton, Ireland, Gallaher migrated to New Zealand with his family as a small child. After moving to Auckland, in 1895 he joined Ponsonby RFC and was selected for his province in 1896. In 1901–02 he served with the New Zealand Contingent in the Anglo-Boer War. He first appeared on the New Zealand national team for their unbeaten tour of Australia in 1903, and played in New Zealand's first ever Test match, against Australia in Sydney. The Originals Gallaher captained during 1905–06 helped to cement rugby as New Zealand's national sport, but he was relentlessly pilloried by the British press for his role as wing-forward. The use of a wing-forward, which critics felt was a tactic to deliberately obstruct opponents, contributed to decades of strain between the rugby authorities of New Zealand and the Home Nations; the International Rugby Football Board (IRFB) effectively outlawed the position in 1931.

During the First World War, Gallaher enlisted in the New Zealand Division to fight in Europe. He was fatally wounded by shrapnel to the head in 1917 at the Battle of Passchendaele in Belgium. He has since been inducted into the World Rugby Hall of Fame, International Rugby Hall of Fame, and the New Zealand Sports Hall of Fame. A number of memorials exist in Gallaher's honour, including the Gallaher Shield for the winner of Auckland's club championship, and the Dave Gallaher Trophy contested between the national teams of France and New Zealand.

## Early life

Dave Gallaher was born as David Gallagher on 30 October 1873 at Ramelton, County Donegal, Ireland, the third son of James Henry Gallagher, a 69-year-old shopkeeper, and his 29-year-old wife, Maria Hardy Gallagher (née McCloskie). James was a widower who had married Maria in 1866, a year after the death of his first wife. James had two children from his first marriage, and David was the seventh from his marriage to Maria. The couple had three more children after David, but of their ten offspring, three died in infancy. The couple's other offspring were: Joseph (born 1867), Isabella (1868), James (1869), Maria (called Molly, 1870), Jane (1871), Thomas (1872), William (1875), Oswald (1876), and James Patrick (1878). David was baptised as a Presbyterian in the First Ramelton Meeting House on 8 January 1874.

After struggling with his drapery business in Ramelton, James decided to emigrate with his family to New Zealand as part of George Vesey Stewart's Katikati Special Settlement scheme. In May 1878 the Gallaghers – minus the sick James Patrick, who at eight weeks old was too weak to make the trip – sailed from Belfast on the Lady Jocelyn for Katikati in the Bay of Plenty.
On arriving in New Zealand, the family altered their surname to "Gallaher" in an effort to reduce confusion over its spelling and pronunciation. The Gallaher couple and their six children arrived in Auckland after a three-month voyage, and from there sailed to Tauranga in the Bay of Plenty, before their final voyage to Katikati. On arrival they found the settlement scheme was not what they had envisaged or been promised: the land allocated to the family required enormous work to be broken in before being suitable for farming, there was no easy access to water, and the settlement was very hilly. It had been hoped that James would be employed as the agent for the Donegal Knitting Company in New Zealand, which was to be established by Lord George Hill. But Hill died unexpectedly and his successor did not support the initiative. As the family's poor quality land was insufficient to make a living, the children's mother Maria soon became the chief breadwinner after she obtained a position teaching for £2 a week at the new No. 2 School.

In January 1886 David spent a week in Auckland hospital undergoing surgery to treat stunted muscles in his left leg which had led to curvature of his spine. His mother became ill that same year, and in 1887 lost her teaching position. Her condition worsened and she died of cancer on 9 September 1887. With a father in his seventies, the 13-year-old David was compelled to leave school so he could help his brothers to support the family. He took a job with a local stock and station agent. The older Gallaher children had to work to prevent the local authorities from putting their younger siblings up for adoption. In 1889, with the exception of William who remained in Katikati, the family joined Joseph in Auckland, where he had found work. David – who was by now 17 years old – was able to obtain work at the Northern Roller Mills Company, and was soon a member of the firm's junior cricket team. In the late 1890s Gallaher took employment at the Auckland Farmers' Freezing Company as a labourer; by the time of his deployment for the First World War two decades later he had risen to the position of foreman. His work required the constant handling of heavy animal carcasses, which helped him build upper body strength and kept him fit.

## Early rugby career

Gallaher first gained attention for his talents as a rugby player while living in Katikati. After moving to Auckland, he played junior rugby for Parnell from 1890. He joined the Ponsonby District Rugby Football Club in 1895, after the family moved to Freemans Bay following Joseph's marriage to Nell Burchell. Gallaher, who played at hooker, was selected for an Auckland "B" side that year, and made his debut for Auckland against the touring Queensland team on 8 August 1896. The Aucklanders won 15–6. Gallaher was retained for Auckland's remaining fixtures that season: defeats to Wellington, Taranaki and Otago.

In 1897, Gallaher's Ponsonby won eight of their nine matches en route to the Auckland club championship. He was selected to play for Auckland against the New Zealand representative side that had just completed a tour of Australia. The Aucklanders won 11–10 after scoring a late try; it was only New Zealand's second loss of their eleven-match tour. Later that year Gallaher was selected for Auckland's three-match tour where they defeated Taranaki, Wellington and Wanganui. Wellington's defeat was their first loss at home since the formation of the Wellington Rugby Football Union in 1879.
The following season was less eventful for Gallaher – he played much of the season for Ponsonby, but injury prevented his selection for Auckland. After missing the 1898 season for Auckland, Gallaher continued to be selected for the union throughout 1899 and 1900. The side was undefeated over this time; he played for them twice in 1899, and in all four matches in 1900. He represented Auckland a total of 26 times over his career.

## Anglo-Boer War

In January 1901 Gallaher joined the Sixth New Zealand Contingent of Mounted Rifles for service in the Anglo-Boer War. When enlisting he gave his date of birth as 31 October 1876, three years later than the actual date. It is unknown why he did this but the later date continued to be used in official records for the rest of his life. Gallaher was given a send-off dinner by his Ponsonby club before the contingent departed from Auckland on 31 January. After disembarking in South Africa at East London on 14 March 1901, Gallaher's contingent immediately embarked for Pretoria, and it was there that, as part of forces under the command of General Herbert Plumer, they set about their task of "rid[ding] the Northern Transvaal of Boer guerrillas and sympathizers." A member of the contingent's 16th (Auckland) Company, he served in the advanced guard, who scouted ahead of the main force.

In October 1901 Gallaher contracted malaria, and was hospitalised in Charlestown, Natal. In a letter he composed to his sister while recovering he wrote:

> we have been all over S[outh] Africa pretty well I believe, on the trek the whole time and it looks as if we will be trekking till the end of the Chapter. We have a fair share of the fighting all the time and I am still alive and kicking although I have had a couple of pretty close calls, one day I thought I would have to say good bye to old New Zealand but I had my usual luck and so came out all right

Between late December 1901 and early January 1902 Gallaher and his contingent were involved in a number of skirmishes. He described one incident where he had several Boer fighters in his sights, but did not have "the heart" to fire at them while they rescued one of their comrades. Describing a later encounter to his sister, Gallaher wrote: "We had a total of 22 killed and 36 injured and a few taken prisoners[;] it was a pretty mournful sight to see the Red Cross bearers cruising around the field fetching all the dead and wounded who were laying all over the place".

By March 1902 Gallaher had reached the rank of squadron sergeant-major, and his contingent was on its way to Durban. There the unit boarded ship for New Zealand, but Gallaher stayed behind, transferring to the Tenth New Zealand Contingent. His new unit did not see active service in South Africa, and he returned with them to New Zealand in August 1902. For his service Gallaher received the Queen's South Africa Medal (Cape Colony, Orange Free State, and Transvaal Clasps), and King's South Africa Medal (South Africa 1901 and South Africa 1902 Clasps).

## Resumption of his rugby career

During his time in South Africa Gallaher did play some rugby, including captaining the New Zealand military team that played ten games and won the rugby championship among the British forces. But he was not fit enough to play immediately upon his return to New Zealand, and so did not resume playing rugby for Ponsonby until the 1903 season. When he did return for his club, for the first match of the year, he was described as "the outstanding forward" in a comprehensive defeat of Parnell.
Despite having missed two seasons of provincial rugby, Gallaher was included in the 22-man New Zealand representative squad to tour Australia during 1903. He was the first Ponsonby player ever to play for the New Zealand team, commonly known as the "All Blacks". The 1903 team to Australia was, according to Winston McCarthy's 1968 history of the All Blacks, "still regarded by old-timers as the greatest team to ever leave New Zealand." The tour did not start well – a preliminary match in New Zealand, against Wellington, was lost 14–5, though Gallaher did score his first try for his country. Gallaher played eight matches – the first four as hooker and the remainder as wing-forward – out of eleven during the six-week tour. The party was captained by the veteran Otago player Jimmy Duncan, who was widely recognised as a master tactician. The first match in Australia, against New South Wales, was won 12–0 by the New Zealanders, despite their having a man sent off. After playing a Combined Western Districts side, New Zealand played a second match against New South Wales. New Zealand won again, but only 3–0 on a flooded pitch at Sydney Cricket Ground. The side continued touring the state before making their way north to Queensland, where they twice played the state side. The New Zealanders then returned to New South Wales, where the first-ever Australia–New Zealand rugby union Test match took place in Sydney. Since the selection of the first New Zealand team in 1884, inter-colonial games had been played against New South Wales (ten New Zealand wins from thirteen matches), and Queensland (seven New Zealand wins from seven), but none had been contested against a combined Australian side. The match – won 22–3 by the New Zealanders, who scored three tries to nil – marked Gallaher's first international cap. The last match of the tour was against New South Wales Country; New Zealand won 32–0. On their ten-match tour of Australia, New Zealand had scored 276 points and conceded only 13. Back in New Zealand, Gallaher was selected for the North Island in his first ever Inter-Island match; the South won 12–5. He then continued playing for Auckland, who were conducting a tour of both islands. Gallaher appeared in six of their seven matches, against Taranaki, Wellington, Southland, Otago, Canterbury, and South Canterbury. Auckland lost the first two matches, but won the others. In 1904 the first Ranfurly Shield match was played. The shield, a provincial challenge trophy won by defeating the holder, was to become the most prestigious trophy in domestic New Zealand rugby. Due to their unmatched provincial record at the time Auckland were awarded the shield. The first shield challenge was played against Wellington, who were not expected to pose much of a threat. Auckland had not lost at home in six years, but, with Gallaher in the side, were upset 6–3 by the Wellingtonians. Gallaher was then selected for the New Zealand team that faced the touring British Isles in what was New Zealand's first Test match on home soil. The British team were conducting a tour of Australia and New Zealand, and had finished their Australian leg unbeaten. Jimmy Duncan, who was coaching New Zealand after retiring as a player, said before the historic match: "I have given them directions. It's man for man all the time, and I have bet Gallaher a new hat that he can't catch [Percy] Bush. Bush has never been collared in Australia but he'll get it today." 
The match was tied 3–3 at half-time, but New Zealand were the stronger side in the second half and eventually won 9–3. Gallaher was praised by the press for his all-round display at wing-forward, but in particular for his successful harassment of the British Isles' half-back Tommy Vile. The defeat by New Zealand was the first tour loss for the British side, who then drew with a combined Taranaki-Wanganui-Manawatu side before travelling to Auckland. Gallaher played for Auckland against the tourists and scored one of the tries in their 13–0 victory. He was part of a forward pack that dominated their opponents, and again he troubled Vile; his tackling of Vile and Bush killed many British attacks. The rugby historian Terry McLean would write in 1987 that "his display could be ranked with the finest exhibitions of wing-forward play". Gallaher represented Auckland once more in 1904, a 3–0 loss to Taranaki.

## 1905 tour

### Background and preparations

At the end of the 1904 season the New Zealand Rugby Football Union (NZRFU) suspended Gallaher from playing after a disagreement over a claim for expenses he had submitted to the Auckland Rugby Football Union for travel to play in the match against the British Isles. Eventually the matter was resolved when, under protest, Gallaher repaid the disputed amount. This settlement, coupled with his performance in the 26–0 North Island win over the South Island in the pre-tour trial, allowed Gallaher to be considered for selection for New Zealand's 1905–06 tour of Europe and North America. The NZRFU had been trying to secure an invitation to send a team to Britain for some time, and were finally able to obtain satisfactory financial guarantees to proceed in 1905. This was the first representative New Zealand team to undertake such a tour, though a privately organised team, the New Zealand Natives, had preceded them in 1888–89. The NZRFU named Gallaher captain for the tour, with Billy Stead as vice-captain. A week into the voyage to Britain aboard the SS Rimutaka, rumours circulated that some of the southern players were unhappy with the appointment of Gallaher, and with what they perceived as an Auckland bias in the squad. The dissidents contended that the captain and vice-captain should have been elected by the players, as they had been on the 1897 and 1903 tours to Australia. Gallaher recognised the damage factionalism might do to the team and offered to resign, as did the vice-captain Stead. Although the team's manager refused to accept the resignations, the players still took a vote—17 out of 29 endorsed the NZRFU's selections. During the voyage to England the team conducted training drills on the ship's deck; for this the forwards were coached by Gallaher and fellow player Bill Cunningham, while Stead was in charge of the backs. Consequently, the services of the NZRFU-appointed coach Jimmy Duncan were not used; his appointment had caused opposition from many in the squad who believed his expertise was not required, and that an extra player should have been taken on tour instead. After a six-week voyage, the team arrived in Plymouth, England, on 8 September 1905.

### Early tour matches

The New Zealanders' first match was against the Devon county side at Exeter. A close contest was expected, but New Zealand ran out 55–4 winners, scoring twelve tries and conceding only a drop-goal.
Reaction to the match was mixed – the team were accompanied by a cheering crowd and marching band following the win, but Gallaher's play at wing-forward provoked some criticism in the press. The use of a wing-forward was a distinctive feature of New Zealand play. Instead of packing eight men into the scrum, as was normal elsewhere, New Zealand used seven – the man withdrawn, the wing-forward, fed the ball into the scrum and then bound onto one of his own hookers while the ball progressed through the scrum to the half-back. With the wing-forward bound to the side of the scrum, the opposing half-back would then have to manoeuvre past him to tackle the player with the ball. This increased the amount of time the half-back would have in possession of the ball before his opposite could tackle him. The use of this new tactic by New Zealand meant that Gallaher, the team's wing-forward, was repeatedly accused by the English of obstruction, though the referee Percy Coles, an official of the English Rugby Football Union (RFU), rarely penalised him in the Devon match. The Originals' fullback Billy Wallace posited that New Zealand's superior scrum made Gallaher's style of play more prominent. Unlike British and Irish teams of the time, New Zealand employed specialist positions for their forwards. Despite often facing an extra man in the scrum, the New Zealanders "drove like a cleaver through British forward packs". Gallaher later said: "I think my play is fair – I sincerely trust so – and surely the fact that both Mr Percy Coles and Mr D. H. Bowen – two of the referees of our matches, and fairly representative of English and Welsh ideas, have taken no exception to it ought to have some weight." The British press, looking to find fault with New Zealand's play, continued to criticise Gallaher throughout the tour. Gallaher believed the key to his side's success was a difference in playing styles, while Winston McCarthy believed the unique backline formation to be a major factor. Following the opening match the "All Blacks" – as the New Zealand team came to be known – defeated Cornwall and then Bristol, both 41–0. They then defeated Northampton 32–0. The tour continued in much the same way, with the All Blacks defeating Leicester, Middlesex, Durham, Hartlepool Clubs and Northumberland; in nearly all cases the defeats were inflicted without conceding any points (the one exception being Durham, who scored a try against New Zealand). The New Zealanders then comfortably defeated Gloucester and Somerset before facing Devonport Albion, the incumbent English club champions, who had not lost at home in 18 months. New Zealand beat them 21–3 in front of a crowd of 20,000. Gallaher scored the All Blacks' final try, an effort described by the Plymouth Herald as "... a gem. It was a tearing rush for about fifty yards with clockwork-like passing all the way." New Zealand won their next seven matches, including victories over Blackheath, Oxford University and Cambridge University. Billy Wallace contended that the New Zealanders' form peaked with the win over Blackheath; he recalled that "after this game injuries began to take their toll and prevented us ever putting in so fine a team again on the tour." By the time the All Blacks played their first Test match, against Scotland, the team had played and won nineteen matches, and scored 612 points while conceding only 15.
### Scotland, Ireland and England internationals

The Scottish Football Union (SFU), the governing body for rugby union in Scotland, did not give the New Zealanders an official welcome, and sent only one official to greet them on their arrival in Edinburgh. In addition, the SFU refused a financial guarantee for the match, promising the gate receipts to the New Zealanders instead; this meant that the NZRFU had to take on all monetary responsibilities for the match. One reason for the cold reception from the SFU may have been negative reports from David Bedell-Sivright, who was Scotland's captain and had also captained the British Isles team on their 1904 tour of New Zealand. Bedell-Sivright had reported unfavourably on his experiences in New Zealand the previous year, especially regarding the wing-forward play of Gallaher. When the time for the Scotland Test arrived, it was discovered that the ground had not been covered for protection from the elements and had frozen over. The SFU wanted to abandon the match, but Gallaher and the tour manager George Dixon contended that the weather would improve enough for the pitch to thaw, and the match was eventually allowed to proceed. The Test was closely contested, with Scotland leading 7–6 at half-time, but the All Blacks scored two late tries to win 12–7; despite the close score-line, the New Zealanders were clearly the better of the two sides. Four days later the tourists played a West of Scotland selection, where they received a much warmer reception than for the Scotland match, then travelled via Belfast to Dublin, where they faced Ireland. Gallaher did not play in either match due to a leg injury suffered during the Scotland Test. New Zealand won the Ireland match 15–0, then defeated a team representing Munster province. By the time of New Zealand's next game, against England in London, Gallaher had recovered from his injury enough to play. Between 40,000 and 80,000 spectators saw the match. The All Blacks scored five tries (four by Duncan McGregor, playing at wing) to win 15–0. According to the England player Dai Gent, the victory would have been even greater had the match conditions been dry. "One cannot help thinking that England might have picked a stronger side," said Gallaher. "From our experience, we did not think that this side was fully representative of the best men to be found in the country." Observers noted that Gallaher still seemed to be suffering from his leg injury during the match. New Zealand played three more matches in England – wins over Cheltenham, Cheshire, and Yorkshire – before travelling on to Wales.

### Wales

Wales were the dominant rugby country of the four Home Nations, and in the middle of a "golden age" at the time. Many commentators in both New Zealand and the United Kingdom felt the Welsh Test was the best chance of stopping an All Blacks clean sweep. As such, the game was billed as the "Match of the Century" even before the tourists had left New Zealand. Gallaher and his team faced them three days after the Yorkshire match. The All Blacks had thus far played 27 matches on tour, scoring 801 points while conceding only 22, and all in only 88 days. They were struggling to field fifteen fit players; a number of their best players, including Stead, were unavailable due to injury. The match was preceded by an All Black haka, to which the crowd responded with the Welsh national song "Land of my Fathers".
Wales had developed tactics to negate the seven-man New Zealand scrum, and removed a man from their scrum to play as a "rover", equivalent to Gallaher's wing-forward position. Gallaher was consistently penalised by the Scottish referee, John Dallas, who held that the New Zealander was feeding the ball into the scrum incorrectly. This eventually compelled Gallaher to instruct his team not to contest the scrums, and therefore give Wales possession following each scrum. Bob Deans, playing at wing for New Zealand that day, later said that Dallas had gone "out to penalise Gallaher – there is no doubt about that". Teddy Morgan scored an unconverted try for Wales shortly before half-time to give the home side a 3–0 lead. The New Zealand backs had been poor in the first half, and the side's general form was well below that of earlier in the tour. However New Zealand were generally perceived to be the better side in the second half, with the performance of the Welsh fullback Bert Winfield keeping his team in the game. The most controversial moment of the tour happened late in the second half. Wallace recovered a Welsh kick and cut across the field, and with only Winfield to beat, passed to the New Zealand wing Deans. What happened next has provoked intense debate: Deans was tackled by the Welsh and either fell short of the try-line, or placed the ball over it before being dragged back. Dallas, who had dressed in heavy clothing and was struggling to keep up with the pace of the game, was 30 yards (27 m) behind play. When he arrived he ruled that Deans was short of the try-line, and so did not award New Zealand a try. Play continued, but the All Blacks could not score, and Wales won 3–0. This was New Zealand's first loss of the tour. Following the match Gallaher was asked if he was unhappy with any aspect of the game; he replied that "the better team won and I am content." When asked about Dallas's refereeing, he said: "I have always made it a point never to express a view regarding the referee in any match in which I have played". Gallaher was gracious in defeat, but Dixon was highly critical of both Dallas and the Welsh newspapers, who he accused of "violently and unjustly" attacking New Zealand's captain. Gallaher would later admit that he had been annoyed by this criticism, which he found unfair; he also pointed out that though the Welsh condemned the wing-forward position, they had themselves adopted some elements of it. Later during the tour, when discussing the issue of his feeding the ball into the scrum, he said: > No referee could accuse me throughout the tour of putting the ball in unfairly or of putting 'bias' on it. I would be quite content to accept the verdict on such referees as Mr. Gil Evans or Mr. Percy Coles on the point. There were times when the scrum work was done so neatly that as soon as the ball had left my hands the forwards shoved over the top of it, and it was heeled out, and Roberts was off with it before you could say 'knife'. It was all over so quickly that almost everyone – the referee sometimes included – thought there was something unfair about it, some 'trickery' and that the ball had not only been put in but passed out unfairly. People here have been accustomed when the ball was put into the scrum to see it wobbling about and frequently never coming out in a proper way. How can a man possibly put 'bias' on a ball if he rolls it into the scrum? 
The only way to put my screw on a ball would be, I would say, to throw it straight down, shoulder high, on to its end, so that it may possibly bounce in the desired direction. I have never done that – in fact, it can’t be done in the scrum and if I had ever attempted it I should have expected to be penalised immediately. Four more matches were contested in Wales, with Gallaher appearing in three. He played in the match against Glamorgan, won by New Zealand 9–0, but had his finger bitten, which was serious enough for him to miss the fixture against Newport. He returned to face Cardiff, the Welsh champions, on Boxing Day. Gallaher was again booed by the Welsh crowd, and once more the All Blacks were troubled in the scrum, this time after losing a player to injury. The New Zealanders won, but narrowly; Gallaher asserted after the match that Cardiff were the strongest club side they had met during the tour. New Zealand then faced Swansea in their last match in the British Isles. Gallaher again struggled to field a fit side, and at 3–0 down late in the match they were heading for their second defeat on tour. Wallace kicked a drop-goal – then worth four points – late in the game to give the All Blacks a narrow 4–3 victory. ### France, North America, and return The side departed Wales and travelled to Paris, where they faced France on 1 January 1906, in the home side's first ever Test match. The All Blacks led 18–3 at half time. After the French scored their second try, giving them 8 points – the most any team had scored against the All Blacks – the New Zealanders responded with six unanswered tries to win 38–8. They then returned to London, where they learned that New Zealand's Prime Minister, Richard Seddon, had arranged for them to return home via North America. Not all of the players were keen on the idea, and four did not make the trip, but the new plans did give the team over two weeks to spend in England before their departure. Before the New Zealand squad left Britain for North America, the English publisher Henry Leach asked Stead and Gallaher to author a book on rugby tactics and play. They finished the task in under a fortnight and were each paid £50. Entitled The Complete Rugby Footballer, the book was 322 pages long and included chapters on tactics and play, as well as a summary of rugby's history in New Zealand including the 1905 tour. It was mainly authored by Stead, a bootmaker, with Gallaher contributing most of the diagrams. Gallaher almost certainly made some contributions to the text, including sections on Auckland club rugby, and on forward play. The book showed the All Blacks' tactics and planning to be superior to others of the time, and according to Matt Elliott is "marvellously astute"; it received universal acclaim on its publication. According to a 2011 assessment by ESPN's Graham Jenkins, it "remains one of the most influential books produced in the realms of rugby literature". The New Zealanders travelled to New York City, where they played an exhibition game, then on to San Francisco. There they played two official matches against British Columbia, and won both easily. The tour programme thus ended; New Zealand had played 35 games and lost only once. Gallaher had played in 26 of those matches, including four Tests. Over their 32 matches in the British Isles New Zealand scored 830 points and conceded 39; overall they scored 976 points and conceded only 59. 
On their arrival back in New Zealand on 6 March 1906, the All Blacks were welcomed by a crowd of 10,000 before being hosted at a civic reception in Auckland. Invited to speak at the reception, Gallaher said: "We did not go behind our back to talk about the Welshman, but candidly said that on that day the better team had won. I have one recommendation to make to the New Zealand [Rugby] Union, if it was to undertake such a tour again, and that is to play the Welsh matches first."

### Aftermath and impact

The 1905–06 Originals are remembered as perhaps the greatest of All Black sides, and set the standard for all their successors. They introduced a number of innovations to Britain and Ireland, including specialised forward positions and unfamiliar variations in attacking plays. But while their success helped establish rugby as New Zealand's national sport and fed a growing sporting nationalism, the controversial wing-forward position contributed to strained ties with the Home Nations' rugby authorities. British and Irish administrators were also wary of New Zealand's commitment to the amateur ethos, and questioned their sportsmanship. According to the historian Geoffrey Vincent, many in the traditional rugby establishment believed that: "Excessive striving for victory introduced an unhealthy spirit of competition, transforming a character-building 'mock fight' into 'serious fighting'. Training and specialization degraded sport to the level of work." The success of the Originals prompted plans for a professional team of players to tour England and play Northern Union clubs in what is now known as rugby league. Unlike rugby league, which was professional, rugby union was strictly amateur at the time, and in 1907 a professional team from New Zealand known as the "All Golds" (originally a play on "All Blacks") toured England and Wales before introducing rugby league to both New Zealand and Australia. According to historian Greg Ryan, the All Golds tour "confirmed many British suspicions about the rugby culture that had shaped the 1905 team." These factors may have contributed to the gap between All Black tours of the British Isles – they next toured in 1924. The NZRFU was denied representation on the International Rugby Football Board (IRFB) – composed exclusively of English, Irish, Scottish and Welsh members – until 1948. After complaining about the wing-forward for years, the Home Nations-administered IRFB made a series of law changes that effectively outlawed the position in 1931.

## Auckland and All Black selector

Gallaher retired from playing after the All Blacks' tour, but remained involved in the sport as a coach and selector. He coached at age-group level for Ponsonby and in 1906 succeeded Fred Murray as sole selector of the Auckland provincial team. He was Auckland selector until 1916; over this time Auckland played 65 games, won 48, lost 11 and drew 6. Gallaher did make a brief comeback as a player – travelling as the selector of an injury-depleted Auckland team, he turned out against Marlborough at Blenheim in 1909; Marlborough won 8–3. He also played against the Maniapoto sub-union just over a week later. Auckland held the Ranfurly Shield from 1905 to 1913, successfully defending it 23 times. The team struggled to retain the shield during 1912 and 1913 and eventually lost it to Taranaki in a 14–11 defeat. During Gallaher's tenure as selector Auckland inflicted an 11–0 defeat on the touring 1908 Anglo-Welsh side, defeated the New Zealand Māori in 1910, and beat Australia 15–11 in 1913.
Gallaher was also a national selector from 1907 to 1914, and with George Nicholson co-coached the All Blacks against the 1908 Anglo-Welsh team. A number of Gallaher's team-mates from the 1905–06 tour were included in the New Zealand squad for the series; of three Tests, the All Blacks won two and drew the other. During Gallaher's tenure as a national selector, New Zealand played 50 matches, won 44, lost four and drew two. This included 16 Tests, of which only one was lost and two drawn.

## First World War

Although exempt from conscription due to his age, Gallaher enlisted in May 1916. While awaiting his call-up to begin training, he learnt that his younger brother, Company Sergeant-Major Douglas Wallace Gallaher, had been killed while serving with the 11th Australian Battalion at Laventie near Fromelles on 3 June 1916. Douglas had been living in Perth, Australia, prior to the war and had previously been wounded at Gallipoli. Biographer Matt Elliott describes it as a "myth" that Gallaher enlisted to avenge his younger brother; rather, he claims it was most likely due to "loyalty and duty". After enlisting and completing his basic training at Trentham he was posted to the 22nd Reinforcements, 2nd Battalion, Auckland Regiment, within the New Zealand Division. Gallaher left New Zealand aboard the Aparima in February 1917 and reached Britain on 2 May. Gallaher was a member of the ship's Sports Committee and spent time organising and practising for a planned rugby match at the Cape of Good Hope – it is unknown if the match ever took place. After arriving in England he was promoted to the rank of temporary sergeant and dispatched to Sling Camp for further training. His rank was confirmed as sergeant on 6 June 1917. Gallaher's unit fought in the Battle of Messines, near La Basse Ville, and in August and September 1917 they trained for the upcoming Passchendaele offensive. During the Battle of Broodseinde on 4 October 1917 Gallaher was fatally wounded by a piece of shrapnel that penetrated his helmet, and he died later that day at the 3rd Australian Casualty Clearing Station, Gravenstafel Spur. He was 43 years old. Dave Gallaher is buried in grave No. 32513 at Nine Elms British Cemetery, which is west of Poperinge on the Helleketelweg, a road leading from the R33 Poperinge ring road in Belgium. His regulation gravestone, bearing the silver fern of New Zealand, incorrectly gives his age as 41. New Zealand sides touring Europe have since regularly visited his grave site. For his war service Gallaher was posthumously awarded the British War Medal and the Victory Medal. His brother Henry, who was a miner, served with the Australian 51st Battalion and was killed on 24 April 1917. Henry's twin brother, Charles, also served in the war and survived being badly wounded at Gallipoli.

## Personal life

On 10 October 1906 Gallaher married "Nellie" Ellen Ivy May Francis at All Saints Anglican Church, Ponsonby, Auckland. Eleven years younger than Gallaher, Nellie was the daughter of Nora Francis and the sister of Arthur ('Bolla') Francis – a fellow rugby player. For many years prior to the marriage Gallaher had boarded at the Francis family home where he had come to know Nellie. Both had also attended the All Saints Anglican Church where Nellie sang in the choir. With his limited income, and frequent absences from work playing rugby, Gallaher found boarding his best accommodation option. On 28 September 1908 their daughter Nora Tahatu (later Nora Simpson) was born. Nellie Gallaher died in January 1969.
Gallaher's brother-in-law Bolla Francis played for Ponsonby, Auckland and New Zealand sides for a number of years, including when Gallaher was a selector. In 1911, at age 29 and in the twilight of his All Blacks career, Francis decided to switch to the professional sport of rugby league. Francis went on to represent New Zealand in rugby league, making him a dual-code international. It is unlikely that his switch to rugby league was made without Gallaher's knowledge. Francis did eventually return to rugby union as a coach. Gallaher was also a member of the fraternal organisation the United Ancient Order of the Druids, and attended meetings fortnightly in Newton, not far from Ponsonby. He also took part in several sports besides rugby, including cricket, yachting and athletics.

## Memorial and legacy

In 1922 the Auckland Rugby Football Union introduced the Gallaher Shield in his honour; it has since been awarded to the winner of the union's premier men's club competition. Ponsonby – Gallaher's old club – have won the title more times than any other club. At international level, New Zealand and France contest the Dave Gallaher Trophy, which was first awarded when New Zealand defeated France on Armistice Day in 2000. In 2011 New Zealand's then oldest living All Black, Sir Fred Allen, unveiled a 2.7-metre (8 ft 10 in) high bronze statue of Gallaher beside one of the entrances at Eden Park in Auckland. The statue was created by Malcolm Evans. Gallaher has been inducted into the International Rugby Hall of Fame, the World Rugby Hall of Fame, and the New Zealand Sports Hall of Fame. In 2005 members of the All Blacks witnessed the unveiling of a plaque at Gallaher's birthplace in Ramelton, which was presented in conjunction with the renaming of Letterkenny RFC's home ground to Dave Gallaher Memorial Park. Gallaher's name is also incorporated into the club's crest. The ground was upgraded following its renaming, and in 2012 the Letterkenny section of the ground was opened by former All Black, and Ponsonby stalwart, Bryan Williams. An Ireland-produced documentary about Gallaher's life, The Donegal All Black, was aired in 2015. Later that year, a jersey worn by Gallaher during the 1905 tour of the British Isles was sold at auction in Cardiff for £180,000—nearly 10 times the previous record auction price for a rugby jersey.

## Leadership and personality

"Gallaher played many dashing games," the British newspaper The Sportsman reported after his death, "and led his side from one success to another until they were deemed invincible. He was a veritable artist, who never deserved all the hard things said about him, especially in South Wales. A great player, a great judge of the game". Gallaher's military experience gave him an appreciation for "discipline, cohesion and steadiness under pressure." He was, however, quiet, even dour, and preferred to lead by example. He insisted players spend an hour "contemplating the game ahead" on match days, and also that they pay attention to detail. Original All Black Ernie Booth wrote of Gallaher: "To us All Blacks his words would often be, 'Give nothing away; take no chance.' As a skipper he was somewhat a disciplinarian, doubtless imbibed from his previous military experience in South Africa. Still, he treated us all like men, not kids, who were out to 'play the game' for good old New Zealand." Another contemporary said he was "perhaps not the greatest of wing-forwards, as such; but he was acutely skilled as a judge of men and moves".
Paul Verdon, in his history of All Black captains, Born to Lead, writes: "The overwhelming evidence suggests Gallaher's leadership style, honed from time spent in the Boer War, was very effective." Gallaher's biographer Matt Elliott asserts that in the century since his playing retirement "his reputation as a player and leader have only enhanced". According to historian Terry McLean: "In a long experience of reading and hearing about the man, one has never encountered, from the New Zealand angle, or from his fellow players, criticism of his qualities as a leader." In the view of the English rugby journalist E. H. D. Sewell, writing soon after Gallaher's death, the New Zealand captain was "a very quiet, taciturn sort of cove, who spoke rarely about football or his own achievements ... I never heard a soul who met him on that famous trip, say a disparaging word about him." ## See also - List of international rugby union players killed in World War I
31,078,356
1940 Brocklesby mid-air collision
1,166,747,585
Collision involving Royal Australian Air Force training aircraft
[ "1940 in Australia", "Accidents and incidents involving the Avro Anson", "Aviation accidents and incidents in 1940", "History of the Royal Australian Air Force", "Mid-air collisions", "Mid-air collisions involving military aircraft", "Riverina" ]
On 29 September 1940, a mid-air collision occurred over Brocklesby, New South Wales, Australia. The accident was unusual in that the aircraft involved, two Royal Australian Air Force (RAAF) Avro Ansons of No. 2 Service Flying Training School, remained locked together after colliding, and then landed safely. The collision stopped the engines of the upper Anson, but those of the machine underneath continued to run, allowing the aircraft to keep flying. Both navigators and the pilot of the lower Anson bailed out. The pilot of the upper Anson found that he was able to control the interlocked aircraft with his ailerons and flaps, and made an emergency landing in a nearby paddock. All four crewmen survived the incident, and the upper Anson was repaired and returned to flight service. ## Training school and flight details No. 2 Service Flying Training School (SFTS), based at RAAF Station Forest Hill near Wagga Wagga, New South Wales, was one of several pilot training facilities formed in the early years of World War II as part of Australia's contribution to the Empire Air Training Scheme. After basic aeronautical instruction at an elementary flying training school, pupils went on to an SFTS to learn techniques they would require as operational (or "service") pilots, including instrument flying, night flying, cross-country navigation, advanced aerobatics, formation flying, dive bombing, and aerial gunnery. No. 2 SFTS's facilities were still under construction when its first course commenced on 29 July 1940. On 29 September 1940, two of the school's Avro Ansons took off from Forest Hill for a cross-country training exercise over southern New South Wales. Tail number N4876 was piloted by Leading Aircraftman Leonard Graham Fuller, 22, from Cootamundra, with Leading Aircraftman Ian Menzies Sinclair, 27, from Glen Innes, as navigator. Tail number L9162 was piloted by Leading Aircraftman Jack Inglis Hewson, 19, from Newcastle, with Leading Aircraftman Hugh Gavin Fraser, 27, from Melbourne, as navigator. Their planned route was expected to take them first to Corowa, then to Narrandera, then back to Forest Hill. ## Collision and emergency landing The Ansons were at an altitude of 300 metres (1,000 ft) over the township of Brocklesby, near Albury, when they made a banking turn. Fuller lost sight of Hewson's aircraft beneath him and the two Ansons collided amid what Fuller later described as a "grinding crash and a bang as roaring propellers struck each other and bit into the engine cowlings". The aircraft remained jammed together, the lower Anson's turret wedged into the other's port wing root, and its fin and rudder balancing the upper Anson's port tailplane. Both of the upper aircraft's engines had been knocked out in the collision but those of the one below continued to turn at full power as the interlocked Ansons began to slowly circle. Fuller described the "freak combination" as "lumping along like a brick". He nevertheless found that he was able to control the piggybacking pair of aircraft with his ailerons and flaps, and began searching for a place to land. The two navigators, Sinclair and Fraser, bailed out, followed soon after by the lower Anson's pilot, Hewson, whose back had been injured when the spinning blades of the other aircraft sliced through his fuselage. Fuller travelled 8 kilometres (5 mi) after the collision, then successfully made an emergency pancake landing in a large paddock 6 kilometres (4 mi) south-west of Brocklesby. 
The locked aircraft slid 180 metres (200 yd) across the grass before coming to rest. As far as Fuller was concerned, the touchdown was better than any he had made when practising circuits and bumps at Forest Hill airfield the previous day. His acting commanding officer, Squadron Leader Cooper, declared the choice of improvised runway "perfect", and the landing itself a "wonderful effort". The RAAF's Inspector of Air Accidents, Group Captain Arthur "Spud" Murphy, flew straight to the scene from Air Force Headquarters in Melbourne, accompanied by his deputy Henry Winneke. Fuller told Murphy:

> Well, sir, I did everything we've been told to do in a forced landing—land as close as possible to habitation or a farmhouse and, if possible, land into the wind. I did all that. There's the farmhouse, and I did a couple of circuits and landed into the wind. She was pretty heavy on the controls, though!

## Aftermath

The freak accident garnered news coverage around the world, and cast a spotlight on the small town of Brocklesby. In preventing the destruction of the Ansons, Fuller was credited not only with avoiding possible damage to Brocklesby, but with saving approximately £40,000 worth of military hardware. Both Ansons were repaired; the top aircraft (N4876) returned to flight service, and the lower (L9162) was used as an instructional airframe. Hewson was treated for his back injury at Albury District Hospital and returned to active duty; he graduated from No. 2 SFTS in October 1940. He was discharged from the Air Force as a flight lieutenant in 1946. Sinclair was discharged in 1945, also a flight lieutenant. Fraser was posted to Britain and flew as a pilot officer with No. 206 Squadron RAF, based in Aldergrove, Northern Ireland. He and his crew of three died on 1 January 1942 during a routine training flight, when their Lockheed Hudson collided with a tree. Fuller was promoted to sergeant after his successful landing, but also confined to barracks for fourteen days and docked seven days' pay for speaking about the incident to newspapers without authorisation. He graduated from No. 2 SFTS in October 1940, and received a commendation from the Australian Air Board for his "presence of mind, courage and determination in landing the locked Ansons without serious damage to the aircraft under difficult conditions". Fuller saw active service first in the Middle East, and then in Europe with No. 37 Squadron RAF. He earned the Distinguished Flying Medal for his actions over Palermo in March 1942. Commissioned later that year, Fuller was posted back to Australia as a flying officer, and became an instructor at No. 1 Operational Training Unit in Sale, Victoria. He died near Sale on 18 March 1944, when he was hit by a bus while riding his bicycle.

## Legacy

According to the Greater Hume Shire Council, the 1940 mid-air collision remains Brocklesby's "main claim to fame". Local residents commemorated the 50th anniversary of the event by erecting a marker near the site of the crash landing; it was unveiled by Tim Fischer, the Federal Member for Farrer and Leader of the National Party, on 29 September 1990. On 26 January 2007, a memorial featuring an Avro Anson engine was opened during Brocklesby's Australia Day celebrations.
89,530
Harold Pinter
1,172,856,322
English playwright (1930–2008)
[ "1930 births", "2008 deaths", "20th-century British dramatists and playwrights", "20th-century British essayists", "20th-century British novelists", "20th-century British poets", "20th-century British screenwriters", "20th-century British short story writers", "20th-century atheists", "20th-century letter writers", "21st-century British dramatists and playwrights", "21st-century British essayists", "21st-century British non-fiction writers", "21st-century British poets", "21st-century British screenwriters", "21st-century British short story writers", "21st-century atheists", "Alumni of RADA", "Alumni of the Royal Central School of Speech and Drama", "BAFTA fellows", "Best British Screenplay BAFTA Award winners", "Best Screenplay BAFTA Award winners", "Booker authors' division", "British anthologists", "British male essayists", "British male television writers", "British media critics", "British psychological fiction writers", "Burials at Kensal Green Cemetery", "Commanders of the Order of the British Empire", "David Cohen Prize recipients", "Deaths from cancer in England", "Deaths from liver cancer", "English Ashkenazi Jews", "English Jewish writers", "English Nobel laureates", "English activists", "English anti-war activists", "English anti–Iraq War activists", "English anti–nuclear weapons activists", "English atheists", "English conscientious objectors", "English essayists", "English film directors", "English human rights activists", "English literary critics", "English male dramatists and playwrights", "English male film actors", "English male non-fiction writers", "English male novelists", "English male poets", "English male screenwriters", "English male short story writers", "English male stage actors", "English male television actors", "English pacifists", "English people of Polish-Jewish descent", "English people of Ukrainian-Jewish descent", "English political writers", "English radio writers", "English social justice activists", "English socialists", "English sociologists", "English television directors", "English television writers", "English theatre directors", "Fellows of the Royal Society of Literature", "Foreign members of the Serbian Academy of Sciences and Arts", "Free speech activists", "Harold Pinter", "Jewish English activists", "Jewish English male actors", "Jewish atheists", "Jewish dramatists and playwrights", "Laurence Olivier Award winners", "Lecturers", "Literacy and society theorists", "Literary theorists", "Mass media theorists", "Members of the Academy of Arts, Berlin", "Members of the Order of the Companions of Honour", "Members of the Serbian Academy of Sciences and Arts", "Nobel laureates in Literature", "PEN International", "People associated with Queen Mary University of London", "People educated at Hackney Downs School", "People from Lower Clapton", "Recipients of the Legion of Honour", "Rhetoric theorists", "Sociologists of art", "Surrealist writers", "The arts and politics", "Theatre of the Absurd", "Theatre theorists", "Theatrologists", "Tony Award winners", "Writers about activism and social change", "Writers about globalization", "Writers from Hackney Central", "Writers of historical fiction set in the early modern period", "Writers of historical fiction set in the modern age" ]
Harold Pinter (/ˈpɪntər/; 10 October 1930 – 24 December 2008) was a British playwright, screenwriter, director and actor. A Nobel Prize winner, Pinter was one of the most influential modern British dramatists with a writing career that spanned more than 50 years. His best-known plays include The Birthday Party (1957), The Homecoming (1964) and Betrayal (1978), each of which he adapted for the screen. His screenplay adaptations of others' works include The Servant (1963), The Go-Between (1971), The French Lieutenant's Woman (1981), The Trial (1993) and Sleuth (2007). He also directed or acted in radio, stage, television and film productions of his own and others' works. Pinter was born and raised in Hackney, east London, and educated at Hackney Downs School. He was a sprinter and a keen cricket player, acting in school plays and writing poetry. He attended the Royal Academy of Dramatic Art but did not complete the course. He was fined for refusing national service as a conscientious objector. Subsequently, he continued training at the Central School of Speech and Drama and worked in repertory theatre in Ireland and England. In 1956 he married actress Vivien Merchant and had a son, Daniel, born in 1958. He left Merchant in 1975 and married author Lady Antonia Fraser in 1980. Pinter's career as a playwright began with a production of The Room in 1957. His second play, The Birthday Party, closed after eight performances but was enthusiastically reviewed by critic Harold Hobson. His early works were described by critics as "comedy of menace". Later plays such as No Man's Land (1975) and Betrayal (1978) became known as "memory plays". He appeared as an actor in productions of his own work on radio and film, and directed nearly 50 productions for stage, theatre and screen. Pinter received over 50 awards, prizes and other honours, including the Nobel Prize in Literature in 2005 and the French Légion d'honneur in 2007. Despite frail health after being diagnosed with oesophageal cancer in December 2001, Pinter continued to act on stage and screen, last performing the title role of Samuel Beckett's one-act monologue Krapp's Last Tape, for the 50th anniversary season of the Royal Court Theatre, in October 2006. He died from liver cancer on 24 December 2008. ## Biography ### Early life and education Pinter was born on 10 October 1930, in Hackney, east London, the only child of British Jewish parents of Eastern European descent: his father, Hyman "Jack" Pinter (1902–1997) was a ladies' tailor; his mother, Frances (née Moskowitz; 1904–1992), a housewife. Pinter believed an aunt's erroneous view that the family was Sephardic and had fled the Spanish Inquisition; thus, for his early poems, Pinter used the pseudonym Pinta and at other times used variations such as da Pinto. Later research by Lady Antonia Fraser, Pinter's second wife, revealed the legend to be apocryphal; three of Pinter's grandparents came from Poland and the fourth from Odesa, so the family was Ashkenazic. Pinter's family home in London is described by his official biographer Michael Billington as "a solid, red-brick, three-storey villa just off the noisy, bustling, traffic-ridden thoroughfare of the Lower Clapton Road". In 1940 and 1941, after the Blitz, Pinter was evacuated from their house in London to Cornwall and Reading. 
Billington states that the "life-and-death intensity of daily experience" before and during the Blitz left Pinter with profound memories "of loneliness, bewilderment, separation and loss: themes that are in all his works." Pinter discovered his social potential as a student at Hackney Downs School, a London grammar school, between 1944 and 1948. "Partly through the school and partly through the social life of Hackney Boys' Club ... he formed an almost sacerdotal belief in the power of male friendship. The friends he made in those days – most particularly Henry Woolf, Michael (Mick) Goldstein and Morris (Moishe) Wernick – have always been a vital part of the emotional texture of his life." A major influence on Pinter was his inspirational English teacher Joseph Brearley, who directed him in school plays and with whom he took long walks, talking about literature. According to Billington, under Brearley's instruction, "Pinter shone at English, wrote for the school magazine and discovered a gift for acting." In 1947 and 1948, he played Romeo and Macbeth in productions directed by Brearley. At the age of 12, Pinter began writing poetry, and in spring 1947, his poetry was first published in the Hackney Downs School Magazine. In 1950 his poetry was first published outside the school magazine, in Poetry London, some of it under the pseudonym "Harold Pinta". Pinter was an atheist. ### Sport and friendship Pinter enjoyed running and broke the Hackney Downs School sprinting record. He was a cricket enthusiast, taking his bat with him when evacuated during the Blitz. In 1971, he told Mel Gussow: "one of my main obsessions in life is the game of cricket—I play and watch and read about it all the time." He was chairman of the Gaieties Cricket Club, a supporter of Yorkshire Cricket Club, and devoted a section of his official website to the sport. One wall of his study was dominated by a portrait of himself as a young man playing cricket, which was described by Sarah Lyall, writing in The New York Times: "The painted Mr. Pinter, poised to swing his bat, has a wicked glint in his eye; testosterone all but flies off the canvas." Pinter approved of the "urban and exacting idea of cricket as a bold theatre of aggression." After his death, several of his school contemporaries recalled his achievements in sports, especially cricket and running. The BBC Radio 4 memorial tribute included an essay on Pinter and cricket. Other interests that Pinter mentioned to interviewers are family, love and sex, drinking, writing, and reading. According to Billington, "If the notion of male loyalty, competitive rivalry and fear of betrayal forms a constant thread in Pinter's work from The Dwarfs onwards, its origins can be found in his teenage Hackney years. Pinter adores women, enjoys flirting with them, and worships their resilience and strength. But, in his early work especially, they are often seen as disruptive influences on some pure and Platonic ideal of male friendship: one of the most crucial of all Pinter's lost Edens." ### Early theatrical training and stage experience Beginning in late 1948, Pinter attended the Royal Academy of Dramatic Art for two terms, but hating the school, missed most of his classes, feigned a nervous breakdown, and dropped out in 1949. In 1948 he was called up for National Service. He was initially refused registration as a conscientious objector, leading to his twice being prosecuted, and fined, for refusing to accept a medical examination, before his CO registration was ultimately agreed. 
He had a small part in the Christmas pantomime Dick Whittington and His Cat at the Chesterfield Hippodrome in 1949–50. From January to July 1951, he attended the Central School of Speech and Drama. From 1951 to 1952, he toured Ireland with the Anew McMaster repertory company, playing over a dozen roles. In 1952, he began acting in regional English repertory productions; from 1953 to 1954, he worked for the Donald Wolfit Company, at the King's Theatre, Hammersmith, performing eight roles. From 1954 until 1959, Pinter acted under the stage name David Baron. In all, Pinter played over 20 roles under that name. To supplement his income from acting, Pinter worked as a waiter, a postman, a bouncer, and a snow-clearer while, according to Mark Batty, "harbouring ambitions as a poet and writer." In October 1989 Pinter recalled: "I was in English rep as an actor for about 12 years. My favourite roles were undoubtedly the sinister ones. They're something to get your teeth into." During that period, he also performed occasional roles in his own and others' works for radio, TV, and film, as he continued to do throughout his career.

### Marriages and family life

From 1956 until 1980, Pinter was married to Vivien Merchant, an actress whom he met on tour and who is perhaps best known for her performance in the 1966 film Alfie. Their son Daniel was born in 1958. Through the early 1970s, Merchant appeared in many of Pinter's works, including The Homecoming on stage (1965) and screen (1973), but the marriage was turbulent. For seven years, from 1962 to 1969, Pinter was engaged in a clandestine affair with BBC-TV presenter and journalist Joan Bakewell, which inspired his 1978 play Betrayal. Throughout that period and beyond, he also had an affair with an American socialite, whom he nicknamed "Cleopatra"; this relationship was another secret he kept from both his wife and Bakewell. Initially, Betrayal was thought to be a response to his later affair with historian Antonia Fraser, the wife of Hugh Fraser, and Pinter's "marital crack-up". Pinter and Merchant had both met Antonia Fraser in 1969, when all three worked together on a National Gallery programme about Mary, Queen of Scots; several years later, on 8–9 January 1975, Pinter and Fraser became romantically involved, beginning their five-year extramarital love affair. After hiding the relationship from Merchant for two and a half months, on 21 March 1975, Pinter finally told her "I've met somebody". After that, "Life in Hanover Terrace gradually became impossible", and Pinter moved out of their house on 28 April 1975, five days after the première of No Man's Land. In mid-August 1977, after Pinter and Fraser had spent two years living in borrowed and rented quarters, they moved into her former family home in Holland Park, where Pinter began writing Betrayal. He reworked it later, while on holiday at the Grand Hotel in Eastbourne, in early January 1978. After the Frasers' divorce had become final in 1977 and the Pinters' in 1980, Pinter married Fraser on 27 November 1980. Because of a two-week delay in Merchant's signing the divorce papers, however, the reception had to precede the actual ceremony, originally scheduled to occur on his 50th birthday. Vivien Merchant died of acute alcoholism in the first week of October 1982, at the age of 53.
Billington writes that Pinter "did everything possible to support" her and regretted that he ultimately became estranged from their son, Daniel, after their separation, Pinter's remarriage, and Merchant's death. A reclusive gifted musician and writer, Daniel changed his surname from Pinter to Brand, the maiden name of his maternal grandmother, before Pinter and Fraser became romantically involved; while according to Fraser, his father could not understand it, she says that she could: "Pinter is such a distinctive name that he must have got tired of being asked, 'Any relation?'" Michael Billington wrote that Pinter saw Daniel's name change as "a largely pragmatic move on Daniel's part designed to keep the press ... at bay." Fraser told Billington that Daniel "was very nice to me at a time when it would have been only too easy for him to have turned on me ... simply because he had been the sole focus of his father's love and now manifestly wasn't." Still unreconciled at the time of his father's death, Daniel Brand did not attend Pinter's funeral. Billington observes that "The break-up with Vivien and the new life with Antonia was to have a profound effect on Pinter's personality and his work," though he adds that Fraser herself did not claim to have influence over Pinter or his writing. In her own contemporaneous diary entry dated 15 January 1993, Fraser described herself more as Pinter's literary midwife. Indeed, she told Billington that "other people [such as Peggy Ashcroft, among others] had a shaping influence on [Pinter's] politics" and attributed changes in his writing and political views to a change from "an unhappy, complicated personal life ... to a happy, uncomplicated personal life", so that "a side of Harold which had always been there was somehow released. I think you can see that in his work after No Man's Land [1975], which was a very bleak play." Pinter was content in his second marriage and enjoyed family life with his six adult stepchildren and 17 step-grandchildren. Even after battling cancer for several years, he considered himself "a very lucky man in every respect". Sarah Lyall notes in her 2007 interview with Pinter in The New York Times that his "latest work, a slim pamphlet called 'Six Poems for A.', comprises poems written over 32 years, with "A" of course being Lady Antonia. The first of the poems was written in Paris, where she and Mr. Pinter traveled soon after they met. More than three decades later the two are rarely apart, and Mr. Pinter turns soft, even cozy, when he talks about his wife." In that interview Pinter "acknowledged that his plays—full of infidelity, cruelty, inhumanity, the lot—seem at odds with his domestic contentment. 'How can you write a happy play?' he said. 'Drama is about conflict and degrees of perturbation, disarray. I've never been able to write a happy play, but I've been able to enjoy a happy life.'" After his death, Fraser told The Guardian: "He was a great man, and it was a privilege to live with him for over 33 years. He will never be forgotten." ## Civic activities and political activism In 1948–49, when he was 18, Pinter opposed the politics of the Cold War, leading to his decision to become a conscientious objector and to refuse to comply with National Service in the British military. However, he told interviewers that, if he had been old enough at the time, he would have fought against the Nazis in World War II. 
He seemed to express ambivalence, both indifference and hostility, towards political structures and politicians in his Fall 1966 Paris Review interview conducted by Lawrence M. Bensky. Yet, he had been an early member of the Campaign for Nuclear Disarmament and also had supported the British Anti-Apartheid Movement (1959–1994), participating in British artists' refusal to permit professional productions of their work in South Africa in 1963 and in subsequent related campaigns. In "A Play and Its Politics", a 1985 interview with Nicholas Hern, Pinter described his earlier plays retrospectively from the perspective of the politics of power and the dynamics of oppression. In his last 25 years, Pinter increasingly focused his essays, interviews and public appearances directly on political issues. He was an officer in International PEN, travelling with American playwright Arthur Miller to Turkey in 1985 on a mission co-sponsored with a Helsinki Watch committee to investigate and protest against the torture of imprisoned writers. There he met victims of political oppression and their families. Pinter's experiences in Turkey and his knowledge of the Turkish suppression of the Kurdish language inspired his 1988 play Mountain Language. He was also an active member of the Cuba Solidarity Campaign, an organisation that "campaigns in the UK against the US blockade of Cuba". In 2001, Pinter joined the International Committee to Defend Slobodan Milošević (ICDSM), which appealed for a fair trial and for the freedom of Slobodan Milošević, signing a related "Artists' Appeal for Milošević" in 2004. Pinter strongly opposed the 1991 Gulf War, the 1999 NATO bombing campaign in FR Yugoslavia during the Kosovo War, the United States' 2001 War in Afghanistan, and the 2003 Invasion of Iraq. Among his provocative political statements, Pinter called Prime Minister Tony Blair a "deluded idiot" and compared the administration of President George W. Bush to Nazi Germany. He stated that the United States "was charging towards world domination while the American public and Britain's 'mass-murdering' prime minister sat back and watched." He was very active in the antiwar movement in the United Kingdom, speaking at rallies held by the Stop the War Coalition and frequently criticising American aggression, as when he asked rhetorically, in his acceptance speech for the Wilfred Owen Award for Poetry on 18 March 2007: "What would Wilfred Owen make of the invasion of Iraq? A bandit act, an act of blatant state terrorism, demonstrating absolute contempt for the conception of international law." Pinter earned a reputation for being pugnacious, enigmatic, taciturn, terse, prickly, explosive and forbidding. Pinter's blunt political statements, and the award of the Nobel Prize in Literature, elicited strong criticism and even, at times, provoked ridicule and personal attacks. The historian Geoffrey Alderman, author of the official history of Hackney Downs School, expressed his own "Jewish View" of Harold Pinter: "Whatever his merit as a writer, actor and director, on an ethical plane Harold Pinter seems to me to have been intensely flawed, and his moral compass deeply fractured." David Edgar, writing in The Guardian, defended Pinter against what he termed Pinter's "being berated by the belligerati" like Johann Hari, who felt that he did not "deserve" to win the Nobel Prize. Later Pinter continued to campaign against the Iraq War and on behalf of other political causes that he supported. 
Pinter signed the mission statement of Jews for Justice for Palestinians in 2005 and its full-page advertisement, "What Is Israel Doing? A Call by Jews in Britain", published in The Times on 6 July 2006, and he was a patron of the Palestine Festival of Literature. In April 2008, Pinter signed the statement "We're not celebrating Israel's anniversary". The statement noted: "We cannot celebrate the birthday of a state founded on terrorism, massacres and the dispossession of another people from their land", adding, "We will celebrate when Arab and Jew live as equals in a peaceful Middle East."

## Career

### As actor

Pinter's acting career spanned over 50 years and, although he often played villains, included a wide range of roles on stage and in radio, film, and television. In addition to roles in radio and television adaptations of his own plays and dramatic sketches, early in his screenwriting career he made several cameo appearances in films based on his own screenplays; for example, as a society man in The Servant (1963) and as Mr. Bell in Accident (1967), both directed by Joseph Losey; and as a bookshop customer in his later film Turtle Diary (1985), starring Michael Gambon, Glenda Jackson, and Ben Kingsley. Pinter's notable film and television roles included the lawyer Saul Abrahams opposite Peter O'Toole in Rogue Male, BBC TV's 1976 adaptation of Geoffrey Household's 1939 novel, and a drunk Irish journalist in Langrishe, Go Down (starring Judi Dench and Jeremy Irons), broadcast on BBC Two in 1978 and released in cinemas in 2002. Pinter's later film roles included the criminal Sam Ross in Mojo (1997), written and directed by Jez Butterworth, based on Butterworth's play of the same name; Sir Thomas Bertram (his most substantial feature-film role) in Mansfield Park (1998), a character that Pinter described as "a very civilised man ... a man of great sensibility but in fact, he's upholding and sustaining a totally brutal system [the slave trade] from which he derives his money"; and Uncle Benny, opposite Pierce Brosnan and Geoffrey Rush, in The Tailor of Panama (2001). In television films, he played Mr. Bearing, the father of ovarian cancer patient Vivian Bearing, played by Emma Thompson in Mike Nichols's HBO film of the Pulitzer Prize-winning play Wit (2001); and the Director opposite John Gielgud (Gielgud's last role) and Rebecca Pidgeon in Catastrophe, by Samuel Beckett, directed by David Mamet as part of Beckett on Film (2001).
Among over 35 plays that Pinter directed were Next of Kin (1974), by John Hopkins; Blithe Spirit (1976), by Noël Coward; The Innocents (1976), by William Archibald; Circe and Bravo (1986), by Donald Freed; Taking Sides (1995), by Ronald Harwood; and Twelve Angry Men (1996), by Reginald Rose. ### As playwright Pinter was the author of 29 plays and 15 dramatic sketches and the co-author of two works for stage and radio. He was considered to have been one of the most influential modern British dramatists. Along with the 1967 Tony Award for Best Play for The Homecoming and several other American awards and award nominations, he and his plays received many awards in the UK and elsewhere throughout the world. His style has entered the English language as an adjective, "Pinteresque", although Pinter himself disliked the term and found it meaningless. #### "Comedies of menace" (1957–1968) Pinter's first play, The Room, written and first performed in 1957, was staged as a student production at the University of Bristol, directed by his good friend, actor Henry Woolf, who also originated the role of Mr. Kidd (which he reprised in 2001 and 2007). After Pinter mentioned that he had an idea for a play, Woolf asked him to write it so that he could direct it to fulfil a requirement for his postgraduate work. Pinter wrote it in three days. The production was described by Billington as "a staggeringly confident debut which attracted the attention of a young producer, Michael Codron, who decided to present Pinter's next play, The Birthday Party, at the Lyric Hammersmith, in 1958." Written in 1957 and produced in 1958, Pinter's second play, The Birthday Party, one of his best-known works, was initially both a commercial and critical disaster, despite an enthusiastic review in The Sunday Times by its influential drama critic Harold Hobson, which appeared only after the production had closed and could not be reprieved. Critical accounts often quote Hobson: > I am well aware that Mr Pinter[']s play received extremely bad notices last Tuesday morning. At the moment I write these [words] it is uncertain even whether the play will still be in the bill by the time they appear, though it is probable it will soon be seen elsewhere. Deliberately, I am willing to risk whatever reputation I have as a judge of plays by saying that The Birthday Party is not a Fourth, not even a Second, but a First [as in Class Honours]; and that Pinter, on the evidence of his work, possesses the most original, disturbing and arresting talent in theatrical London ... Mr Pinter and The Birthday Party, despite their experiences last week, will be heard of again. Make a note of their names. Pinter himself and later critics generally credited Hobson with bolstering him and perhaps even rescuing his career. In a review published in 1958, borrowing from the subtitle of The Lunatic View: A Comedy of Menace, a play by David Campton, critic Irving Wardle called Pinter's early plays "comedy of menace"—a label that people have applied repeatedly to his work. Such plays begin with an apparently innocent situation that becomes both threatening and "absurd" as Pinter's characters behave in ways often perceived as inexplicable by his audiences and one another. Pinter acknowledged the influence of Samuel Beckett, particularly on his early work; they became friends, sending each other drafts of their works in progress for comments. Pinter wrote The Hothouse in 1958 but shelved it for over 20 years (see "Overtly political plays and sketches" below).
Next he wrote The Dumb Waiter (1959), which premièred in Germany and was then produced in a double bill with The Room at the Hampstead Theatre Club, in London, in 1960. It was then not produced often until the 1980s, and it has been revived more frequently since 2000, including the West End Trafalgar Studios production in 2007. The first production of The Caretaker, at the Arts Theatre Club, in London, in 1960, established Pinter's theatrical reputation. The play transferred to the Duchess Theatre in May 1960 and ran for 444 performances, receiving an Evening Standard Award for best play of 1960. Large radio and television audiences for his one-act play A Night Out, along with the popularity of his revue sketches, brought him further critical attention. In 1964, The Birthday Party was revived both on television (with Pinter himself in the role of Goldberg) and on stage (directed by Pinter at the Aldwych Theatre) and was well received. By the time Peter Hall's London production of The Homecoming (1964) reached Broadway in 1967, Pinter had become a celebrity playwright, and the play garnered four Tony Awards, among other awards. During this period, Pinter also wrote the radio play A Slight Ache, first broadcast on the BBC Third Programme in 1959 and then adapted to the stage and performed at the Arts Theatre Club in 1961. A Night Out (1960) was broadcast to a large audience on ABC Weekend TV's television show Armchair Theatre, after being transmitted on the BBC Third Programme, also in 1960. His play Night School was first televised in 1960 on Associated Rediffusion. The Collection premièred at the Aldwych Theatre in 1962, and The Dwarfs, adapted from Pinter's then unpublished novel of the same title, was first broadcast on radio in 1960, then adapted for the stage (also at the Arts Theatre Club) in a double bill with The Lover, which had previously been televised by Associated Rediffusion in 1963; and Tea Party, a play that Pinter developed from his 1963 short story, was first broadcast on BBC TV in 1965. Working both as a screenwriter and as a playwright, Pinter composed a script called The Compartment (1966), for a trilogy of films to be contributed by Samuel Beckett, Eugène Ionesco, and Pinter, of which only Beckett's film, titled Film, was actually produced. Then Pinter turned his unfilmed script into a television play, which was produced as The Basement, both on BBC 2 and on stage in 1968. #### "Memory plays" (1968–1982) From the late 1960s through the early 1980s, Pinter wrote a series of plays and sketches that explore complex ambiguities, elegiac mysteries, comic vagaries, and other "quicksand-like" characteristics of memory and which critics sometimes classify as Pinter's "memory plays". These include Landscape (1968), Silence (1969), Night (1969), Old Times (1971), No Man's Land (1975), The Proust Screenplay (1977), Betrayal (1978), Family Voices (1981), Victoria Station (1982), and A Kind of Alaska (1982). Some of Pinter's later plays, including Party Time (1991), Moonlight (1993), Ashes to Ashes (1996), and Celebration (2000), draw upon some features of his "memory" dramaturgy in their focus on the past in the present, but they have personal and political resonances and other tonal differences from these earlier memory plays.
#### Overtly political plays and sketches (1980–2000) Following a three-year period of creative drought in the early 1980s after his marriage to Antonia Fraser and the death of Vivien Merchant, Pinter's plays tended to become shorter and more overtly political, serving as critiques of oppression, torture, and other abuses of human rights, linked by the apparent "invulnerability of power." Just before this hiatus, in 1979, Pinter re-discovered his manuscript of The Hothouse, which he had written in 1958 but had set aside; he revised it and then directed its first production himself at Hampstead Theatre in London, in 1980. Like his plays of the 1980s, The Hothouse concerns authoritarianism and the abuses of power politics, but it is also a comedy, like his earlier comedies of menace. Pinter played the major role of Roote in a 1995 revival at the Minerva Theatre, Chichester. Pinter's brief dramatic sketch Precisely (1983) is a duologue between two bureaucrats exploring the absurd power politics of mutual nuclear annihilation and deterrence. His first overtly political one-act play is One for the Road (1984). In 1985 Pinter stated that whereas his earlier plays presented metaphors for power and powerlessness, the later ones present literal realities of power and its abuse. Pinter's "political theatre dramatizes the interplay and conflict of the opposing poles of involvement and disengagement." Mountain Language (1988) is about the Turkish suppression of the Kurdish language. The dramatic sketch The New World Order (1991) provides what Robert Cushman, writing in The Independent, described as "10 nerve-wracking minutes" of two men threatening to torture a third man who is blindfolded, gagged and bound in a chair; Pinter directed the British première at the Royal Court Theatre Upstairs, where it opened on 9 July 1991, and the production then transferred to Washington, D.C., where it was revived in 1994. Pinter's longer political satire Party Time (1991) premièred at the Almeida Theatre in London, in a double-bill with Mountain Language. Pinter adapted it as a screenplay for television in 1992, directing that production, first broadcast in the UK on Channel 4 on 17 November 1992. Intertwining political and personal concerns, his next full-length plays, Moonlight (1993) and Ashes to Ashes (1996), are set in domestic households and focus on dying and death; in their personal conversations in Ashes to Ashes, Devlin and Rebecca allude to unspecified atrocities relating to the Holocaust. After experiencing the deaths of first his mother (1992) and then his father (1997), again merging the personal and the political, Pinter wrote the poems "Death" (1997) and "The Disappeared" (1998). Pinter's last stage play, Celebration (2000), is a social satire set in an opulent restaurant, which lampoons The Ivy, a fashionable venue in London's West End theatre district, and its patrons who "have just come from performances of either the ballet or the opera. Not that they can remember a darn thing about what they saw, including the titles. [These] gilded, foul-mouthed souls are just as myopic when it comes to their own table mates (and for that matter, their food), with conversations that usually connect only on the surface, if there."
On its surface the play may appear to have fewer overtly political resonances than some of the plays from the 1980s and 1990s; but its central male characters, brothers named Lambert and Matt, are members of the elite (like the men in charge in Party Time), who describe themselves as "peaceful strategy consultants [because] we don't carry guns." At the next table, Russell, a banker, describes himself as a "totally disordered personality ... a psychopath", while Lambert "vows to be reincarnated as '[a] more civilised, [a] gentler person, [a] nicer person'." These characters' deceptively smooth exteriors mask their extreme viciousness. Celebration evokes familiar Pinteresque political contexts: "The ritzy loudmouths in 'Celebration' ... and the quieter working-class mumblers of 'The Room' ... have everything in common beneath the surface". "Money remains in the service of entrenched power, and the brothers in the play are 'strategy consultants' whose jobs involve force and violence ... It is tempting but inaccurate to equate the comic power inversions of the social behaviour in Celebration with lasting change in larger political structures", according to Grimes, for whom the play indicates Pinter's pessimism about the possibility of changing the status quo. Yet, as the Waiter's often comically unbelievable reminiscences about his grandfather demonstrate in Celebration, Pinter's final stage plays also extend some expressionistic aspects of his earlier "memory plays", while harking back to his "comedies of menace", as illustrated in the characters and in the Waiter's final speech: > My grandfather introduced me to the mystery of life and I'm still in the middle of it. I can't find the door to get out. My grandfather got out of it. He got right out of it. He left it behind him and he didn't look back. He got that absolutely right. And I'd like to make one further interjection. > He stands still. Slow fade. During 2000–2001, there were also simultaneous productions of Remembrance of Things Past, Pinter's stage adaptation of his unfilmed Proust Screenplay, written in collaboration with and directed by Di Trevis, at the Royal National Theatre, and a revival of The Caretaker directed by Patrick Marber and starring Michael Gambon, Rupert Graves, and Douglas Hodge, at the Comedy Theatre. Like Celebration, Pinter's penultimate sketch, Press Conference (2002), "invokes both torture and the fragile, circumscribed existence of dissent". In its première in the National Theatre's two-part production of Sketches, despite undergoing chemotherapy at the time, Pinter played the ruthless Minister willing to murder little children for the benefit of "The State". ### As screenwriter Pinter composed 27 screenplays and film scripts for cinema and television, many of which were filmed or adapted as stage plays. His fame as a screenwriter began with his three screenplays written for films directed by Joseph Losey, leading to their close friendship: The Servant (1963), based on the novel by Robin Maugham; Accident (1967), adapted from the novel by Nicholas Mosley; and The Go-Between (1971), based on the novel by L. P. Hartley. Films based on Pinter's adaptations of his own stage plays are: The Caretaker (1963), directed by Clive Donner; The Birthday Party (1968), directed by William Friedkin; The Homecoming (1973), directed by Peter Hall; and Betrayal (1983), directed by David Jones.
Pinter also adapted other writers' novels into screenplays, including The Pumpkin Eater (1964), based on the novel by Penelope Mortimer, directed by Jack Clayton; The Quiller Memorandum (1966), from the 1965 spy novel The Berlin Memorandum, by Elleston Trevor, directed by Michael Anderson; The Last Tycoon (1976), from the unfinished novel by F. Scott Fitzgerald, directed by Elia Kazan; The French Lieutenant's Woman (1981), from the novel by John Fowles, directed by Karel Reisz; Turtle Diary (1985), based on the novel by Russell Hoban; The Heat of the Day (1988), a television film, from the 1949 novel by Elizabeth Bowen; The Comfort of Strangers (1990), from the novel by Ian McEwan, directed by Paul Schrader; and The Trial (1993), from the novel by Franz Kafka, directed by David Jones. His commissioned screenplays of others' works for the films The Handmaid's Tale (1990), The Remains of the Day (1993), and Lolita (1997) remain unpublished and, in the case of the latter two films, uncredited, though several scenes from or aspects of his scripts were used in these finished films. His screenplays The Proust Screenplay (1972), Victory (1982), and The Dreaming Child (1997) and his unpublished screenplay The Tragedy of King Lear (2000) have not been filmed. A section of Pinter's Proust Screenplay was, however, released as the 1984 film Swann in Love (Un amour de Swann), directed by Volker Schlöndorff, and it was also adapted by Michael Bakewell as a two-hour radio drama broadcast on BBC Radio 3 in 1995, before Pinter and director Di Trevis collaborated to adapt it for the 2000 National Theatre production. Pinter's last filmed screenplay was an adaptation of the 1970 Tony Award-winning play Sleuth, by Anthony Shaffer, which was commissioned by Jude Law, one of the film's producers. It is the basis for the 2007 film Sleuth, directed by Kenneth Branagh. Pinter's screenplays for The French Lieutenant's Woman and Betrayal were nominated for Academy Awards in 1981 and 1983, respectively. ### 2001–2008 From 16 to 31 July 2001, a Harold Pinter Festival celebrating his work, curated by Michael Colgan, artistic director of the Gate Theatre, Dublin, was held as part of the annual Lincoln Center Festival at Lincoln Center in New York City. Pinter participated both as an actor, as Nicolas in One for the Road, and as a director of a double bill pairing his last play, Celebration, with his first play, The Room. As part of a two-week "Harold Pinter Homage" at the World Leaders Festival of Creative Genius, held from 24 September to 30 October 2001, at the Harbourfront Centre, in Toronto, Canada, Pinter presented a dramatic reading of Celebration (2000) and also participated in a public interview as part of the International Festival of Authors. In December 2001, Pinter was diagnosed with oesophageal cancer, for which, in 2002, he underwent an operation and chemotherapy. During the course of his treatment, he directed a production of his play No Man's Land, and wrote and performed in a new sketch, "Press Conference", for a production of his dramatic sketches at the National Theatre, and from 2002 on he was increasingly active in political causes, writing and presenting politically charged poetry, essays, and speeches, as well as developing his final two screenplay adaptations, The Tragedy of King Lear and Sleuth, whose drafts are in the British Library's Harold Pinter Archive (Add MS 88880/2).
From 9 to 25 January 2003, the Manitoba Theatre Centre, in Manitoba, Canada, held PinterFest, in which over 130 performances of twelve of Pinter's plays were given by a dozen different theatre companies. Productions during the Festival included: The Hothouse, Night School, The Lover, The Dumb Waiter, The Homecoming, The Birthday Party, Monologue, One for the Road, The Caretaker, Ashes to Ashes, Celebration, and No Man's Land. In 2005, Pinter stated that he had stopped writing plays and that he would be devoting his efforts more to his political activism and writing poetry: "I think I've written 29 plays. I think it's enough for me ... My energies are going in different directions—over the last few years I've made a number of political speeches at various locations and ceremonies ... I'm using a lot of energy more specifically about political states of affairs, which I think are very, very worrying as things stand." Some of this later poetry included "The 'Special Relationship'", "Laughter", and "The Watcher". From 2005, Pinter experienced ill health, including a rare skin disease called pemphigus and "a form of septicaemia that afflict[ed] his feet and made it difficult for him to walk." Yet he completed his screenplay for the film of Sleuth in 2005. His last dramatic work for radio, Voices (2005), a collaboration with composer James Clarke, adapting selected works by Pinter to music, premièred on BBC Radio 3 on his 75th birthday, 10 October 2005. Three days later, it was announced that he had won the 2005 Nobel Prize in Literature. In a 2006 interview conducted by critic Michael Billington as part of the cultural programme of the 2006 Winter Olympics in Turin, Italy, Pinter confirmed that he would continue to write poetry but not plays. In response, the audience shouted "No" in unison, urging him to keep writing. Along with the international symposium on Pinter: Passion, Poetry, Politics, curated by Billington, the 2006 Europe Theatre Prize theatrical events celebrating Pinter included new productions (in French) of Precisely (1983), One for the Road (1984), Mountain Language (1988), The New World Order (1991), Party Time (1991), and Press Conference (2002) (French versions by Jean Pavans); and Pinter Plays, Poetry & Prose, an evening of dramatic readings, directed by Alan Stanford, of the Gate Theatre, Dublin. In June 2006, the British Academy of Film and Television Arts (BAFTA) hosted a celebration of Pinter's films curated by his friend, the playwright David Hare. Hare introduced the selection of film clips by saying: "To jump back into the world of Pinter's movies ... is to remind yourself of a literate mainstream cinema, focused as much as Bergman's is on the human face, in which tension is maintained by a carefully crafted mix of image and dialogue." After returning to London from the Edinburgh International Book Festival, in September 2006, Pinter began rehearsing for his performance of the role of Krapp in Samuel Beckett's one-act monologue Krapp's Last Tape, which he performed from a motorised wheelchair in a limited run the following month at the Royal Court Theatre to sold-out audiences and "ecstatic" critical reviews. The production ran for only nine performances, as part of the 50th-anniversary celebration season of the Royal Court Theatre; it sold out within minutes of the opening of the box office and tickets commanded large sums from ticket resellers.
One performance was filmed and broadcast on BBC Four on 21 June 2007, and also screened later, as part of the memorial PEN Tribute to Pinter, in New York, on 2 May 2009. In October and November 2006, Sheffield Theatres hosted Pinter: A Celebration. It featured productions of eight of Pinter's works: The Caretaker, Voices, No Man's Land, Family Voices, Tea Party, The Room, One for the Road, and The Dumb Waiter; and films (most based on his screenplays; some in which Pinter appears as an actor). In February and March 2007, a 50th-anniversary production of The Dumb Waiter was staged at the Trafalgar Studios. Later in February 2007, John Crowley's film version of Pinter's play Celebration (2000) was shown on More4 (Channel 4, UK). On 18 March 2007, BBC Radio 3 broadcast a new radio production of The Homecoming, directed by Thea Sharrock and produced by Martin J. Smith, with Pinter performing the role of Max (for the first time; he had previously played Lenny on stage in 1964). A revival of The Hothouse opened at the National Theatre, in London, in July 2007, concurrently with a revival of Betrayal at the Donmar Warehouse, directed by Roger Michell. Revivals in 2008 included the 40th-anniversary production of the American première of The Homecoming on Broadway, directed by Daniel J. Sullivan. From 8 to 24 May 2008, the Lyric Hammersmith celebrated the 50th anniversary of The Birthday Party with a revival and related events, including a gala performance and reception hosted by Harold Pinter on 19 May 2008, exactly 50 years after its London première there. The final revival during Pinter's lifetime was a production of No Man's Land, directed by Rupert Goold, opening at the Gate Theatre, Dublin, in August 2008, and then transferring to the Duke of York's Theatre, London, where it played until 3 January 2009. On the Monday before Christmas 2008, Pinter was admitted to Hammersmith Hospital, where he died on Christmas Eve from liver cancer, aged 78. On 26 December 2008, when No Man's Land reopened at the Duke of York's, the actors paid tribute to Pinter from the stage, with Michael Gambon reading Hirst's monologue about his "photograph album" from Act Two that Pinter had asked him to read at his funeral, ending with a standing ovation from the audience, many of whom were in tears: > I might even show you my photograph album. You might even see a face in it which might remind you of your own, of what you once were. You might see faces of others, in shadow, or cheeks of others, turning, or jaws, or backs of necks, or eyes, dark under hats, which might remind you of others, whom once you knew, whom you thought long dead, but from whom you will still receive a sidelong glance if you can face the good ghost. Allow the love of the good ghost. They possess all that emotion ... trapped. Bow to it. It will assuredly never release them, but who knows ... what relief ... it may give them ... who knows how they may quicken ... in their chains, in their glass jars. You think it cruel ... to quicken them, when they are fixed, imprisoned? No ... no. Deeply, deeply, they wish to respond to your touch, to your look, and when you smile, their joy ... is unbounded. And so I say to you, tender the dead, as you would yourself be tendered, now, in what you would describe as your life. ## Posthumous events ### Funeral Pinter's funeral was a private, half-hour secular ceremony conducted at the graveside at Kensal Green Cemetery on 31 December 2008.
The eight readings selected in advance by Pinter included passages from seven of his own writings and from the story "The Dead", by James Joyce, which was read by actress Penelope Wilton. Michael Gambon read the "photo album" speech from No Man's Land and three other pieces, including Pinter's poem "Death" (1997). Other readings honoured Pinter's widow and his love of cricket. The ceremony was attended by many notable theatre people, including Tom Stoppard, but not by Pinter's son, Daniel Brand. At its end, Pinter's widow, Antonia Fraser, stepped forward to his grave and quoted from Horatio's speech after the death of Hamlet: "Goodnight, sweet prince, / And flights of angels sing thee to thy rest." ### Memorial tributes The night before Pinter's burial, theatre marquees on Broadway dimmed their lights for a minute in tribute, and on the final night of No Man's Land at the Duke of York's Theatre on 3 January 2009, all of the Ambassador Theatre Group's theatres in the West End dimmed their lights for an hour to honour the playwright. Diane Abbott, the Member of Parliament for Hackney North & Stoke Newington, proposed an early day motion in the House of Commons to support a residents' campaign to restore the Clapton Cinematograph Theatre, established in Lower Clapton Road in 1910, and to turn it into a memorial to Pinter "to honour this Hackney boy turned literary great." On 2 May 2009, a free public memorial tribute was held at The Graduate Center of The City University of New York. It was part of the 5th Annual PEN World Voices Festival of International Literature, taking place in New York City. Another memorial celebration, held in the Olivier Theatre, at the Royal National Theatre, in London, on the evening of 7 June 2009, consisted of excerpts and readings from Pinter's writings by nearly three dozen actors, many of whom were his friends and associates, including: Eileen Atkins, David Bradley, Colin Firth, Henry Goodman, Sheila Hancock, Alan Rickman, Penelope Wilton, Susan Wooldridge, and Henry Woolf; and a troupe of students from the London Academy of Music and Dramatic Art, directed by Ian Rickson. On 16 June 2009, Antonia Fraser officially opened a commemorative room at the Hackney Empire. The theatre also established a writer's residency in Pinter's name. Most of issue number 28 of Craig Raine's arts tri-quarterly Areté was devoted to pieces remembering Pinter, beginning with Pinter's 1987 unpublished love poem dedicated "To Antonia" and his poem "Paris", written in 1975 (the year in which he and Fraser began living together), followed by brief memoirs by some of Pinter's associates and friends, including Patrick Marber, Nina Raine, Tom Stoppard, Peter Nichols, Susanna Gross, Richard Eyre, and David Hare. A memorial cricket match at Lord's Cricket Ground between the Gaieties Cricket Club and the Lord's Taverners, followed by performances of Pinter's poems and excerpts from his plays, took place on 27 September 2009. In 2009, English PEN established the PEN Pinter Prize, which is awarded annually to a British writer or a writer resident in Britain who, in the words of Pinter's Nobel speech, casts an 'unflinching, unswerving' gaze upon the world, and shows a 'fierce intellectual determination ... to define the real truth of our lives and our societies'. The prize is shared with an international writer of courage. The inaugural winners of the prize were Tony Harrison and the Burmese poet and comedian Maung Thura (a.k.a. Zarganar).
### Being Harold Pinter In January 2011, Being Harold Pinter, a theatrical collage of excerpts from Pinter's dramatic works, his Nobel Lecture, and letters of Belarusian prisoners, created and performed by the Belarus Free Theatre, attracted a great deal of attention in the media. The Free Theatre's members had to be smuggled out of Minsk, owing to a government crackdown on dissident artists, to perform their production in a two-week sold-out engagement at La MaMa in New York as part of the 2011 Under the Radar Festival. In an additional sold-out benefit performance at the Public Theater, co-hosted by playwrights Tony Kushner and Tom Stoppard, the prisoners' letters were read by ten guest performers: Mandy Patinkin, Kevin Kline, Olympia Dukakis, Lily Rabe, Linda Emond, Josh Hamilton, Stephen Spinella, Lou Reed, Laurie Anderson, and Philip Seymour Hoffman. In solidarity with the Belarus Free Theatre, actors and theatre companies across the United States collaborated to offer additional benefit readings of Being Harold Pinter. ### The Harold Pinter Theatre, London In September 2011, the British theatre owner and operator Ambassador Theatre Group (ATG) announced that it was renaming its Comedy Theatre, Panton Street, London, as The Harold Pinter Theatre. Howard Panter, Joint CEO and Creative Director of ATG, told the BBC, "The work of Pinter has become an integral part of the history of the Comedy Theatre. The re-naming of one of our most successful West End theatres is a fitting tribute to a man who made such a mark on British theatre who, over his 50 year career, became recognised as one of the most influential modern British dramatists." ## Honours An Honorary Associate of the National Secular Society, a Fellow of the Royal Society of Literature, and an Honorary Fellow of the Modern Language Association of America (1970), Pinter was appointed CBE in 1966 and became a Companion of Honour in 2002, having declined a knighthood in 1996. In 1995, he accepted the David Cohen Prize, in recognition of a lifetime of literary achievement. In 1996, he received a Laurence Olivier Special Award for lifetime achievement in the theatre. In 1997 he became a BAFTA Fellow. He received the World Leaders Award for "Creative Genius" as the subject of a week-long "Homage" in Toronto, in October 2001. In 2004, he received the Wilfred Owen Award for Poetry for his lifelong contribution to literature, "and specifically for his collection of poetry entitled War, published in 2003". In March 2006, he was awarded the Europe Theatre Prize in recognition of lifetime achievements pertaining to drama and theatre. In conjunction with that award, the critic Michael Billington coordinated an international conference on Pinter: Passion, Poetry, Politics, including scholars and critics from Europe and the Americas, held in Turin, Italy, from 10 to 14 March 2006. In October 2008, the Central School of Speech and Drama announced that Pinter had agreed to become its president and awarded him an honorary fellowship at its graduation ceremony. On his appointment, Pinter commented: "I was a student at Central in 1950–51. I enjoyed my time there very much and I am delighted to become president of a remarkable institution." But he had to receive that honorary degree, his 20th, in absentia owing to ill health. His presidency of the school was brief; he died just two weeks after the graduation ceremony, on 24 December 2008. In 2013, he was posthumously awarded the Sretenje Order of Serbia.
### Nobel Prize in Literature ### Légion d'honneur On 18 January 2007, French Prime Minister Dominique de Villepin presented Pinter with France's highest civil honour, the Légion d'honneur, at a ceremony at the French embassy in London. De Villepin praised Pinter's poem "American Football" (1991) stating: "With its violence and its cruelty, it is for me one of the most accurate images of war, one of the most telling metaphors of the temptation of imperialism and violence." In response, Pinter praised France's opposition to the war in Iraq. M. de Villepin concluded: "The poet stands still and observes what doesn't deserve other men's attention. Poetry teaches us how to live and you, Harold Pinter, teach us how to live." He said that Pinter received the award particularly "because in seeking to capture all the facets of the human spirit, [Pinter's] works respond to the aspirations of the French public, and its taste for an understanding of man and of what is truly universal". Lawrence Pollard observed that "the award for the great playwright underlines how much Mr Pinter is admired in countries like France as a model of the uncompromising radical intellectual". ## Scholarly response Some scholars and critics challenge the validity of Pinter's critiques of what he terms "the modes of thinking of those in power" or dissent from his retrospective viewpoints on his own work. In 1985, Pinter recalled that his early act of conscientious objection resulted from being "terribly disturbed as a young man by the Cold War. And McCarthyism ... A profound hypocrisy. 'They' the monsters, 'we' the good. In 1948, the Russian suppression of Eastern Europe was an obvious and brutal fact, but I felt very strongly then and feel as strongly now that we have an obligation to subject our own actions and attitudes to an equivalent critical and moral scrutiny." Scholars agree that Pinter's dramatic rendering of power relations results from this scrutiny. Pinter's aversion to any censorship by "the authorities" is epitomised in Petey's line at the end of The Birthday Party. As the broken-down and reconstituted Stanley is being carted off by the figures of authority Goldberg and McCann, Petey calls after him, "Stan, don't let them tell you what to do!" Pinter told Gussow in 1988, "I've lived that line all my damn life. Never more than now." The example of Pinter's stalwart opposition to what he termed "the modes of thinking of those in power"—the "brick wall" of the "minds" perpetuating the "status quo"—infused the "vast political pessimism" that some academic critics may perceive in his artistic work, its "drowning landscape" of harsh contemporary realities, with some residual "hope for restoring the dignity of man." As Pinter's long-time friend David Jones reminded analytically inclined scholars and dramatic critics, Pinter was one of the "great comic writers": > The trap with Harold's work, for performers and audiences, is to approach it too earnestly or portentously. I have always tried to interpret his plays with as much humour and humanity as possible. There is always mischief lurking in the darkest corners. The world of The Caretaker is a bleak one, its characters damaged and lonely. But they are all going to survive. And in their dance to that end they show a frenetic vitality and a wry sense of the ridiculous that balance heartache and laughter. Funny, but not too funny. As Pinter wrote, back in 1960: "As far as I am concerned The Caretaker IS funny, up to a point. 
Beyond that point, it ceases to be funny, and it is because of that point that I wrote it." His dramatic conflicts present serious implications for his characters and his audiences, leading to sustained inquiry about "the point" of his work and multiple "critical strategies" for developing interpretations and stylistic analyses of it. ## Pinter research collections Pinter's unpublished manuscripts and letters to and from him are held in the Harold Pinter Archive in the Modern Literary Manuscripts division of the British Library. Smaller collections of Pinter manuscripts are in the Harry Ransom Humanities Research Center, the University of Texas at Austin; The Lilly Library, Indiana University at Bloomington; the Mandeville Special Collections Library, Geisel Library, at the University of California, San Diego; the British Film Institute, in London; and the Margaret Herrick Library, Pickford Center for Motion Picture Study, the Academy of Motion Picture Arts and Sciences, Beverly Hills, California. ## List of works and bibliography ## See also - International PEN - PEN Pinter Prize - Jewish left - List of Jewish Nobel laureates
13,764
Hassium
1,173,119,233
null
[ "Chemical elements", "Chemical elements with hexagonal close-packed structure", "Hassium", "Synthetic elements", "Transition metals" ]
Hassium is a chemical element with the symbol Hs and the atomic number 108. Hassium is highly radioactive; its most stable known isotopes have half-lives of approximately ten seconds. One of its isotopes, <sup>270</sup>Hs, has magic numbers of both protons and neutrons for deformed nuclei, which gives it greater stability against spontaneous fission. Hassium is a superheavy element; it has been produced in a laboratory only in very small quantities by fusing heavy nuclei with lighter ones. Natural occurrences of the element have been hypothesised but never found. In the periodic table of elements, hassium is a transactinide element, a member of the 7th period and group 8; it is thus the sixth member of the 6d series of transition metals. Chemistry experiments have confirmed that hassium behaves as the heavier homologue to osmium, reacting readily with oxygen to form a volatile tetroxide. The chemical properties of hassium have been only partly characterized, but they compare well with the chemistry of the other group 8 elements. The principal innovation that led to the discovery of hassium was the technique of cold fusion, in which the fused nuclei did not differ by mass as much as in earlier techniques. It relied on greater stability of target nuclei, which in turn decreased excitation energy. This decreased the number of neutron ejections during synthesis, creating heavier, more stable resulting nuclei. The technique was first tested at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Russian SFSR, Soviet Union, in 1974. JINR used this technique to attempt synthesis of element 108 in 1978, in 1983, and in 1984; the latter experiment resulted in a claim that element 108 had been produced. Later in 1984, a synthesis claim followed from the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Hesse, West Germany. The 1993 report by the Transfermium Working Group, formed by the International Union of Pure and Applied Chemistry and the International Union of Pure and Applied Physics, concluded that the report from Darmstadt was conclusive on its own whereas that from Dubna was not, and major credit was assigned to the German scientists. In 1992, GSI formally announced that they wished to name the element hassium after the German state of Hesse (Hassia in Latin), home to the facility; this name was accepted as final in 1997. ## Introduction to the heaviest elements ## Discovery ### Cold fusion Nuclear reactions used in the 1960s resulted in high excitation energies that required expulsion of four or five neutrons; these reactions used targets made of elements with high atomic numbers to maximize the size difference between the two nuclei in a reaction. While this increased the chance of fusion due to the lower electrostatic repulsion between the target and the projectile, the formed compound nuclei often broke apart and did not survive to form a new element. Moreover, fusion processes inevitably produce neutron-poor nuclei, as heavier elements require more neutrons per proton to maximize stability; therefore, the necessary ejection of neutrons results in final products that typically have shorter lifetimes. As such, light beams (six to ten protons) allowed synthesis of elements only up to 106.
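The effect of that size difference can be illustrated with a small numerical sketch (the particular split of a compound nucleus with Z = 108 into beam and target below is chosen purely for illustration and is not taken from the source): the electrostatic repulsion between two approaching nuclei scales with the product of their atomic numbers, Z1 × Z2, so a light beam striking a high-Z target faces far less repulsion than two nuclei of comparable size.

```python
# Coulomb repulsion between two nuclei scales with the product Z1 * Z2.
# Illustrative comparison of ways to assemble a compound nucleus with Z = 108:
pairings = {
    "asymmetric (high-Z target, light beam)": (98, 10),
    "near-symmetric (two mid-Z nuclei)": (54, 54),
}

for label, (z1, z2) in pairings.items():
    print(f"{label}: Z1*Z2 = {z1 * z2}")

# The asymmetric pairing gives 980 versus 2916 for the near-symmetric one,
# i.e. roughly a third of the electrostatic repulsion to overcome, which is
# why the 1960s reactions favoured heavy targets bombarded with light ions.
```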
To advance to heavier elements, Soviet physicist Yuri Oganessian at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Russian SFSR, Soviet Union, proposed a different mechanism, in which the bombarded nucleus would be lead-208, which has magic numbers of protons and neutrons, or another nucleus close to it. Each proton and neutron has a fixed value of rest energy; those of all protons are equal and so are those of all neutrons. In a nucleus, some of this energy is diverted to binding protons and neutrons; if a nucleus has a magic number of protons and/or neutrons, then even more of its rest energy is diverted, which gives the nuclide additional stability. This additional stability requires more energy for an external nucleus to break the existing one and penetrate it. More energy diverted to binding nucleons means less rest energy, which in turn means less mass (mass is proportional to rest energy). More equal atomic numbers of the reacting nuclei result in greater electrostatic repulsion between them, but the lower mass excess of the target nucleus balances it. This leaves less excitation energy for the newly created compound nucleus, which necessitates fewer neutron ejections to reach a stable state. Because of this energy difference, the former mechanism became known as "hot fusion" and the latter as "cold fusion". Cold fusion was first declared successful in 1974 at JINR, when it was tested for synthesis of the yet-undiscovered element 106. These new nuclei were projected to decay via spontaneous fission. The physicists at JINR concluded element 106 was produced in the experiment because no fissioning nucleus known at the time showed parameters of fission similar to what was observed during the experiment and because changing either of the two nuclei in the reactions negated the observed effects. Physicists at the Lawrence Berkeley Laboratory (LBL; originally Radiation Laboratory, RL, and later Lawrence Berkeley National Laboratory, LBNL) of the University of California in Berkeley, California, United States, also expressed great interest in the new technique. When asked about how far this new method could go and if lead targets were a physics' Klondike, Oganessian responded, "Klondike may be an exaggeration [...] But soon, we will try to get elements 107... 108 in these reactions." ### Reports The synthesis of element 108 was first attempted in 1978 by a research team led by Oganessian at the JINR. The team used a reaction that would generate element 108, specifically, the isotope <sup>270</sup>108, from fusion of radium (specifically, the isotope <sup>226</sup> <sub>88</sub>Ra ) and calcium (<sup>48</sup> <sub>20</sub>Ca ). The researchers were uncertain in interpreting their data, and their paper did not unambiguously claim to have discovered the element. The same year, another team at JINR investigated the possibility of synthesis of element 108 in reactions between lead (<sup>208</sup> <sub>82</sub>Pb ) and iron (<sup>58</sup> <sub>26</sub>Fe ); they were uncertain in interpreting the data, suggesting the possibility that element 108 had not been created. In 1983, new experiments were performed at JINR.
The experiments probably resulted in the synthesis of element 108; bismuth (<sup>209</sup> <sub>83</sub>Bi ) was bombarded with manganese (<sup>55</sup> <sub>25</sub>Mn ) to obtain <sup>263</sup>108, lead (<sup>207</sup> <sub>82</sub>Pb , <sup>208</sup> <sub>82</sub>Pb ) was bombarded with iron (<sup>58</sup> <sub>26</sub>Fe ) to obtain <sup>264</sup>108, and californium (<sup>249</sup> <sub>98</sub>Cf ) was bombarded with neon (<sup>22</sup> <sub>10</sub>Ne ) to obtain <sup>270</sup>108. These experiments were not claimed as a discovery and Oganessian announced them in a conference rather than in a written report. In 1984, JINR researchers in Dubna performed experiments set up identically to the previous ones; they bombarded bismuth and lead targets with ions of lighter elements manganese and iron, respectively. Twenty-one spontaneous fission events were recorded; the researchers concluded they were caused by <sup>264</sup>108. Later in 1984, a research team led by Peter Armbruster and Gottfried Münzenberg at Gesellschaft für Schwerionenforschung (GSI; Institute for Heavy Ion Research) in Darmstadt, Hesse, West Germany, attempted to create element 108. The team bombarded a lead (<sup>208</sup> <sub>82</sub>Pb ) target with accelerated iron (<sup>58</sup> <sub>26</sub>Fe ) nuclei. GSI's experiment to create element 108 was delayed until after their creation of element 109 in 1982, as prior calculations had suggested that even–even isotopes of element 108 would have spontaneous fission half-lives of less than one microsecond, making them difficult to detect and identify. The element 108 experiment finally went ahead after <sup>266</sup>109 had been synthesized and was found to decay by alpha emission, suggesting that isotopes of element 108 would do likewise, and this was corroborated by an experiment aimed at synthesizing isotopes of element 106. GSI reported synthesis of three atoms of <sup>265</sup>108. Two years later, they reported synthesis of one atom of the even–even <sup>264</sup>108. ### Arbitration In 1985, the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) formed the Transfermium Working Group (TWG) to assess discoveries and establish final names for elements with atomic numbers greater than 100. The working group held meetings with delegates from the three competing institutes; in 1990, they established criteria for recognition of an element and in 1991, they finished the work of assessing discoveries and disbanded. These results were published in 1993. According to the report, the 1984 works from JINR and GSI simultaneously and independently established synthesis of element 108. Of the two 1984 works, the one from GSI was said to be sufficient as a discovery on its own. The JINR work, which preceded the GSI one, "very probably" displayed synthesis of element 108. However, that was determined in retrospect given the work from Darmstadt; the JINR work focused on chemically identifying remote granddaughters of element 108 isotopes (which could not exclude the possibility that these daughter isotopes had other progenitors), while the GSI work clearly identified the decay path of those element 108 isotopes. The report concluded that the major credit should be awarded to GSI. In written responses to this ruling, both JINR and GSI agreed with its conclusions. In the same response, GSI confirmed that they and JINR were able to resolve all conflicts between them.
### Naming Historically, a newly discovered element was named by its discoverer. The first regulation came in 1947, when IUPAC decided naming required regulation in case of conflicting names. These matters were to be resolved by the Commission of Inorganic Nomenclature and the Commission of Atomic Weights. They would review the names in case of a conflict and select one; the decision would be based on a number of factors, such as usage, and would not be an indicator of priority of a claim. The two commissions would recommend a name to the IUPAC Council, which would be the final authority. The discoverers held the right to name an element, but their name would be subject to approval by IUPAC. The Commission of Atomic Weights distanced itself from element naming in most cases. Under Mendeleev's nomenclature for unnamed and undiscovered elements, hassium would be known as "eka-osmium", as in "the first element below osmium in the periodic table" (from Sanskrit eka meaning "one"). In 1979, IUPAC published recommendations according to which the element was to be called "unniloctium" and assigned the corresponding symbol of "Uno", a systematic element name serving as a placeholder until the element was discovered, the discovery confirmed, and a permanent name decided. Although these recommendations were widely followed in the chemical community, the competing physicists in the field ignored them. They either called it "element 108", with the symbols E108, (108) or 108, or used the proposed name "hassium". In 1990, in an attempt to break a deadlock in establishing priority of discovery and naming of several elements, IUPAC reaffirmed in its nomenclature of inorganic chemistry that after existence of an element was established, the discoverers could propose a name. (In addition, the Commission of Atomic Weights was excluded from the naming process.) The first publication on criteria for an element discovery, released in 1991, specified the need for recognition by TWG. Armbruster and his colleagues, the officially recognized German discoverers, held a naming ceremony for the elements 107 through 109, which had all been recognized as discovered by GSI, on 7 September 1992. For element 108, the scientists proposed the name "hassium". It is derived from the Latin name Hassia for the German state of Hesse where the institute is located. This name was proposed to IUPAC in a written response to their ruling on priority of discovery claims of elements, signed 29 September 1992. The process of naming of element 108 was a part of a larger process of naming a number of elements starting with element 101; three teams—JINR, GSI, and LBL—claimed discoveries of several elements and the right to name those elements. Sometimes, these claims clashed; since a discoverer was considered entitled to naming of an element, conflicts over priority of discovery often resulted in conflicts over names of these new elements. These conflicts became known as the Transfermium Wars. There were different suggestions for naming the whole set of elements from 101 onward, and these proposals occasionally assigned names suggested by one team to elements discovered by another. However, not all suggestions were met with equal approval; the teams openly protested naming proposals on several occasions.
In 1994, the IUPAC Commission on Nomenclature of Inorganic Chemistry recommended that element 108 be named "hahnium" (Hn) after the German physicist Otto Hahn so that elements named after Hahn and Lise Meitner (it was recommended that element 109 be named meitnerium, following GSI's suggestion) would be next to each other, honouring their joint discovery of nuclear fission; IUPAC commented that they felt the German suggestion was obscure. GSI protested, saying this proposal contradicted the long-standing convention of giving the discoverer the right to suggest a name; the American Chemical Society supported GSI. The name "hahnium", albeit with the different symbol Ha, had already been proposed and used by the American scientists for element 105, for which they had a discovery dispute with JINR; they thus protested the confusing scrambling of names. Following the uproar, IUPAC formed an ad hoc committee of representatives from the national adhering organizations of the three countries home to the competing institutions; they produced a new set of names in 1995. Element 108 was again named hahnium; this proposal was also retracted. The final compromise was reached in 1996 and published in 1997; element 108 was named hassium (Hs). Simultaneously, the name dubnium (Db; from Dubna, the JINR location) was assigned to element 105, and the name hahnium was not used for any element. The official justification for this naming, alongside that of darmstadtium for element 110, was that it completed a set of geographic names for the location of the GSI; this set had been initiated by 19th-century names europium and germanium. This set would serve as a response to earlier naming of americium, californium, and berkelium for elements discovered in Berkeley. Armbruster commented on this, "this bad tradition was established by Berkeley. We wanted to do it for Europe." Later, when commenting on the naming of element 112, Armbruster said, "I did everything to ensure that we do not continue with German scientists and German towns." ## Isotopes Hassium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. As of 2019, the quantity of all hassium ever produced was on the order of hundreds of atoms. Thirteen isotopes with mass numbers ranging from 263 to 277 (with the exceptions of 274 and 276) have been reported, four of which—hassium-265, -266, -267, and -277—have known metastable states, although that of hassium-277 is unconfirmed. Most of these isotopes decay predominantly through alpha decay; alpha decay is the most common decay mode for all isotopes for which comprehensive decay characteristics are available, the only exception being hassium-277, which undergoes spontaneous fission. Lighter isotopes were usually synthesized by direct fusion between two lighter nuclei, whereas heavier isotopes were typically observed as decay products of nuclei with larger atomic numbers. Atomic nuclei have well-established nuclear shells, and the existence of these shells provides nuclei with additional stability. If a nucleus has certain numbers of protons or neutrons, called magic numbers, that complete certain nuclear shells, then the nucleus is even more stable against decay. The highest known magic numbers are 82 for protons and 126 for neutrons. This notion is sometimes expanded to include additional numbers between those magic numbers, which also provide some additional stability and indicate closure of "sub-shells".
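As a brief arithmetic aside (not part of the source text), the neutron number of each reported isotope follows directly from N = A − Z with Z = 108; the short sketch below simply tabulates these values and flags N = 162, the deformed-shell neutron number discussed in the following paragraphs.

```python
# Neutron numbers of the reported hassium isotopes (mass numbers 263-277,
# excluding 274 and 276), with Z = 108 protons.
Z = 108
mass_numbers = [a for a in range(263, 278) if a not in (274, 276)]

for A in mass_numbers:
    N = A - Z
    note = "  <- deformed-shell neutron number N = 162" if N == 162 else ""
    print(f"{A}Hs: N = {N}{note}")
```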
In contrast to the better-known lighter nuclei, superheavy nuclei are deformed. Until the 1960s, the liquid drop model was the dominant explanation for nuclear structure. It suggested that the fission barrier would disappear for nuclei with about 280 nucleons. It was thus thought that spontaneous fission would occur nearly instantly, before nuclei could form a structure that could stabilize them; it appeared that nuclei with Z≈103 were too heavy to exist for a considerable length of time. The later nuclear shell model suggested that nuclei with about three hundred nucleons would form an island of stability in which nuclei would be more resistant to spontaneous fission and would primarily undergo alpha decay with longer half-lives, and that the next doubly magic nucleus (having magic numbers of both protons and neutrons) is expected to lie in the center of the island of stability in the vicinity of Z=110–114 and the predicted magic neutron number N=184. Subsequent discoveries suggested that the predicted island might be further than originally anticipated; they also showed that nuclei intermediate between the long-lived actinides and the predicted island are deformed, and gain additional stability from shell effects. The added stability should be particularly great against spontaneous fission, although the increase in stability against alpha decay would also be pronounced. The center of the region on a chart of nuclides that would correspond to this stability for deformed nuclei was determined as <sup>270</sup>Hs, with 108 expected to be a magic number for protons for deformed nuclei—nuclei that are far from spherical—and 162 a magic number for neutrons for such nuclei. Experiments on lighter superheavy nuclei, as well as those closer to the expected island, have shown greater than previously anticipated stability against spontaneous fission, showing the importance of shell effects on nuclei. Theoretical models predict a region of instability for some hassium isotopes to lie around A=275 and N=168–170, which is between the predicted neutron shell closures at N=162 for deformed nuclei and N=184 for spherical nuclei. Nuclides within this region are predicted to have low fission barrier heights, resulting in short partial half-lives toward spontaneous fission. This prediction is supported by the observed eleven-millisecond half-life of <sup>277</sup>Hs and the five-millisecond half-life of the neighbouring isobar <sup>277</sup>Mt because the hindrance factors from the odd nucleon were shown to be much lower than otherwise expected. The measured half-lives are even lower than those originally predicted for the even–even <sup>276</sup>Hs and <sup>278</sup>Ds, which suggests a gap in stability away from the shell closures and perhaps a weakening of the shell closures in this region. In 1991, Polish physicists Zygmunt Patyk and Adam Sobiczewski predicted that 108 is a proton magic number for deformed nuclei and 162 is a neutron magic number for such nuclei. This means such nuclei are permanently deformed in their ground state but have high, narrow fission barriers to further deformation and hence relatively long lifetimes toward spontaneous fission. Computational prospects for shell stabilization for <sup>270</sup>Hs made it a promising candidate for a deformed doubly magic nucleus. Experimental data is scarce, but the existing data is interpreted by the researchers to support the assignment of N=162 as a magic number.
In particular, this conclusion was drawn from the decay data of <sup>269</sup>Hs, <sup>270</sup>Hs, and <sup>271</sup>Hs. In 1997, Polish physicist Robert Smolańczuk calculated that the isotope <sup>292</sup>Hs may be the most stable superheavy nucleus against alpha decay and spontaneous fission as a consequence of the predicted N=184 shell closure. ## Natural occurrence Hassium is not known to occur naturally on Earth; the half-lives of all its known isotopes are short enough that no primordial hassium would have survived to the present day. This does not rule out the possibility of the existence of unknown, longer-lived isotopes or nuclear isomers, some of which could still exist in trace quantities if they are long-lived enough. As early as 1914, German physicist Richard Swinne proposed element 108 as a source of X-rays in the Greenland ice sheet. Although Swinne was unable to verify this observation and thus did not claim discovery, he proposed in 1931 the existence of "regions" of long-lived transuranic elements, including one around Z=108. In 1963, Soviet geologist and physicist Viktor Cherdyntsev, who had previously claimed the existence of primordial curium-247, claimed to have discovered element 108—specifically the <sup>267</sup>108 isotope, which supposedly had a half-life of 400 to 500 million years—in natural molybdenite and suggested the provisional name sergenium (symbol Sg); this name takes its origin from the name for the Silk Road and was explained as meaning "coming from Kazakhstan". His rationale for claiming that sergenium was the heavier homologue to osmium was that minerals supposedly containing sergenium formed volatile oxides when boiled in nitric acid, similarly to osmium. Cherdyntsev's findings were criticized by Soviet physicist Vladimir Kulakov on the grounds that some of the properties Cherdyntsev claimed sergenium had were inconsistent with the then-current nuclear physics. The chief questions raised by Kulakov were that the claimed alpha decay energy of sergenium was many orders of magnitude lower than expected and the half-life given was eight orders of magnitude shorter than what would be predicted for a nuclide alpha-decaying with the claimed decay energy. At the same time, a corrected half-life in the region of 10<sup>16</sup> years would be impossible because it would imply the samples contained about a hundred milligrams of sergenium. In 2003, it was suggested that the observed alpha decay with energy 4.5 MeV could be due to a low-energy and strongly enhanced transition between different hyperdeformed states of a hassium isotope around <sup>271</sup>Hs, thus implying that the existence of superheavy elements in nature was at least possible, although unlikely. In 2006, Russian geologist Alexei Ivanov hypothesized that an isomer of <sup>271</sup>Hs might have a half-life of around (2.5±0.5)×10<sup>8</sup> years, which would explain the observation of alpha particles with energies of around 4.4 MeV in some samples of molybdenite and osmiridium. This isomer of <sup>271</sup>Hs could be produced from the beta decay of <sup>271</sup>Bh and <sup>271</sup>Sg, which, being homologous to rhenium and molybdenum respectively, should occur in molybdenite along with rhenium and molybdenum if they occurred in nature. Because hassium is homologous to osmium, it should occur along with osmium in osmiridium if it occurs in nature.
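A quick decay calculation (an illustrative sketch, not from the source; the roughly 4.5-billion-year age of the Earth used below is an assumed standard value) shows how small a fraction of an isomer with Ivanov's proposed half-life would survive over geological time, independently of how much could have been present or produced initially.

```python
import math

# Survival fraction of a nuclide with the half-life Ivanov proposed for the
# 271Hs isomer (~2.5e8 years) over the approximate age of the Earth
# (~4.5e9 years, an assumed value).
half_life_years = 2.5e8
earth_age_years = 4.5e9

elapsed_half_lives = earth_age_years / half_life_years   # 18 half-lives
surviving_fraction = 0.5 ** elapsed_half_lives            # ~3.8e-6
# Equivalent exponential form: N(t)/N(0) = exp(-ln(2) * t / t_half)
surviving_fraction_exp = math.exp(-math.log(2) * earth_age_years / half_life_years)

print(f"{elapsed_half_lives:.0f} half-lives elapsed")
print(f"surviving fraction = {surviving_fraction:.2e}")
```

Only a few parts per million of any initial stock would remain after eighteen half-lives; the sketch says nothing about how much of the isomer could have existed in the first place, which is the point taken up next.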
However, the decay chains of <sup>271</sup>Bh and <sup>271</sup>Sg are hypothetical, and the predicted half-life of Ivanov's proposed <sup>271</sup>Hs isomer is not long enough for any sufficient quantity to remain on Earth. It is possible that more <sup>271</sup>Hs may be deposited on the Earth as the Solar System travels through the spiral arms of the Milky Way; this would explain excesses of plutonium-239 found on the ocean floors of the Pacific Ocean and the Gulf of Finland. However, minerals enriched with <sup>271</sup>Hs are predicted to have excesses of its daughters uranium-235 and lead-207; they would also have different proportions of elements that are formed during spontaneous fission, such as krypton, zirconium, and xenon. The natural occurrence of hassium in minerals such as molybdenite and osmiridium is theoretically possible, but very unlikely.

In 2004, JINR started a search for natural hassium in the Modane Underground Laboratory in Modane, Auvergne-Rhône-Alpes, France; this was done underground to avoid interference and false positives from cosmic rays. In 2008–09, an experiment run in the laboratory resulted in the detection of several registered events of neutron multiplicity (the number of free neutrons emitted after a nucleus has been hit by a neutron and fissioned) above three in natural osmium, and in 2012–13, these findings were reaffirmed in another experiment run in the laboratory. These results hinted that hassium could exist in nature in amounts that would allow its detection by means of analytical chemistry, but this conclusion is based on an explicit assumption that there is a long-lived hassium isotope to which the registered events could be attributed. Since <sup>292</sup>Hs may be particularly stable against alpha decay and spontaneous fission, it was considered a candidate to exist in nature. This nuclide, however, is predicted to be very unstable toward beta decay, and any beta-stable isotopes of hassium such as <sup>286</sup>Hs would be too unstable in the other decay channels to be observed in nature. A 2012 search for <sup>292</sup>Hs in nature along with its homologue osmium at the Maier-Leibnitz Laboratory in Garching, Bavaria, Germany, was unsuccessful, setting an upper limit to its abundance at 3×10<sup>−15</sup> grams of hassium per gram of osmium.

## Predicted properties

Various calculations suggest hassium should be the heaviest group 8 element so far, consistent with the periodic law. Its properties should generally match those expected for a heavier homologue of osmium; as is the case for all transactinides, a few deviations are expected to arise from relativistic effects. Very few properties of hassium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that hassium (and its parents) decays very quickly. A few individual chemistry-related properties have been measured, such as the enthalpy of adsorption of hassium tetroxide, but properties of hassium metal remain unknown and only predictions are available.

### Relativistic effects

Relativistic effects on hassium should arise due to the high charge of its nucleus, which causes the electrons around the nucleus to move faster—so fast that their velocity becomes comparable to the speed of light. There are three main effects: the direct relativistic effect, the indirect relativistic effect, and spin–orbit splitting.
(The existing calculations do not account for Breit interactions, but those are negligible, and their omission can only result in an uncertainty of the current calculations of no more than 2%.) As atomic number increases, so does the electrostatic attraction between an electron and the nucleus. This causes the velocity of the electron to increase, which leads to an increase in its mass. This in turn leads to contraction of the atomic orbitals, most notably the s and p<sub>1/2</sub> orbitals. Their electrons become more closely attached to the atom and harder to pull from the nucleus. This is the direct relativistic effect. It was originally thought to be strong only for the innermost electrons, but was later established to significantly influence valence electrons as well. Since the s and p<sub>1/2</sub> orbitals are closer to the nucleus, they take a bigger portion of the electric charge of the nucleus on themselves ("shield" it). This leaves less charge for attraction of the remaining electrons, whose orbitals therefore expand, making them easier to pull from the nucleus. This is the indirect relativistic effect. As a result of the combination of the direct and indirect relativistic effects, the Hs<sup>+</sup> ion, compared to the neutral atom, lacks a 6d electron, rather than a 7s electron. In comparison, Os<sup>+</sup> lacks a 6s electron compared to the neutral atom. The ionic radius (in oxidation state +8) of hassium is greater than that of osmium because of the relativistic expansion of the 6p<sub>3/2</sub> orbitals, which are the outermost orbitals for an Hs<sup>8+</sup> ion (although in practice such highly charged ions would be too polarised in chemical environments to have much reality).

There are several kinds of electronic orbitals, denoted by the letters s, p, d, and f (g orbitals are expected to start being chemically active among elements after element 120). Each of these corresponds to an azimuthal quantum number l: s to 0, p to 1, d to 2, and f to 3. Every electron also corresponds to a spin quantum number s, which may equal either +1/2 or −1/2. Thus, the total angular momentum quantum number j = l + s equals l ± 1/2 (except for l = 0, where both electrons in each orbital have j = 0 + 1/2 = 1/2). The spin of an electron relativistically interacts with its orbit, and this interaction leads to a split of a subshell into two with different energies (the subshell with j = l − 1/2 is lower in energy, and its electrons are thus more difficult to extract): for instance, of the six 6p electrons, two become 6p<sub>1/2</sub> and four become 6p<sub>3/2</sub>. This is the spin–orbit splitting (sometimes also referred to as subshell splitting or jj coupling). It is most visible with p electrons, which do not play an important role in the chemistry of hassium, but the splittings for d and f electrons are of the same order of magnitude (quantitatively, spin–orbit splitting is expressed in energy units, such as electronvolts).

These relativistic effects are responsible for the expected increase of the ionization energy, decrease of the electron affinity, and increase of stability of the +8 oxidation state compared to osmium; without them, the trends would be reversed. Relativistic effects decrease the atomization energies of the compounds of hassium because the spin–orbit splitting of the d orbital lowers the binding energy between electrons and the nucleus and because relativistic effects decrease the ionic character in bonding.
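A crude, hydrogen-like estimate shows why these effects are so much larger for hassium than for osmium. The sketch below ignores screening, many-electron effects, and QED entirely, so it is an order-of-magnitude illustration rather than a real calculation; it uses the textbook results that a 1s electron's mean speed is roughly Zα times the speed of light and that its orbital radius contracts by roughly the relativistic mass factor.

```python
# Back-of-the-envelope, hydrogen-like estimate of the direct relativistic
# effect. Screening, many-electron effects, and QED are ignored; this only
# shows the order of magnitude, not a real relativistic calculation.
ALPHA = 1 / 137.036          # fine-structure constant

def one_s_relativity(Z):
    v_over_c = Z * ALPHA                      # <v>/c for a 1s electron, ~Z*alpha
    gamma = 1 / (1 - v_over_c ** 2) ** 0.5    # relativistic mass increase
    contraction = 1 - 1 / gamma               # rough fractional 1s radius shrink
    return v_over_c, gamma, contraction

for name, Z in [("Os", 76), ("Hs", 108)]:
    v, g, c = one_s_relativity(Z)
    print(f"{name} (Z={Z}): v/c ~ {v:.2f}, mass factor ~ {g:.2f}, "
          f"1s contraction ~ {c:.0%}")
```

On this rough estimate, hassium's innermost s shell contracts by nearly 40% (versus roughly 17% for osmium), and that contraction propagates outward through the direct and indirect effects described above.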
### Physical and atomic

The previous members of group 8 have relatively high melting points: Fe, 1538 °C; Ru, 2334 °C; Os, 3033 °C. Much like them, hassium is predicted to be a solid at room temperature, although its melting point has not been precisely calculated. Hassium should crystallize in the hexagonal close-packed structure (c/a = 1.59), similarly to its lighter congener osmium. Pure metallic hassium is calculated to have a bulk modulus (resistance to uniform compression) of 450 GPa, comparable with that of diamond, 442 GPa. Hassium is expected to be one of the densest of the 118 known elements, with a predicted density of 27–29 g/cm<sup>3</sup> vs. the 22.59 g/cm<sup>3</sup> measured for osmium. The atomic radius of hassium is expected to be around 126 pm.

Due to the relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, the Hs<sup>+</sup> ion is predicted to have an electron configuration of [Rn]5f<sup>14</sup>6d<sup>5</sup>7s<sup>2</sup>, giving up a 6d electron instead of a 7s electron, which is the opposite of the behaviour of its lighter homologues. The Hs<sup>2+</sup> ion is expected to have an electron configuration of [Rn]5f<sup>14</sup>6d<sup>5</sup>7s<sup>1</sup>, analogous to that calculated for the Os<sup>2+</sup> ion. In chemical compounds, hassium is calculated to display bonding characteristic of a d-block element, whose bonding will be carried out primarily by the 6d<sub>3/2</sub> and 6d<sub>5/2</sub> orbitals; compared to the elements of the previous periods, the 7s, 6p<sub>1/2</sub>, 6p<sub>3/2</sub>, and 7p<sub>1/2</sub> orbitals should be more important.

### Chemical

Hassium is the sixth member of the 6d series of transition metals and is expected to be much like the platinum group metals. Some of these properties were confirmed by gas-phase chemistry experiments. The group 8 elements display a wide variety of oxidation states, but ruthenium and osmium readily display their group oxidation state of +8; this state becomes more stable down the group. This oxidation state is extremely rare: among stable elements, only ruthenium, osmium, and xenon are able to attain it in reasonably stable compounds. Hassium is expected to follow its congeners and have a stable +8 state, but like them it should show lower stable oxidation states such as +6, +4, +3, and +2. Hassium(IV) is expected to be more stable than hassium(VIII) in aqueous solution. Hassium should be a rather noble metal. The standard reduction potential for the Hs<sup>4+</sup>/Hs couple is expected to be 0.4 V.

The group 8 elements show a distinctive oxide chemistry. All the lighter members have known or hypothetical tetroxides, MO<sub>4</sub>. Their oxidizing power decreases as one descends the group. FeO<sub>4</sub> is not known due to its extraordinarily large electron affinity—the amount of energy released when an electron is added to a neutral atom or molecule to form a negative ion—which results in the formation of the well-known oxyanion ferrate(VI), FeO<sub>4</sub><sup>2−</sup>. Ruthenium tetroxide, RuO<sub>4</sub>, which is formed by oxidation of ruthenium(VI) in acid, readily undergoes reduction to ruthenate(VI), RuO<sub>4</sub><sup>2−</sup>. Oxidation of ruthenium metal in air forms the dioxide, RuO<sub>2</sub>. In contrast, osmium burns to form the stable tetroxide, OsO<sub>4</sub>, which complexes with the hydroxide ion to form an osmium(VIII)-ate complex, [OsO<sub>4</sub>(OH)<sub>2</sub>]<sup>2−</sup>.
Therefore, hassium should behave as a heavier homologue of osmium by forming a stable, very volatile tetroxide, HsO<sub>4</sub>, which undergoes complexation with hydroxide to form a hassate(VIII), [HsO<sub>4</sub>(OH)<sub>2</sub>]<sup>2−</sup>. Ruthenium tetroxide and osmium tetroxide are both volatile due to their symmetrical tetrahedral molecular geometry and because they are charge-neutral; hassium tetroxide should similarly be a very volatile solid. The trend of the volatilities of the group 8 tetroxides is experimentally known to be RuO<sub>4</sub> < OsO<sub>4</sub> > HsO<sub>4</sub>, which confirms the calculated results. In particular, the calculated enthalpy of adsorption—the energy required for the adhesion of atoms, molecules, or ions from a gas, liquid, or dissolved solid to a surface—of HsO<sub>4</sub>, −(45.4±1) kJ/mol on quartz, agrees very well with the experimental value of −(46±2) kJ/mol.

## Experimental chemistry

The first goal for chemical investigation was the formation of the tetroxide; it was chosen because ruthenium and osmium form volatile tetroxides, being the only transition metals to display a stable compound in the +8 oxidation state. Although this choice for gas-phase chemical studies was clear from the beginning, chemical characterization of hassium was considered a difficult task for a long time. Although hassium isotopes were first synthesized in 1984, it was not until 1996 that a hassium isotope long-lived enough to allow chemical studies was synthesized. Unfortunately, this hassium isotope, <sup>269</sup>Hs, was synthesized indirectly from the decay of <sup>277</sup>Cn; not only are indirect synthesis methods not favourable for chemical studies, but the reaction that produced the isotope <sup>277</sup>Cn had a low yield—its cross section was only 1 pb—and thus did not provide enough hassium atoms for a chemical investigation. Direct synthesis of <sup>269</sup>Hs and <sup>270</sup>Hs in the reaction <sup>248</sup>Cm(<sup>26</sup>Mg,xn)<sup>274−x</sup>Hs (x=4 or 5) appeared more promising because the cross section for this reaction was somewhat larger, at 7 pb. This yield was still around ten times lower than that for the reaction used for the chemical characterization of bohrium. New techniques for irradiation, separation, and detection had to be introduced before hassium could be successfully characterized chemically.

Ruthenium and osmium have very similar chemistry due to the lanthanide contraction, but iron shows some differences from them; for example, although ruthenium and osmium form stable tetroxides in which the metal is in the +8 oxidation state, iron does not. In preparation for the chemical characterization of hassium, research focused on ruthenium and osmium rather than iron because hassium was expected to be similar to ruthenium and osmium, as the predicted data on hassium closely matched that of those two. The first chemistry experiments were performed using gas thermochromatography in 2001, with the synthetic osmium radioisotopes <sup>172</sup>Os and <sup>173</sup>Os as a reference. During the experiment, seven hassium atoms were synthesized using the reactions <sup>248</sup>Cm(<sup>26</sup>Mg,5n)<sup>269</sup>Hs and <sup>248</sup>Cm(<sup>26</sup>Mg,4n)<sup>270</sup>Hs. They were then thermalized and oxidized in a mixture of helium and oxygen gases to form hassium tetroxide molecules:
Hs + 2 O<sub>2</sub> → HsO<sub>4</sub>

The measured deposition temperature of hassium tetroxide was higher than that of osmium tetroxide, which indicated the former was the less volatile one, and this placed hassium firmly in group 8. The measured enthalpy of adsorption for HsO<sub>4</sub>, −46±2 kJ/mol, was significantly lower than the predicted value, −36.7±1.5 kJ/mol, indicating OsO<sub>4</sub> is more volatile than HsO<sub>4</sub>, contradicting earlier calculations that implied they should have very similar volatilities. For comparison, the value for OsO<sub>4</sub> is −39±1 kJ/mol. (The calculations that yielded a closer match to the experimental data came after the experiment, in 2008.) It is possible that hassium tetroxide interacts differently with silicon nitride than with silicon dioxide, the chemicals used in the detector; further research is required to establish whether there is a difference between such interactions and whether it has influenced the measurements. Such research would include more accurate measurements of the nuclear properties of <sup>269</sup>Hs and comparisons with RuO<sub>4</sub> in addition to OsO<sub>4</sub>.

In 2004, scientists reacted hassium tetroxide with sodium hydroxide, a reaction that is well known with osmium. This was the first acid–base reaction with a hassium compound, forming sodium hassate(VIII):

HsO<sub>4</sub> + 2 NaOH → Na<sub>2</sub>[HsO<sub>4</sub>(OH)<sub>2</sub>]

A team from the University of Mainz planned in 2008 to study the electrodeposition of hassium atoms using the new TASCA facility at GSI. Their aim was to use the reaction <sup>226</sup>Ra(<sup>48</sup>Ca,4n)<sup>270</sup>Hs. Scientists at GSI were hoping to use TASCA to study the synthesis and properties of the hassium(II) compound hassocene, Hs(C<sub>5</sub>H<sub>5</sub>)<sub>2</sub>, using the reaction <sup>226</sup>Ra(<sup>48</sup>Ca,xn). This compound is analogous to the lighter compounds ferrocene, ruthenocene, and osmocene, and is expected to have the two cyclopentadienyl rings in an eclipsed conformation like ruthenocene and osmocene and not in a staggered conformation like ferrocene. Hassocene, which is expected to be a stable and highly volatile compound, was chosen because it has hassium in the low formal oxidation state of +2—although the bonding between the metal and the rings is mostly covalent in metallocenes—rather than the high +8 state that had previously been investigated, and relativistic effects were expected to be stronger in the lower oxidation state. The highly symmetrical structure of hassocene and its low number of atoms make relativistic calculations easier. As of 2021, there are no experimental reports of hassocene.
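The picobarn-scale cross sections quoted in this section translate directly into how few hassium atoms such experiments can produce. The sketch below uses an assumed beam intensity and target thickness of typical order (not the actual parameters of the experiments described above), so the output is only an order-of-magnitude estimate.

```python
# Rough production-rate estimate from a fusion cross section.
# Beam intensity and target thickness are assumed, typical-order values,
# NOT the parameters of the actual hassium chemistry experiments.
AVOGADRO = 6.022e23

def atoms_per_day(cross_section_pb, beam_ions_per_s, target_mg_per_cm2,
                  target_molar_mass):
    sigma_cm2 = cross_section_pb * 1e-36                               # 1 pb = 1e-36 cm^2
    areal_atoms = (target_mg_per_cm2 * 1e-3 / target_molar_mass) * AVOGADRO
    rate_per_s = beam_ions_per_s * areal_atoms * sigma_cm2
    return rate_per_s * 86400

# Assumed: ~3e12 ions/s of 26Mg on a 0.8 mg/cm^2 curium-248 target
print(f"7 pb: ~{atoms_per_day(7, 3e12, 0.8, 248):.1f} atoms/day")
print(f"1 pb: ~{atoms_per_day(1, 3e12, 0.8, 248):.1f} atoms/day")
```

At 7 pb this works out to a few atoms per day, roughly consistent with the seven atoms reported for the 2001 thermochromatography experiment; at 1 pb it drops below one atom per day, which is why the indirect route through <sup>277</sup>Cn was unworkable for chemical studies.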
# Not My Responsibility
Not My Responsibility is a 2020 American short film written and produced by singer-songwriter Billie Eilish. A commentary on body shaming and double standards placed upon young women's appearances, it features a monologue from Eilish about the media scrutiny surrounding her body. The film is spoken-word and stars Eilish in a dark room, where she gradually undresses before submerging herself in a black substance. The film premiered during Eilish's Where Do We Go? World Tour on March 9, 2020, as a concert interlude, and was released online on May 26, 2020. Critics gave positive reviews, praising the commentary and tone, which they considered empowering. The film's audio was later included as a song on Eilish's second studio album, Happier Than Ever (2021). Some music journalists described it as the album's thematic centerpiece; others questioned its appearance on the tracklist, feeling that it lost its emotional impact without the visuals.

## Background and release

American singer-songwriter Billie Eilish released her debut studio album, When We All Fall Asleep, Where Do We Go?, in March 2019, to commercial success; the album assisted her rise to widespread recognition. Her fashion style at the time, specifically her choice to wear baggy clothing, attracted public attention and scrutiny. She wore such clothes to avoid sexual objectification, as she was extremely conscious of her body as a teenager and had struggled with her self-image since the age of 11. Eilish faced comments that she was undesirable and unfeminine for it, and that she needed to "act like a woman" to be more attractive. Upon turning 18, Eilish thought about wearing less oversized clothes, but believed that her detractors would not stop shaming her body anyway; if she did so, she might be called a slut, a "fat cow", and a hypocrite who was selling her body. In response, Eilish, who had been using her platform to spread body positivity and counter the culture of body shaming, wrote and produced the short film Not My Responsibility. It addresses misogynistic double standards placed upon young women's appearances, with a focus on the public discussion around Eilish's body. A spoken-word piece, Not My Responsibility premiered during the Miami date of her Where Do We Go? World Tour on March 9, 2020, as a concert interlude. The film was uploaded onto Eilish's YouTube channel on May 26, 2020. Uproxx music editor Derrick Rossignol wrote that, at the time of its release, the film marked Eilish's "biggest statement" about body shaming in her career.

## Synopsis

Not My Responsibility is set in a dimly lit room and begins with Eilish in a black jacket. As electronic music plays in the background, she gradually undresses until she is in nothing but a necklace and a black sleeveless shirt. She takes off the shirt and reveals a black bra underneath. Slowly, Eilish submerges herself in a pool of black water and resurfaces, fully covered in the substance. The film features commentary from Eilish while she undresses. She comments on the public discussion around her physical appearance and acknowledges the varying opinions people hold of her, but she questions whether they "really know" her enough to make assumptions about her body. She criticizes the way in which they decide her worth based on those assumptions. Eilish addresses the double standards she faces for wearing anything she likes: "If I wear what is comfortable, I am not a woman. If I shed the layers, I'm a slut."
She concludes with the lines, "Is my value based only on your perception? Or is your opinion of me not my responsibility?"

## Critical reception

Not My Responsibility received praise from critics. Lars Brandle wrote for Billboard that Eilish got to demonstrate her "creative juices" with the film's visuals, and he commented positively on the background music. Other music journalists, including Althea Legaspi of Rolling Stone, Ruth Kinane of Entertainment Weekly, and Dorany Pineda of the Los Angeles Times, found Not My Responsibility an effective, powerful takedown of sexist beauty standards. Riley Runnells of Paper praised Eilish for showcasing her vulnerability through the film's thesis, while Carolyn Twersky of Seventeen praised the zealous manner in which she conveyed the sentiment. Teen Vogue's Laura Pitcher expressed empathy for Eilish's longstanding experience with body shaming and, with the film, felt inspired by Eilish's continued dedication to speaking out against the misogynistic policing of how women look.

## As a song

The short film's audio appears as the ninth track of Eilish's second studio album, Happier Than Ever. It was released on July 30, 2021, through Darkroom and Interscope Records. The album's lyrical themes discuss the struggles that young women face in the entertainment industry: emotional abuse, power imbalance, and misogyny. In a Happier Than Ever commentary for Spotify, Eilish described the song's lyrics as "some of my favorite words that I've ever written", though she felt nobody paid attention to its message. An ambient, electropop track that uses synthesizers, "Not My Responsibility" was written in part by Eilish; her older brother, Finneas O'Connell, receives co-writing and production credits. Dave Kutch and Rob Kinelski worked on the mastering and mixing, respectively. The song's instrumental was sampled to create the beat for the following album track, "Overheated", into which the music at the end of "Not My Responsibility" transitions. Eilish performed "Not My Responsibility" as part of the Disney+ concert film Happier Than Ever: A Love Letter to Los Angeles, released on September 3, 2021. She also included it in the set list of a 2022–2023 world tour in support of Happier Than Ever. Upon the album's release, "Not My Responsibility" charted in four countries and peaked at number 125 on the Billboard Global 200.

### Critical reception

Some critics considered "Not My Responsibility" to be Happier Than Ever's centerpiece. According to them, the song exemplified the album's crucial motifs: the intense media gossip around Eilish as a young woman, as well as her reflection on its negative effects. Analyzing Happier Than Ever for Slate, Carl Wilson wrote that amid all the speculation about her personal life, "the focus on her body has clearly hit Eilish hardest". "Not My Responsibility", for Pitchfork's Quinn Moreland, sets the tone for the second half of the album, which deals with topics such as power dynamics, voyeurism, and sexuality. Sophie Williams of NME argued that its approach to the media scrutiny of Eilish made it one of the "most powerful and haunting songs" in her discography. In a review of Happier Than Ever for The A.V. Club, Alex McLevy observed that "Not My Responsibility" felt more like a TED talk than a song that contained any artistry. Insider's Callie Ahgrim, while appreciating its commentary, felt that it could have been excluded from the album's tracklist.
She believed that it lost its impact without the visuals from the short film—a sentiment that Moreland and McLevy echoed. Courteney Larocca, in reply to Ahgrim, wrote: "slapping it haphazardly onto an official tracklist only evokes an eye-roll and a guarantee of pressing skip". Other critics described its lyrics as pretentious or excessive in the context of the album.

### Credits and personnel

Credits are adapted from the liner notes of Happier Than Ever.

- Billie Eilish – vocals, vocal engineering, songwriting
- Finneas O'Connell – songwriting, production, engineering, vocal arranging, bass, drum programming, synthesizer
- Dave Kutch – mastering
- Rob Kinelski – mixing
# Pomona College
Pomona College (/pəˈmoʊnə/ pə-MOH-nə) is a private liberal arts college in Claremont, California. It was established in 1887 by a group of Congregationalists who wanted to recreate a "college of the New England type" in Southern California. In 1925, it became the founding member of the Claremont Colleges consortium of adjacent, affiliated institutions. Pomona is a four-year undergraduate institution. It offers 48 majors in liberal arts disciplines and roughly 650 courses, as well as access to more than 2,000 additional courses at the other Claremont Colleges. Its campus is in a residential community 35 miles (56 km) east of downtown Los Angeles, near the foothills of the San Gabriel Mountains. Pomona has the lowest acceptance rate of any U.S. liberal arts college as of 2021 and is considered the most prestigious liberal arts college in the American West and one of the most prestigious in the country. Its endowment, as of September 2023, makes it one of the 10 wealthiest schools in the U.S. on a per-student basis. Nearly all students live on campus, and the student body is noted for its racial, geographic, and socioeconomic diversity. The college's athletics teams, the Sagehens, compete jointly with Pitzer College in the SCIAC, a Division III conference. Prominent alumni of Pomona include Oscar, Emmy, Grammy, and Tony award winners; U.S. Senators, ambassadors, and other federal officials; Pulitzer Prize recipients; billionaire executives; a Nobel Prize laureate; National Academies members; and Olympic athletes. The college is a top producer of Fulbright scholars and recipients of other fellowships.

## History

### Founding era

Pomona College was established as a coeducational and nonsectarian Christian institution on October 14, 1887, amidst a real estate boom and anticipated population influx precipitated by the arrival of a transcontinental railroad to Southern California. Its founders, a regional group of Congregationalists, sought to create a "college of the New England type", emulating the institutions where many of them had been educated. Classes first began at Ayer Cottage, a rental house in Pomona, California, on September 12, 1888, with a permanent campus planned at Piedmont Mesa four miles north of the city. That year, as the real estate bubble burst, making the Piedmont campus financially untenable, the college was offered the site of an unfinished hotel (later renamed Sumner Hall) in the nearby, recently founded town of Claremont. It moved there but kept its name. Trustee Charles B. Sumner led the college during its first years, helping hire its first official president, Cyrus G. Baldwin, in 1890. The first graduating class, in 1894, had 11 members. Pomona suffered through a severe financial crisis during its early years, but raised enough money to add several buildings to its campus. Although the first Asian and black students enrolled in 1897 and 1900, respectively, the student body (like most others of the era) remained almost all white throughout this period. In 1905, during president George A. Gates' tenure, the college acquired a 64-acre (26 ha) parcel of land to its east known as the Wash. In 1911, as high schools became more common in the region, the college eliminated its preparatory department, which had taught pre-college level courses.
The following year, it committed to a liberal arts model, soon after turning its previously separate schools of art and music into departments within the college. In 1914, the Phi Beta Kappa honor society established a chapter at the college. Daily attendance at chapel was mandated until 1921, and student culture emphasized athletics and academic class rivalries. During World War I, male students were divided into three military companies and a Red Cross unit to assist in the war effort. ### Mid-20th century Confronted with growing demand in the 1920s, Pomona's fourth president, James A. Blaisdell, considered whether to grow the college into a large university that could acquire additional resources or remain a small institution capable of providing a more intimate educational experience. Seeking both, he pursued an alternative path inspired by the collegiate university model he observed at Oxford, envisioning a group of independent colleges sharing centralized resources such as a library. On October 14, 1925, Pomona's 38th anniversary, the college founded the Claremont Colleges consortium. Construction of the Clark dormitories on North Campus (then the men's campus) began in 1929, a reflection of president Charles Edmunds' prioritization of the college's residential life. Edmunds, who had previously served as president of Lingnan University in Guangzhou, China, inspired a growing interest in Asian culture at the college and established its Asian studies program. Pomona's enrollment declined during the Great Depression as students became unable to afford tuition, and its budget was slashed by a quarter. The college reoriented itself toward wartime activities again during World War II, hosting an Air Force military meteorology program and Army Specialized Training Program courses in engineering and foreign languages. ### Postwar transformations Pomona's longest-serving president, E. Wilson Lyon, guided the college through a transformational and turbulent period from 1941 to 1969. The college's enrollment rose above 1,000 following the war, leading to the construction of several residence halls and science facilities. Its endowment grew steadily, due in part to the introduction in 1942 of a deferred giving fundraising scheme pioneered by Allen Hawley called the Pomona Plan, where participants receive a lifetime annuity in exchange for donating to the college upon their death. The plan's model has since been adopted by many other colleges. Lyon made several progressive decisions relating to civil rights, including supporting Japanese-American students during internment and establishing an exchange program in 1952 with Fisk University, a historically black university in Tennessee. He and dean of women Jean Walton ended the gender segregation of Pomona's residential life, first with the opening of Frary Dining Hall (then part of the men's campus) to women beginning in 1957 and later with the elimination of parietal rules in the late 1960s and the introduction of co-educational housing in 1968. The student body, influenced by the countercultural revolution, became less socially conservative and more politically engaged in this era. Protesters opposed to the Vietnam War occupied Sumner Hall to obstruct Air Force recruiters in 1968 and forced the cancellation of classes at the end of the spring 1970 semester. The college's ethnic diversity also began to increase, and activists successfully pushed the consortium to establish black and Latino studies programs in 1969. 
A bomb exploded at the Carnegie Building that February, permanently injuring a secretary; no culprit was ever identified. During the tenure of president David Alexander from 1969 to 1991, Pomona gained increased prominence on the national stage. The endowment increased tenfold, enabling the construction and renovation of a number of buildings. Several identity-based groups, such as the Pomona College Women's Union (founded in 1984), were established. In the mid-1980s, out-of-state students began to outnumber in-state students. In 1991, the college converted the dormitory basements used by fraternities into lounges, arguing that this created a more equitable distribution of campus space. The move lowered the profile of Greek life on campus.

### 21st century

In the 2000s, under president David W. Oxtoby, Pomona began placing more emphasis on reducing its environmental impact, committing in 2003 to obtaining LEED certifications for new buildings and launching various sustainability initiatives. The college also entered partnerships with several college access groups (including the Posse Foundation in 2004 and QuestBridge in 2005) and committed to meeting the full demonstrated financial need of students through grants rather than loans in 2008. These efforts, combined with Pomona's previously instituted need-blind admission policy, resulted in increased enrollment of low-income and racial minority students. In 2008, it was discovered that Pomona's alma mater may have been originally written to be sung as the ensemble finale to a student-produced blackface minstrel show performed on campus in 1910. The college stopped singing it at convocation and commencement, alienating some alumni. Pomona requested proof of legal residency from employees amid a unionization drive by dining hall workers in 2011. Seventeen workers who were unable to provide documentation were fired, drawing national media attention and sparking criticism from activists; the dining hall staff voted to unionize in 2013. A rebranding initiative that year sought to emphasize students' passion and drive, angering students who thought it would lead to a more stressful culture. Several protests in the 2010s criticized the college's handling of sexual assault, leading to various reforms. In 2017, G. Gabrielle Starr became Pomona's tenth president; she is the first woman and first African American to hold the office. From March 2020 through the spring 2021 semester, the college switched to online instruction in response to the COVID-19 pandemic.

## Campus

Pomona's campus is in Claremont, California, an affluent suburban residential community 35 miles (56 km) east of downtown Los Angeles. It is directly northwest of the Claremont Village (the city's downtown commercial district) and directly south of the other contiguous Claremont Colleges. The area has a Mediterranean climate, and its terrain consists of a gentle slope from the alluvial fan of San Antonio Creek in the San Gabriel Mountains to the north. In its early years, Pomona quickly expanded from its initial home in Sumner Hall, constructing several buildings to accommodate its growing enrollment and ambitions. After 1908, development of the campus was guided by master plans from architect Myron Hunt, who envisioned a central quadrangle flanked by buildings connected via visual axes.
In 1923, landscape architect Ralph Cornell expanded on Hunt's plans, envisioning a "college in a garden" defined by native Southern California vegetation but incorporating global influences in the tradition of the acclimatization movement. President James Blaisdell's decision to purchase undeveloped land around Pomona while it was still available later gave the college room to grow and found the consortium. Many of the earlier buildings were constructed in the Mission Revival and Spanish Colonial Revival styles, with stucco walls and red terracotta tile roofs. Other and later construction incorporated elements of neoclassical, Victorian, Italian Romanesque, modern, and postmodern styles. As a result, the present campus features a blend of architectural styles. Most buildings are three or fewer stories in height, and are designed to facilitate both indoor and outdoor use. The campus consists of 88 facilities as of 2022, including 70 addressed buildings. It is bounded by First Street on the south, Mills and Amherst Avenues on the east, Eighth Street on the north, and Harvard Avenue on the west. It is informally divided into North Campus and South Campus by Sixth Street, with most academic buildings in the western half and a naturalistic area known as the Wash in the east. It has been featured in numerous films and television shows, often standing in for other schools. Pomona has undertaken initiatives to make its campus more sustainable, including requiring that all new construction be built to LEED Gold standards, replacing turf with drought-tolerant landscaping, and committing to achieving carbon neutrality without the aid of purchased carbon credits by 2030. The Association for the Advancement of Sustainability in Higher Education gave the college a gold rating in its 2018 Sustainable Campus Index. ### South Campus South Campus consists of mostly first-year and second-year housing and academic buildings for the social sciences, arts, and humanities. A row of four residence halls is south of Bonita Avenue, with Frank Dining Hall at the eastern end. Sumner Hall, the home of admissions and several other administrative departments, is to the north of the dormitories. Oldenborg Center, a foreign-language housing option that includes a foreign-language dining hall, is across from Sumner. South Campus has several arts buildings and performance venues. Bridges Auditorium ("Big Bridges") is used for concerts and speakers and has a capacity of 2,500. Bridges Hall of Music ("Little Bridges") is a concert hall with seating for 550. On the western edge of campus is the Benton Museum of Art, which has a collection of approximately 18,000 items, including Italian Renaissance panel paintings, indigenous American art and artifacts, and American and European prints, drawings, and photographs. The Seaver Theatre Complex has a 335-seat thrust stage theater and 125-seat black box theater, among other facilities. The Studio Art Hall garnered national recognition for its steel-frame design when it was completed in 2014. Pomona's main social science and humanities buildings are located west of College Avenue. They include the Carnegie Building, a neoclassical structure built in 1908 as a Carnegie library. Several historic Victorian houses line the southern portion of the avenue, including the Queen Anne–style Renwick House, which was listed on the National Register of Historic Places in 2016. 
Marston Quadrangle, a 5-acre (2 ha) lawn framed by California sycamore and coastal redwood trees, serves as a central artery for the campus, anchored by Carnegie on the west and Bridges Auditorium on the east. To its north are Alexander Hall, the college's central administration building, and the Smith Campus Center (SCC), home to many student services and communal spaces. East of the SCC are the Center for Athletics, Recreation and Wellness (Pomona's primary indoor athletics and recreation facility) and Smiley Hall, a dormitory built in 1908. At the intersection of Sixth Street and College Avenue are the college gates, built in 1914, which mark the historical northern edge of the campus. They bear two quotes from President Blaisdell. On the north is "Let only the eager, thoughtful and reverent enter here", and on the south is "They only are loyal to this college who departing bear their added riches in trust for mankind". Per campus tradition, enrolling students walk south through the gates during orientation and seniors walk north through them shortly before graduation. The less-developed 40-acre (16 ha) eastern portion of the campus is known as the Wash (formally Blanchard Park), and contains a large grove of coast live oak trees, as well as many of the college's athletics facilities, an outdoor amphitheater, an astronomical observatory, and the Pomona College Organic Farm, an experiment in sustainable agriculture.

### North Campus

North Campus was designed by architect Sumner Spaulding, and its initial phase was completed in 1930. It consists primarily of residential buildings for third- and fourth-year students and academic buildings for the natural sciences. The academic buildings are located to the west of North College Way. This area includes Dividing the Light (2007), a skyspace by Light and Space artist and alumnus James Turrell. The residence halls include the Clark halls (I, III, and V) and several more recent constructions. The North Campus dining hall, Frary Dining Hall, features a vaulted ceiling and is the location of the murals Prometheus (1930) by José Clemente Orozco, the first Mexican fresco in the U.S., and Genesis (1960) by Rico Lebrun.

### Other facilities

The college owns the 53-acre (21 ha) Trails Ends Ranch (a wilderness area in the Webb Canyon north of campus), the 320-acre (130 ha) Mildred Pitt Ranch in southeastern Monterey County, and the Halona Lodge retreat center in Idyllwild, California. The astronomy department built and operates a telescope at the Table Mountain Observatory in Big Pines, California. Along the north side of campus are several joint buildings maintained by The Claremont Colleges Services. The consortium also owns the Robert J. Bernard Field Station north of Foothill Boulevard.

## Organization and administration

### Governance

Pomona is governed as a private, nonprofit organization by a board of trustees responsible for overseeing the long-term interests of the college. The board consists of up to 42 members, most of whom are elected by existing members to four-year terms with a term limit of 12 years. It is responsible for hiring the college's president, approving budgets, setting overarching policies, and various other tasks. The president, in turn, oversees the college's general operation, assisted by administrative staff and a faculty cabinet.
Pomona operates under a shared governance model, in which faculty and students sit on many policymaking committees and have a degree of control over other major decisions.

### Academic affiliations

Pomona is the founding member of the Claremont Colleges (colloquially "7Cs", for "seven colleges"), a consortium of five undergraduate liberal arts colleges ("5Cs")—Pomona, Scripps, Claremont McKenna, Harvey Mudd, and Pitzer—and two graduate schools—Claremont Graduate University and Keck Graduate Institute. All are located in Claremont. Although each member has individual autonomy and a distinct identity, there is substantial collaboration through The Claremont Colleges Services (TCCS), a coordinating entity that manages the central library, campus safety services, health services, and other resources. Overall, the 7Cs have been praised by higher education experts for their close cooperation, although there have been occasional tensions. Pomona is the consortium's largest undergraduate institution and its wealthiest member. Pomona is a member of several other consortia of selective colleges, including the Consortium of Liberal Arts Colleges, the Oberlin Group, and the Annapolis Group. The college is accredited by the WASC Senior College and University Commission, which reaffirmed its status in 2021 with particular praise for its diversity initiatives.

### Finances, costs, and financial aid

Pomona's endowment, as of September 2023, is one of the 10 highest per student of any college or university in the U.S. Roughly half of its operating budget for the 2023–2024 academic year is funded by endowment earnings. In 2021, 46% of the budget was allocated to instruction, 1% to research, 1% to public service, 14% to academic support, 15% to student services, and 23% to institutional support. In 2021, Fitch Ratings gave the college a AAA bond credit rating, its highest rating, reflecting an "extremely strong financial profile". In 2022–2023, 52% of students received a financial aid package, with an average award of \$59,183; 40% of international students received aid, with an average award of \$67,160. The college meets the full demonstrated need of all admitted students, including international students, through grants rather than loans. It does not offer merit awards or athletic scholarships.

## Academics and programs

Pomona offers instruction in the liberal arts disciplines and awards the Bachelor of Arts degree. The college operates on a semester system, with a normal course load of four full-credit classes per semester. Thirty-two credits and a C average GPA are needed to graduate, along with the requirements of a major, a first-year critical inquiry seminar, at least one course in each of six "breadth of study" areas, proficiency in a foreign language, two physical education courses, a writing-intensive course, a speaking-intensive course, and an "analyzing difference" course (typically examining a type of structural inequality). Pomona offers 48 majors, most of which also have a corresponding minor. For the 2022 graduation cohort, 23% of students majored in the arts and humanities, 39% in the natural sciences, 22% in the social sciences, and 16% in interdisciplinary fields. Fifteen percent of students completed a double major, 23% completed a minor, and 3% completed multiple minors.
The college does not permit majoring in pre-professional disciplines such as medicine or law but offers academic advising for those areas and 3-2 engineering programs with Caltech, Dartmouth, and Washington University.

### Courses

On its own, Pomona offers approximately 650 courses per semester. Additionally, students may take a significant portion of their courses at the other Claremont Colleges, enabling access to approximately 2,700 courses total. The academic calendars and registration procedures across the colleges are synchronized and consolidated, and there are no additional fees for cross-enrollment. Students may also create independent study courses evaluated by faculty mentors. All classes at Pomona are taught by professors (as opposed to teaching assistants). The average class size is 15; for the fall 2022 semester, 92% of traditional courses had under 30 students, and only three courses had 50 or more students. Approximately four-fifths of the college's faculty members are full-time as of 2023, resulting in a 7∶1 ratio of students to full-time equivalent professors. Among full-time faculty, 37% are members of racial minority groups, 50% are women, and 96% have a doctorate or other terminal degree in their field. Students and professors often form close relationships, and the college provides faculty with free meals to encourage them to eat with students. Semesters end with a week-long final examination period preceded by two reading days. The college operates several resource centers to help students develop academic skills in quantitative tasks, writing, and foreign languages.

### Research, study abroad, and professional development

More than half of Pomona students conduct research with faculty. The college sponsors an annual Summer Undergraduate Research Program (SURP), in which more than 200 students are paid a stipend of up to \$5,600 to conduct research with professors or pursue independent research projects with professorial mentorship. The Pomona College Humanities Studio, established in 2018, supports research in the humanities. Pomona is home to the Pacific Basin Institute, a research institute that studies issues pertaining to the Pacific Rim. Approximately half of Pomona students study abroad. As of 2023, the college offers 69 pre-approved programs in 39 countries. Study-away programs are available for Washington, D.C., Silicon Valley, and the Marine Biological Laboratory in Massachusetts, and semester exchanges are offered at Colby, Spelman, and Swarthmore colleges. The Pomona College Career Development Office (CDO) provides students and alumni with career advising, networking, and other pre-professional opportunities. It runs the Pomona College Internship Program (PCIP), which provides stipends for completing unpaid or underpaid internships during the semester or summer; more than 250 students participate annually. The office connects students with alumni for networking and mentoring via the Sagehen Connect platform. During the 2015–2016 academic year, 175 employers hosted on-site informational events at the Claremont Colleges and 265 unique organizations were represented in 9 career fairs.

### Outcomes

Nearly all students in the 2022 entering class returned for their second year, giving Pomona one of the highest retention rates of any college or university in the U.S. For the 2016 entering class, 87% of students graduated within four years (among the highest rates of any U.S. college or university) and 94% graduated within six years.
Within 10 years, 81% of Pomona graduates attend graduate or professional school, according to a 2017 alumni survey. The college ranked 11th among all U.S. colleges and universities for doctorates awarded to alumni per capita, according to data collected by the National Science Foundation for 2012 to 2021. The top destinations between 2009 and 2018 (in order) were the University of California, Los Angeles; the University of California, Berkeley; Harvard University; the University of Southern California; and Stanford University. A 2023 analysis of the schools that send the most students per capita to the highest-ranked U.S. medical, business, and law schools placed Pomona 17th for medical schools, 22nd for business schools, and 14th for law schools. The top industries for graduates are technology; education; consulting and professional services; finance; government, law, and politics; arts, entertainment, and media; healthcare and social services; nonprofits; and research. Pomona alumni earn a median early career salary of \$73,700 and a median mid-career salary of \$146,400, according to 2023 survey data from compensation analytics company PayScale. Pomona ranks among the top producers of recipients of various competitive postgraduate fellowships, including the Churchill Scholarship, Fulbright Program, Goldwater Scholarship, Marshall Scholarship, National Science Foundation graduate research fellowship, and Rhodes Scholarship. ### Reputation and rankings Pomona is considered the most prestigious liberal arts college in the Western United States and one of the most prestigious in the country. However, among the broader public, it has less name recognition than many larger schools. The 2022 U.S. News & World Report Best Colleges Ranking places Pomona third in the national liberal arts colleges category out of 223 colleges. Pomona has been ranked in the top 10 liberal arts colleges every year by U.S. News since it began ranking them in 1984, and is one of five schools with such a history, alongside Amherst, Swarthmore, Wellesley, and Williams. Pomona has rated similarly in other college rankings. In 2015, the Forbes ranking placed it first among all colleges and universities in the U.S., drawing media attention. Pomona is the third most desirable college or university in the U.S., according to a 2020 analysis of admitted students' revealed preferences among their college choices conducted by the digital credential service Parchment. ## People ### Admissions Pomona offers three routes for students to apply: the Common Application, the QuestBridge application, and the Coalition Application. Applicants who want an earlier, binding decision can apply via early decision I or II; others apply through regular decision. Additionally, the college enrolls two 10-student Posse Foundation cohorts, from Chicago and Miami, in each class. Pomona considers various factors in its admissions process, placing greatest importance on course rigor, class rank, GPA, application essays, recommendations, extracurricular activities, talent, and character. Interviews, test scores, first generation status, geographic residence, race and ethnicity, volunteer work, and work experience are considered. Alumni relationships, religious affiliation, and level of interest are not considered. Admission is need-blind for students who are U.S. citizens, permanent residents, DACA recipients, undocumented, or graduates of a U.S. high school, and need-aware for international students. 
The college is part of many coalitions and initiatives targeted at recruiting underrepresented demographics. Pomona has the lowest acceptance rate of any national liberal arts college in the U.S. as of 2021. The number of transfer applicants admitted has varied by year; in 2022, Pomona admitted 36 of 487 applicants (7.4%).

### Student body

As of 2023, Pomona's student body consists of degree-seeking undergraduate students and a token number of non-degree-seeking students. Compared to its closest liberal arts peers, Pomona has been characterized as laid back, academically oriented, mildly quirky, and politically liberal. The student body is roughly evenly split between men and women, and 91% of students are under 22 years old. Approximately 62% of domestic students are non-white and 12% of students are international, making Pomona one of the most racially and ethnically diverse colleges in the U.S. The geographic origins of the student body are also diverse, with all 50 U.S. states, the major U.S. territories, and more than 60 foreign countries represented. Students from California make up 27%, with sizable concentrations from the other western states. The median family income of students was \$166,500 as of 2013, with 52% of students coming from families in the highest-earning 10% and 22% from the bottom 60%. The college has been increasing its enrollment of low-income students since the early 2000s, and was ranked second among all private institutions and eighth among all institutions in The New York Times' 2017 College Access Index, a measure of economic diversity. Various religious and spiritual beliefs are represented among students, with many leaning secular. Among students in the 2022 entering class who submitted test scores, the middle 50% scored 730–770 on the SAT evidence-based reading and writing section, 750–790 on the SAT math section, and 32–35 on the ACT. Among students with an official high school class rank, 31% were valedictorians, 91% ranked in the top tenth, and 98% ranked in the top quarter.

### Noted alumni and faculty

## Student life

### Residential life

Pomona is a residential campus, and nearly all students live on campus for all four years in one of the college's sixteen residence halls. All first-year students live on South Campus, and most third- and fourth-year students live on North Campus. Housing is offered in various configurations, including singles, one-room or two-room doubles, and "friendship suites" consisting of a cluster of rooms, often around a central common area. All incoming students are placed into a sponsor group, with ten to twenty peers and two or three upper-class "sponsors" tasked with easing the transition to college life but not enforcing rules (a duty given to resident advisors). Sponsor groups often share activities such as "fountaining", a tradition in which students are thrown into a campus fountain on their birthday. The program dates back to 1927 for women and was expanded in 1950 to include men. Pomona's social scene is intertwined with that of the other 5Cs, with many activities and events shared between the colleges. The college's alcohol policies are aimed at encouraging responsible consumption and include a strict ban on hard liquor on South Campus. Dedicated substance-free housing is also offered. Overall, drinking culture is present but does not dominate over other elements of campus life, nor does athletics culture.
Violations of the student code are typically handled by the student-run Judicial Council, known as "J-Board". Pomona's dining services are run in-house. All on-campus students are required to have a meal plan, which can be used at any of the Claremont Colleges' seven buffet-style dining halls. The menus emphasize sustainable and healthy options, and the food quality is generally praised. Every night Sunday through Wednesday, Frary Dining Hall opens for late-night snacks. Meal plans also include "Flex Dollars" usable at the various campus eateries, including the Coop Fountain, Coop Store, and sit-down Café 47 in the SCC.

### Campus organizations

Some extracurricular organizations at Pomona are specific to the college, whereas others are open to students at all of the Claremont Colleges. In total, there are nearly 300 clubs and organizations across the 5Cs. The Associated Students of Pomona College (ASPC) is Pomona's official student government. Composed of elected representatives and appointed committee members, ASPC distributes funding for clubs and organizations, represents the student body in discussions with the administration, runs student programming (such as the Yule Ball dance and Ski-Beach Day) through the Pomona Events Committee (PEC), and provides various student services such as an airport rideshare program. Pomona's yearbook, Metate, was founded in 1894 and discontinued in 2012. The college's official magazine, Pomona College Magazine, is published three times per year by the communications office. Pomona has numerous clubs and support offices that provide resources and mentoring programs for students with particular identities, including female, non-white, Asian, South Asian, Latino, black, indigenous, multi-ethnic or multi-racial, international, queer, religious, and undocumented or DACA recipient students. The college's first-generation and low-income community, FLI Scholars, has more than 200 members. The Campus Advocates and EmPOWER Center support survivors of sexual violence and work to promote consent culture. The Pomona Student Union (PSU) facilitates the discussion of political and social issues on campus by hosting discussions, panels, and debates with prominent speakers holding diverse viewpoints. Other speech and debate organizations include a mock trial team, model UN team, and debate union. Pomona's secret society, Mufti, is known for gluing small sheets of paper around campus with cryptic puns offering social commentary on campus happenings. Pomona's music department manages several ensembles, including an orchestra, band, choir, glee club, jazz ensemble, and Balinese gamelan ensemble. All students can receive free private music lessons. The Draper Center for Community Partnerships, established in 2009, coordinates Pomona's various community engagement programs. These include mentoring for local youth communities, English tutoring for Pomona staff, and volunteering trips over spring break. It also operates the Pomona Academy for Youth Success (PAYS), a three-year pre-college summer program for local low-income and first-generation students of color. Pomona has two remaining local Greek letter organizations, Sigma Tau and Kappa Delta, both of which are co-educational. Neither has special housing, and Greek life is not considered a major part of the social scene on campus the way it is at many other U.S. colleges.
### Traditions #### Forty-seven reverence #### Other traditions As part of Pomona's 10-day orientation, incoming students spend four days off campus completing an "Orientation Adventure" or "OA" trip. The OA program began in 1995, and is one of the oldest outdoor orientation programs in the U.S. Every spring, the college hosts "Ski-Beach Day", in which students visit a ski resort in the morning and then head to the beach after lunch. The tradition dates back to an annual mountain picnic established in 1891. Since the 1970s, Pomona has used a cinder block flood barrier along the northern edge of its campus, Walker Wall, as a free speech wall. Over the years, provocative postings on the wall have spawned numerous controversies. ### Transportation Pomona's campus is located immediately north of Claremont Station, where the Metrolink San Bernardino Line train provides regular service to Los Angeles Union Station (the city's main transit hub) and the Foothill Transit bus system connects to cities in the San Gabriel Valley and Pomona Valley. Pomona's "Green Bikes" program maintains a fleet of more than 300 bicycles that are rented free to students each semester. Non-first-year students are allowed to park on campus after registering their vehicle. The college has several Zipcar vehicles on campus that may be rented and owns vehicles that can be checked out for club and extracurricular purposes. PEC and SCC off-campus events are usually served with the college's "Sagecoach" passenger bus. ### Athletics Pomona's varsity athletics teams compete jointly with Pitzer College as the Pomona-Pitzer Sagehens. The 11 women's and 10 men's teams participate in NCAA Division III in the Southern California Intercollegiate Athletic Conference (SCIAC). Pomona-Pitzer's mascot is Cecil the Sagehen, a greater sage-grouse, and its colors are blue and orange. Its main rival is the Claremont-Mudd-Scripps Stags and Athenas (CMS), the other sports combination of the Claremont Colleges. Club and intramural sports are also offered in various areas, such as dodgeball, flag football, and surfing. The physical education department offers a variety of activity classes each semester, such as karate, playground games, geocaching, and social dance. #### Athletics history Pomona's first intercollegiate sports teams were formed in 1895. They competed under several names in the school's early years; the name "Sagehen" first appeared in 1913 and became the sole moniker in 1917. Pomona was one of the three founding members of the SCIAC in 1914, and its football team played in the inaugural game at the Los Angeles Coliseum in 1923. In 1946, Pomona joined with Claremont Men's College (which would later be renamed Claremont McKenna College) to compete as Pomona-Claremont. The teams separated in 1956, and Pomona's athletics program operated independently until it joined with Pitzer College in 1970.
643,758
Rudolf Caracciola
1,173,421,518
German/Swiss racing and motorcycle driver
[ "1901 births", "1959 deaths", "24 Hours of Le Mans drivers", "AAA Championship Car drivers", "European Championship drivers", "German racing drivers", "German sportspeople of Italian descent", "Grand Prix drivers", "House of Caracciolo", "International Motorsports Hall of Fame inductees", "Land speed record people", "Mille Miglia drivers", "National Socialist Motor Corps members", "People from Ahrweiler (district)", "Racing drivers from Rhineland-Palatinate", "Sportspeople from the Rhine Province" ]
Otto Wilhelm Rudolf Caracciola (30 January 1901 – 28 September 1959) was a racing driver from Remagen, Germany. He won the European Drivers' Championship, the pre-1950 equivalent of the modern Formula One World Championship, an unsurpassed three times. He also won the European Hillclimbing Championship three times – twice in sports cars, and once in Grand Prix cars. Caracciola raced for Mercedes-Benz during their original dominating Silver Arrows period, named after the silver colour of the cars, and set speed records for the firm. He was affectionately dubbed Caratsch by the German public, and was known by the title of Regenmeister, or "Rainmaster", for his prowess in wet conditions. Caracciola began racing while he was working as apprentice at the Fafnir automobile factory in Aachen during the early 1920s, first on motorcycles and then in cars. Racing for Mercedes-Benz, he won his first two Hillclimbing Championships in 1930 and 1931, and moved to Alfa Romeo for 1932, where he won the Hillclimbing Championship for the third time. In 1933, he established the privateer team Scuderia C.C. with his fellow driver Louis Chiron, but a crash in practice for the Monaco Grand Prix left him with multiple fractures of his right thigh, which ruled him out of racing for more than a year. He returned to the newly reformed Mercedes-Benz racing team in 1934, with whom he won three European Championships, in 1935, 1937 and 1938. Like most German racing drivers in the 1930s, Caracciola was a member of the Nazi paramilitary group National Socialist Motor Corps (NSKK), but never a member of the Nazi Party. He returned to racing after the Second World War, but crashed in qualifying for the 1946 Indianapolis 500. A second comeback in 1952 was halted by another crash, in a sports car race in Switzerland. After he retired, Caracciola worked as a Mercedes-Benz salesman targeting North Atlantic Treaty Organization (NATO) troops stationed in Europe. He died in the German city of Kassel, after suffering liver failure. He was buried in Switzerland, where he had lived since the early 1930s. He is remembered as one of the greatest pre-1939 Grand Prix drivers, a perfectionist who excelled in all conditions. His record of six German Grand Prix wins remains unbeaten. ## Early life and career Rudolf Caracciola was born in Remagen, Germany, just south of Bonn on 30 January 1901. He was the fourth child of Maximilian and Mathilde, who ran the Hotel Fürstenberg. His ancestors had migrated during the Thirty Years' War from Naples to the German Rhineland, where Prince Bartolomeo Caracciolo (nephew of Spanish-allegiance Field Marshal Tommaso Caracciolo) had commanded the Ehrenbreitstein Fortress near Koblenz. Caracciola was interested in cars from a young age, and from his fourteenth birthday wanted to become a racing driver. He drove an early Mercedes during the First World War, and gained his driver's license before the legal age of 18. After Caracciola's graduation from school soon after the war, his father wanted him to attend university, but when he died Caracciola instead became an apprentice in the Fafnir automobile factory in Aachen. Motorsport in Germany at the time, as in the rest of Europe, was an exclusive sport, mainly limited to the upper classes. As the sport became more professional in the early 1920s, specialist drivers, like Caracciola, began to dominate. Caracciola enjoyed his first success in motorsport while working for Fafnir, taking his NSU motorcycle to several victories in endurance events. 
When Fafnir decided to take part in the first race at the Automobil-Verkehrs- und Übungs-Straße (AVUS) track in 1922, Caracciola drove one of the works cars to fourth overall, the first in his class and the quickest Fafnir. He followed this with victory in a race at the Opelbahn in Rüsselsheim. He did not stay long in Aachen, however; in 1923, after punching a soldier from the occupying Belgian Army in a nightclub, he fled the city. He moved to Dresden, where he continued to work as a Fafnir representative. In April of that year, Caracciola won the 1923 ADAC race at the Berlin Stadium in a borrowed Ego 4 hp. In his autobiography, Caracciola said he only ever sold one car for Fafnir, but due to inflation, by the time the car was delivered "the money was just enough to pay for the horn and two headlights". Later in 1923, he was hired by the Daimler Motoren Gesellschaft as a car salesman at their Dresden outlet. Caracciola continued racing, driving a Mercedes 6/25/40 hp to victory in four of the eight races he entered in 1923. His success continued in 1924 with the new supercharged Mercedes 1.5-litre; he won 15 races during the season, including the Klausenpass hillclimb in Switzerland. He attended the Italian Grand Prix at Monza as a reserve driver for Mercedes, but did not take part in the race. He drove his 1.5-litre to five victories in 1925, and won the hillclimbs at Kniebis and Freiburg in a Mercedes 24/100/140 hp. With his racing career becoming increasingly successful, he abandoned his plans to study mechanical engineering.

## 1926–1930: Breakthrough

Caracciola's breakthrough year was in 1926. The inaugural German Grand Prix was held at the AVUS track on 11 July, but the date clashed with a more prestigious race in Spain. The newly merged company Mercedes-Benz, conscious of export considerations, chose the latter race to run their main team. Hearing this, Caracciola took a short leave from his job and went to the Mercedes office in Stuttgart to ask for a car. Mercedes agreed to lend Caracciola and Adolf Rosenberger two 1923 2-litre M218s, provided they entered not as works drivers but as independents. Rosenberger started well in front of the 230,000 spectators, but Caracciola stalled his engine. His riding mechanic, Eugen Salzer, jumped out and pushed the car to get it started, but by the time they began moving they had lost more than a minute to the leaders. It started to rain, and Caracciola passed many cars that had retired in the poor conditions. Rosenberger lost control at the North Curve on the eighth lap when trying to pass a slower car, and crashed into the timekeepers' box, killing all three occupants; Caracciola kept driving. In the fog and rain, he had no idea which position he was in, but resolved to keep driving so he could at least finish the race. When he finished the 20th and final lap, he was surprised to find that he had won the race. The German press dubbed him Regenmeister, or "Rainmaster", for his prowess in the wet conditions. Caracciola used the prize money to set up a Mercedes-Benz dealership on the prestigious Kurfürstendamm in Berlin. He also married his girlfriend, Charlotte, whom he had met in 1923 while working at the Mercedes-Benz outlet in Dresden. He continued racing in domestic competitions, returning again to Freiburg to compete in the Flying Kilometre race, where he set a new sports car record in the new Mercedes-Benz 2-litre Model K, and finished first.
Caracciola entered the Klausenpass hillclimb and set a new touring car record; he also won the touring car class at the Semmering hillclimb before driving a newly supercharged 1914 Mercedes Grand Prix car over the same route to set the fastest time of the day for any class. The recently completed Nürburgring was the host of the 1927 Eifelrennen, a race which had been held on public roads in the Eifel mountains since 1922. Caracciola won the first race on the track, and returned to the Nürburgring a month later for the 1927 German Grand Prix, but his car broke down and the race was won by Otto Merz. However, he won 11 competitions in 1927, almost all of them in the Ferdinand Porsche-developed Mercedes-Benz Model S. Caracciola regained his German Grand Prix title at the Nürburgring at the 1928 German Grand Prix, driving the new 7.1-litre Mercedes-Benz SS. He shared the driving with Christian Werner, who took over Caracciola's car when the latter collapsed with heat exhaustion at a pit stop. The German Grand Prix, like many other races at the time, ignored the official Grand Prix racing rules set by the Association Internationale des Automobile Clubs Reconnus (International Association of Recognized Auto Clubs, or AIACR), which limited weight and fuel consumption, and instead ran races under a Formula Libre, or free formula. As a result, Mercedes-Benz focused less on producing Grand Prix cars and more on sports cars, and Caracciola drove the latest incarnation of this line, the SSK, at the Semmering hillclimb, and further reduced his own record on the course by half a second. The inaugural Monaco Grand Prix was held on 14 April 1929. Caracciola, driving a 7.1-litre Mercedes-Benz SSK, started from the back row of the grid (which was allocated randomly), and battled Bugatti driver William Grover-Williams for the lead early on. However, his pit stop, which took four and a half minutes to refill his car with petrol, left him unable to recover the time, and he eventually finished third. He won the RAC Tourist Trophy in slippery conditions, and confirmed his reputation as a specialist in wet track racing. He partnered Werner in the Mille Miglia and Le Mans endurance races in 1930; they finished sixth in the former but were forced to retire after leading for most of the race in the latter after their car's generator burnt out. Caracciola took victory in the 1930 Irish Grand Prix at Phoenix Park, and won four hillclimbs to take the title of European Hillclimb Champion for the first time. However, he was forced to close his dealership in Berlin after the firm went bankrupt. ## 1931–1932: Move to Alfa Romeo Mercedes-Benz officially withdrew from motor racing in 1931—citing the global economic downturn as a reason for their decision—although they continued to support Caracciola and a few other drivers covertly, retaining manager Alfred Neubauer to run the 'independent' operation. In part because of the financial situation, Caracciola was the only Mercedes driver to appear at the 1931 Monaco Grand Prix, driving an SSKL (a shorter version of the SSK). Caracciola and Maserati driver Luigi Fagioli challenged the Bugattis of Louis Chiron and Achille Varzi for the lead early in the race, but when the SSKL's clutch failed Caracciola withdrew from the race. A crowd of 100,000 turned out for the German Grand Prix at the Nürburgring. Rain began to fall before the race, and continued as Caracciola chased Fagioli for the lead in the early laps. 
The spray from Fagioli's Maserati severely impaired Caracciola's vision, but he was able to pass to take the lead at the Schwalbenschwanz corner. The track began to dry on lap six, and Chiron's Bugatti, which was by then running second, began to catch the heavier Mercedes. Caracciola's pit stop, completed in record time, kept him ahead of Chiron, and despite the Bugatti lapping 15 seconds faster than the Mercedes late in the race, Caracciola won by more than one minute. Caracciola was lucky to escape from a crash in the Masaryk Grand Prix. He and Chiron were chasing Fagioli when Fagioli crashed into a wooden footbridge, bringing it crashing down onto the road. Caracciola and Chiron drove into a ditch at the side of the road to avoid the debris; while Chiron drove out of the ditch and was able to continue, Caracciola drove into a tree and retired. Despite this accident, Caracciola again performed strongly in the Hillclimbing Championship; he won eight climbs in his SSKL to take the title. Perhaps his most significant achievement of 1931 was his win in the Mille Miglia. The local fleet of Alfa Romeos battled for the lead early in the race, but when they fell back Caracciola in his SSKL was able to take control. His win, in record time, made him and his co-driver Wilhelm Sebastian (who allowed Caracciola to drive the entire race) the first foreigners to win the Italian race. The only other foreigners to win the race on the full course were Stirling Moss and Denis Jenkinson in 1955, also in a Mercedes-made car. Mercedes-Benz withdrew entirely from motor racing at the start of 1932 in the face of the economic crisis, so Caracciola moved to Alfa Romeo with a promise to return to Mercedes if they resumed racing. His contract stipulated he would begin racing for the Italian team as a semi-independent. Caracciola later wrote that the Alfa Romeo manager was defensive when he questioned him about this clause; Caracciola believed it was because the firm's Italian drivers did not believe he could adjust smoothly from the big Mercedes cars to the smaller Alfa Romeos. His first race for his new team was at the Mille Miglia; he led early in the race, but retired when a valve connection broke. Caracciola later wrote, "I can still see the expression on [Alfa Romeo driver Giuseppe] Campari's face when I arrived back at the factory. He smiled to himself as if to say, Well, didn't I tell you that one wasn't going to make it?" The next race was the Monaco Grand Prix, where Caracciola was again entered as a semi-independent. He ran fourth early in the race, but moved to second as Alfa Romeo driver Baconin Borzacchini pitted for a wheel change and the axle on Achille Varzi's Bugatti broke. Tazio Nuvolari, in the other Alfa Romeo, found his lead reduced rapidly as Caracciola closed in; with ten laps remaining in the race Caracciola was so close he could see Nuvolari changing gears. He finished the race just behind Nuvolari. The crowd jeered Caracciola: they believed he had deliberately lost for the team, denying them a fight for the win. However, on the strength of his performance, Caracciola was offered a full spot on the Alfa Romeo team, which he accepted. Alfa Romeo dominated the rest of the Grand Prix season. Nuvolari and Campari drove the light and newly introduced Alfa Romeo P3 at the Italian Grand Prix, while Borzacchini and Caracciola drove much heavier 8Cs. 
Caracciola was forced to retire when his car broke down, but he took over Borzacchini's car when the Italian was hit by a stone, and came third, behind Nuvolari and Fagioli. In the French Grand Prix, Caracciola, now driving a P3, battled Nuvolari for the lead early on. Alfa Romeo's dominance was so great and their cars so far ahead that the team could choose the top three finishing positions; thus Nuvolari won from Borzacchini and Caracciola, with the two Italians ahead of the German. The order was different at the 1932 German Grand Prix, where Caracciola won from Nuvolari and Borzacchini. Caracciola performed strongly in other races; he won the Polish and Monza Grands Prix and the Eifelrennen at the Nürburgring, and took five more hillclimbs to win that Championship for the third and final time. He was, however, beaten by the Mercedes-Benz of Manfred von Brauchitsch at the Avusrennen (the yearly race at the AVUS track). Von Brauchitsch drove a privately entered SSK with streamlined bodywork, and beat Caracciola's Alfa Romeo, which finished in second place. Caracciola was seen by the German crowd as having defected to the Italian team and was booed, while von Brauchitsch's all-German victory drew mass support.

## 1933–1934: Injury and return for Mercedes

Alfa Romeo withdrew its factory team from motor racing at the start of the 1933 season, leaving Caracciola without a contract. He was close friends with the French-Monegasque driver Louis Chiron, who had been fired from Bugatti, and while on vacation in Arosa in Switzerland the two decided to form their own team, Scuderia C.C. (Caracciola-Chiron). They bought three Alfa Romeo 8Cs (known as Monzas), and Daimler-Benz provided a truck to transport them. Chiron's car was painted blue with a white stripe, and Caracciola's white with a blue stripe. The new team's first race was at the Monaco Grand Prix. On the second day of practice for the race, while Caracciola was showing Chiron around the circuit (it was Chiron's first time in an Alfa Romeo), the German lost control heading into the Tabac corner. Three of the four brakes failed, which destabilised the car. Faced with diving into the sea or smashing into the wall, Caracciola instinctively chose the latter. Caracciola later recounted what happened after the impact:

> Only the body of the car was smashed, especially around my seat. Carefully I drew my leg out of the steel trap. Bracing myself against the frame of the body, I slowly extricated myself from the seat ... I tried to hurry out of the car. I wanted to show that nothing had happened to me, that I was absolutely unhurt. I stepped to the ground. At that instant the pain flashed through my leg. It was a ferocious pain, as if my leg were being slashed by hot, glowing knives. I collapsed, Chiron catching me in his arms.

Caracciola was carried on a chair to the local tobacco shop, and from there he went to the hospital. He had sustained multiple fractures of his right thigh, and his doctors doubted he would race again. He transferred to a private clinic in Bologna, where his injured leg remained in a plaster cast for six months. Caracciola defied the predictions of his doctors and healed faster than expected, and in the winter Charlotte took her husband back to Arosa, where the altitude and fresh air would aid his recovery. The rise to power of the Nazi Party on 30 January 1933 gave German motor companies, notably Mercedes-Benz and Auto Union, an opportunity to return to motor racing.
Having secured promises of funding shortly after the Nazis' rise to power, both companies spent the better part of 1933 developing their racing projects. Alfred Neubauer, the Mercedes racing manager, travelled to the Caracciolas' chalet in Lugano in November with a plan to sign him for the 1934 Grand Prix season if he was fit. Neubauer challenged Caracciola to walk, and although the driver laughed and smiled while he did so, Neubauer was not fooled: Caracciola was not yet fit. Nevertheless, he offered him a contract, provided he prove his fitness in testing at the AVUS track early in the next year. Caracciola agreed and went to Stuttgart to sign the contract. The trip wore him out so much he spent much of his time lying on his hotel bed recuperating. Upon his return to Lugano, another tragedy befell him. In February, Charlotte died when the party she was skiing with in the Swiss Alps was hit by an avalanche. Caracciola withdrew almost entirely from public life while he mourned, almost deciding to retire completely from motor racing. A visit from Chiron encouraged him to reconsider, and despite his initial reservations he was persuaded to drive the lap of honour before the 1934 Monaco Grand Prix. Although his leg still ached while he drove, the experience convinced him to return to racing. Caracciola tested the new Mercedes-Benz W25 at the AVUS track in April, and despite his injuries—his right leg had healed 5 centimetres (2.0 in) shorter than his left, leaving him with a noticeable limp—he was cleared to race. However, Neubauer withdrew the Mercedes team from their first race, also at the AVUS track, as their practice times compared too unfavourably to Auto Union's. Caracciola was judged not fit to race for the Eifelrennen at the Nürburgring, but made the start for the German Grand Prix at the same track six weeks later. He took the lead from Auto Union driver Hans Stuck on the outside of the Karussel on the 13th lap, but retired a lap later when his engine failed. He had better luck at the 1934 Italian Grand Prix in September. In very hot weather, Caracciola started from fourth and moved to second, where he trailed Stuck. After 59 laps, the pain in his leg overwhelmed him, and he pitted, letting teammate Fagioli take over his car. Fagioli won from Stuck's car, which by then had been taken over by Nuvolari. His best results in the rest of the season were a second place in the Spanish Grand Prix—he led before Fagioli passed him, much to the anger of Neubauer, who had ordered the Italian to hold position—and first at the Klausenpass hillclimb.

## 1935–1936: First Championship and rivalry with Rosemeyer

Caracciola took the first of his three European Drivers' Championships in 1935. Seven Grands Prix—the Monaco, French, Belgian, German, Swiss, Italian and Spanish—would be included for Championship consideration. He opened the Championship season with pole position in Monaco but retired just after the halfway point of the race; he then won in France and took the lead of the standings with another win in Belgium, ahead of Fagioli and von Brauchitsch, who shared the other Mercedes-Benz W25. Nuvolari won a surprise victory at the Nürburgring in his Alfa Romeo P3, ahead of Stuck and Caracciola. The Swiss Grand Prix was held at the Bremgarten Circuit in Bern, and Caracciola won from Fagioli and the new Auto Union star Bernd Rosemeyer.
Caracciola won the Spanish Grand Prix from Fagioli and von Brauchitsch; although his transmission failed at the Italian Grand Prix and he was forced to retire, his four wins allowed him to take the Championship. In the other races of the 1935 season, Caracciola won the Eifelrennen at the Nürburgring and finished second at the Penya Rhin Grand Prix in Barcelona. He also won the Tripoli Grand Prix, organised by the Libyan Governor-General Italo Balbo. The Grand Prix was held in the desert, around a salt lake, and because of the intense heat Neubauer was concerned that the tyres on the Mercedes-Benz cars would not last. Caracciola started poorly, but moved to third, after four pit stops to change tyres, by lap 30 of 40. He inherited the lead from Nuvolari and Varzi when the two Italians pitted, and held it to the finish, despite a late charge from Varzi. Caracciola later wrote that this was the race where he began to feel he had recovered from his crash in Monaco two years before, and he was now back among the contenders. Remaining in such a position would require Mercedes-Benz to produce a competitive car for the 1936 season. Although the chassis of the W25 was shortened, and the engine was significantly upgraded to 4.74 litres, the car proved inferior to the Type C developed by Auto Union. Mercedes had not improved the chassis to match the engine, and the W25 proved uncompetitive and unreliable. Despite this, Caracciola opened the season with a win in the rain at the Monaco Grand Prix, after starting from third position. He led the Hungarian Grand Prix early but retired with mechanical problems. At the German Grand Prix, he retired with a failed fuel pump, before taking over his teammate Hermann Lang's car; he later retired that car with supercharger problems. Caracciola led the Swiss Grand Prix for several laps, Rosemeyer trailing him closely, but the Clerk of the Course ordered Caracciola to cede the lead to Rosemeyer on the ninth lap after he was found to be blocking the Auto Union. The two had a heated argument after the race despite Caracciola's later retirement with a rear axle problem. Mercedes were so uncompetitive in 1936—Caracciola won only twice, in Monaco and Tunis—that Neubauer withdrew the team mid-season, leaving Rosemeyer to take the Championship for Auto Union.

## 1937: Second Championship

Mercedes-Benz returned to Grand Prix racing at the start of the 1937 season with a new car. The W125 was a vast improvement on its predecessor: its supercharged 5.6-litre straight-8 engine delivered significantly more power than the W25's, around 650 brake horsepower compared to 500. This was an incredible output for the time; it would not be surpassed in any form of racing until Can-Am racing in the late 1960s, nor in Grand Prix racing until the early 1980s. The first major race of 1937 was the Avusrennen, where 300,000 people turned out to see the cars race on the newly reconstructed track. In order to keep speeds consistently high, the north curve was turned into a steeply banked turn, apparently at the suggestion of Adolf Hitler. Driving a streamlined Mercedes-Benz, Caracciola won his heat against Rosemeyer, averaging around 250 kilometres per hour (160 mph), although a transmission failure forced him to retire in the final.
Following the AVUS race, Caracciola, along with Rosemeyer, Nuvolari and Mercedes' new driver, Richard Seaman, went to race in the revived Vanderbilt Cup in America, and in doing so missed the Belgian Grand Prix, which took place six days later. Caracciola led until lap 22, when he retired with a broken supercharger. Caracciola started from the second row of the grid at the German Grand Prix, but was into the lead soon after the start. There he remained to the finish, in front of von Brauchitsch and Rosemeyer. He took pole position at the Monaco Grand Prix three weeks later, and was soon engaged in a hard fight with von Brauchitsch. The Mercedes-Benz drivers took the lead from each other several times, but von Brauchitsch won after a screw fell into Caracciola's induction system during a pit stop, costing him three and a half minutes. Caracciola won his second race of the season at the Swiss Grand Prix. Despite heavy rain which made the Bremgarten Circuit slippery and hazardous, Caracciola set a new lap record, at an average speed of 169 kilometres per hour (105 mph), and cemented his reputation as the Regenmeister. For the first time, the Italian Grand Prix was held at the Livorno Circuit rather than the traditional venue of Monza. Caracciola took pole position, and despite two false starts caused by spectators pouring onto the track, held his lead for the majority of the race and won from his teammate Lang by just 0.4 seconds. In doing so Caracciola clinched the European Championship for the second time. He backed up the win with another at the Masaryk Grand Prix two weeks later. He trailed Rosemeyer for much of the race until the Auto Union skidded against a kerb and allowed the Mercedes into the lead. Caracciola married for the second time in 1937, to Alice Hoffman-Trobeck, who worked as a timekeeper for Mercedes-Benz. He had met her in 1932, when she was having an affair with Chiron. She was, at that time, married to Alfred Hoffman-Trobeck, a Swiss businessman and heir to a pharmaceutical empire. She had taken care of Caracciola after Charlotte died, and shortly after began an affair with him, unbeknownst to Chiron. They were married in June in Lugano, just before the trip to America. ## 1938: Speed records and third Championship On 28 January 1938, Caracciola and the Mercedes-Benz record team appeared on the Reichs-Autobahn A5 between Frankfurt and Darmstadt, in an attempt to break numerous speed records set by the Auto Union team. The system of speed records at the time used classes based on engine capacity, allowing modified Grand Prix cars, in this case a W125, to be used to break records. Caracciola had broken previous records—he reached 311.985 kilometres per hour (193.858 mph) in 1935—but these had been superseded by Auto Union drivers, first Stuck and then Rosemeyer. Driving a Mercedes-Benz W125 Rekordwagen, essentially a W125 with streamlined bodywork and a larger engine, Caracciola set a new average speed of 432.7 kilometres per hour (268.9 mph) for the flying kilometre and 432.4 kilometres per hour (268.7 mph) for the flying mile, speeds which remain to this day as some of the fastest ever achieved on public roads. The day ended in tragedy however; Rosemeyer set off in his Auto Union in an attempt to break Caracciola's new records, but his car was struck by a violent gust of wind while he was travelling at around 400 kilometres per hour (250 mph), hurling the car off the road, where it rolled twice, killing its driver. 
Rosemeyer's death had a profound effect on Caracciola, as he later wrote: > What was the sense in men chasing each other to death for the sake of a few seconds? To serve progress? To serve mankind? What a ridiculous phrase in the face of the great reality of death. But then—why? Why? And for the first time, at that moment, I felt that every life is lived according to its own laws. And that the law for a fighter is: to burn oneself up to the last fibre, no matter what happens to the ashes. The Grand Prix formula was changed again in 1938, abandoning the previous system of weight restrictions and instead limiting piston displacement. Mercedes-Benz' new car, the W154, proved its abilities at the French Grand Prix, where von Brauchitsch won ahead of Caracciola and Lang to make it a Mercedes 1–2–3. Caracciola won two races in the 1938 season: the Swiss Grand Prix and the Coppa Acerbo; finished second in three: the French, German and Pau Grands Prix; and third in two: the Tripoli and Italian Grands Prix, to take the European Championship for the third and final time. The highlight of Caracciola's season was his win in the pouring rain at the Swiss Grand Prix. His teammate Seaman led for the first 11 laps before Caracciola passed him; he remained in the lead for the rest of the race, despite losing the visor on his helmet, severely reducing visibility, especially given the spray thrown up by tyres of the many lapped cars. ## 1939: Claims of favouritism towards Lang The 1939 season took place under the looming shadow of the coming Second World War, and the schedule was only halted with the invasion of Poland in September. The Championship season began with the Belgian Grand Prix in June. In heavy rain, Caracciola spun at La Source, got out and pushed his car off into the safety of the trees. Later in the race, Seaman left the track at the same corner, his car bursting into flames upon impact with the trees, where he was burnt alive in the cockpit. He died that night in hospital, after briefly regaining consciousness. The entire Mercedes team travelled to London for his burial. In the rest of the season, Caracciola won the German Grand Prix for the sixth and final time, again in the rain, after starting third on the grid. He finished second behind Lang at the Swiss and Tripoli Grands Prix. The latter race was seen as a major win for Mercedes-Benz. In an effort to halt German dominance at the event, the Italian organisers decided to limit engine sizes to 1.5 litres (the German teams at the time ran 3-litre engines), and announce the change at the last moment. The change was, however, leaked to Mercedes-Benz well in advance, and in just eight months the firm developed and built two W165s under the new restrictions; both of them beat the combined might of 28 Italian cars, much to the disappointment of the organisers. Caracciola believed that the Mercedes-Benz team were favouring Lang during the 1939 season; in a letter sent to Mercedes' brand owner Daimler-Benz CEO Dr. Wilhelm Kissel, he wrote: > I see little chance of the situation changing at all. Starting with Herr (Mr.) Sailer [Max Sailer, then the head of the Mercedes racing division] through Neubauer, down to the mechanics, there is an obsession with Lang. Herr Neubauer admitted frankly to Herr von Brauchitsch that he was standing by the man who has good luck, and whom the sun shines on ... I really enjoy racing and want to go on driving for a long time. However, this presupposes that I fight with the same weapons as my stablemates. 
Yet this will be hardly possible in the future, as almost all the mechanics and engine specialists in the racing division are on Lang's side ... Despite Caracciola's protests, Lang was declared the 1939 European Champion by the NSKK (Nationalsozialistisches Kraftfahrkorps, or National Socialist Motor Corps)—although this was never ratified by the AIACR, and Auto Union driver Hermann Paul Müller may have a valid claim to the title under the official scoring system—and motor racing was put on hold upon the outbreak of war.

## War, comeback and later years

Caracciola and his wife Alice returned to their home in Lugano. For the duration of the war he was unable to drive; the rationing of petrol meant motor racing was unfeasible. The pain in his leg grew worse, and they went back to the clinic in Bologna to consult a specialist. Surgery was recommended, but Caracciola decided against that option, deterred by the minimum three months it would take to recover from the operation. He spent much of the last part of the war—from 1941 onwards—attempting to gain possession of the two W165s used at the 1939 Tripoli Grand Prix, with a view to maintaining them for the duration of the hostilities. When they finally arrived in Switzerland in early 1945, they were confiscated as German property by the Swiss authorities. He was invited to participate in the 1946 Indianapolis 500, and originally intended to drive one of the W165s, but was unable to have them released in time by Swiss customs. Nevertheless, he headed to the United States to watch the race. Joe Thorne, a local team owner, offered him one of his Thorne Engineering Specials to drive, but during a practice session before the race Caracciola suffered his second major accident when he was hit on the head by an object, believed to be a bird, and crashed into the south wall. His life was saved by a tank driver's helmet the organisers insisted he wear. He suffered a severe concussion and was in a coma for several days. Tony Hulman, owner of the Indianapolis Motor Speedway, invited Caracciola and his wife to stay at his lodge near Terre Haute to let him fully recover. Caracciola returned to racing in 1952, when he was recalled to the Mercedes-Benz factory team to drive the new Mercedes-Benz W194 in sports car races. The first major race with the car was the Mille Miglia, alongside Karl Kling and his old teammate Hermann Lang. Kling finished second in the race, Caracciola fourth. It later emerged that Caracciola had been given a car with an engine inferior to those of his teammates, perhaps because of a lack of time to prepare for the race. Caracciola's career ended with his third major crash; during a support race for the 1952 Swiss Grand Prix, the brakes on his 300SL locked and he skidded into a tree at the fast, tree-lined Bremgarten circuit, fracturing his left leg. After his retirement from racing, he continued to work for Daimler-Benz as a salesman, targeting NATO troops stationed in Europe. He organised shows and demonstrations which toured military bases, leading in part to an increase in Mercedes-Benz sales during that period. In early 1959, he became sick and developed signs of jaundice, which worsened despite treatment. Later in the year he was diagnosed with advanced cirrhosis. On 28 September 1959, in Kassel, Germany, he suffered liver failure and died, aged 58. He was buried in Lugano, the Swiss town he had made his home.

## Nazi interactions

Caracciola first met Adolf Hitler, the leader of the Nazi Party, in 1931.
Hitler had ordered a Mercedes-Benz 770, at that point Mercedes' most expensive car, but due to the amount of time spent upgrading the car in line with the Nazi leader's wishes, the delivery was late. To mollify Hitler's anger, Caracciola was dispatched by Mercedes to deliver the car to the Brown House in Munich. Caracciola drove Hitler and his niece Geli Raubal around Munich to demonstrate the car. He later wrote (after the fall of the Nazi Party) that he was not particularly awed by Hitler: "I could not imagine that this man would have the requirements for taking over the government someday." Like most German racing drivers in Nazi Germany, Caracciola was a member of the NSKK, a paramilitary organisation of the Nazi Party devoted to motor racing and motor cars; during the Second World War it handled transport and supply. In reports on races by German media Caracciola was referred to as NSKK-Staffelführer Caracciola, the equivalent of a Squadron Leader. After races in Germany the drivers took part in presentations to the crowd coordinated by NSKK leader Adolf Hühnlein and attended by senior Nazis. Although he wrote after the fall of the Nazi regime that he found such presentations dull and uninspiring, Caracciola occasionally used his position as a famous racing driver to publicly support the Nazi regime; for example, in 1938, while supporting the Nazi platform at the Reichstag elections, he said, "[t]he unique successes of these new racing cars in the past four years are a victorious symbol of our Führer's (Hitler's) achievement in rebuilding the nation." Despite this, when Caracciola socialised with the upper Nazi echelons he did so merely as an "accessory", not as an active member, and at no time was he a member of the Nazi Party. According to his autobiography, he turned down a request from the NSKK in 1942 to entertain German troops, as he "could not find it in myself to cheer up young men so that they would believe in a victory I myself could not believe in". Caracciola lived in Switzerland from the early 1930s, and despite strict currency controls, his salary was paid in Swiss francs. During the war, he continued to receive a pension from Daimler-Benz, until the firm ceased his payments under pressure from the Nazi Party in 1942.

## Legacy

Caracciola is remembered—along with Nuvolari and Rosemeyer—as one of the greatest pre-1939 Grand Prix drivers. He has a reputation as a perfectionist who very rarely had accidents or caused mechanical failures in his cars, and who could deliver when needed regardless of the conditions. His relationship with Mercedes racing manager Alfred Neubauer, one of mutual respect, is often cited as a contributing factor to his success. After Caracciola's death, Neubauer described him as:

> ... the greatest driver of the twenties and thirties, perhaps even of all time. He combined, to an extraordinary extent, determination with concentration, physical strength with intelligence. Caracciola was second to none in his ability to triumph over shortcomings.

His trophy collection was donated to the Indianapolis Hall of Fame Museum, and he was inducted into the International Motorsports Hall of Fame in 1998. In 2001, on the 100th anniversary of his birth, a monument to Caracciola was erected in his birth town of Remagen, and on the 50th anniversary of his death in 2009 Caracciola Square was dedicated off the town's Rheinpromenade. Karussel corner at the Nürburgring was renamed after him, officially becoming the Caracciola Karussel.
As of 2023, Caracciola's record of six German Grand Prix victories remains unbeaten. During the inaugural official meeting of the 200 Mile Per Hour Club on 2 September 1953, Caracciola was inducted, on the strength of his achievements prior to the club's foundation, as one of the original three foreigners to have met the club's requirement of averaging over 200 mph across two runs.

## Racing record

### Complete 24 Hours of Le Mans results

### Complete European Championship results

### Notable victories

- European Drivers' Championship (3): 1935, 1937, 1938
- European Hill Climb Championship GP Cars (1): 1932
- European Hill Climb Championship Sports Cars (2): 1930, 1931

Grandes Épreuves:

- French Grand Prix (1): 1935
- Italian Grand Prix (2): 1934, 1937
- Belgian Grand Prix (1): 1935
- Spanish Grand Prix (1): 1935
- German Grand Prix (6): 1926, 1928, 1931, 1932, 1937, 1939
- Monaco Grand Prix (1): 1936
- Swiss Grand Prix (3): 1935, 1937, 1938

Minor Grands Prix:

- Eifelrennen (4): 1927, 1931, 1932, 1935
- Irish Grand Prix (1): 1930
- Avusrennen (1): 1931
- Lviv Grand Prix (1): 1932
- Monza Grand Prix (1): 1932
- Tripoli Grand Prix (1): 1935
- Tunis Grand Prix (1): 1936
- Czechoslovakian Grand Prix (1): 1937
- Coppa Acerbo (1): 1938

Other notable events:

- Tourist Trophy (1): 1929
- Mille Miglia (1): 1931
37,532,581
Pitfour estate
1,095,530,592
Ancient barony in North-East Scotland
[ "Aberdeenshire", "Buildings and structures in Buchan", "Country houses in Aberdeenshire", "Geography of Aberdeenshire", "Rural Scotland" ]
The Pitfour Estate, in the Buchan area of North-East Scotland, was an ancient barony encompassing most of the extensive Longside Parish, stretching from St Fergus to New Pitsligo. It was purchased in 1700 by James Ferguson of Badifurrow, who became the first Laird of Pitfour. The estate was substantially renovated by Ferguson and the following two generations of his family. At the height of its development in the 18th and 19th centuries the 50-square-mile (130 km<sup>2</sup>) property had several extravagant features including a two-mile racecourse, an artificial lake and an observatory. The original mansion house was extended before being rebuilt. The surrounding parklands were landscaped, major renovations were undertaken, and follies such as a small replica Temple of Theseus were constructed, in which George Ferguson, the fifth laird, was thought to keep alligators in a cold bath. The first three lairds transformed the estate into a valuable asset. Lord Pitfour, the second laird, purchased additional lands including Deer Abbey and Inverugie Castle. Pitfour's son, James Ferguson, who became the third laird, continued to improve and expand the estate by adding the lake and bridges, and establishing planned villages. The third laird died a bachelor with no children, so the estate passed to the elderly George Ferguson, who was only in possession of the property for a few months. George was already a wealthy man, owning lands in Trinidad and Tobago, but despite not directly improving the Pitfour estate he added considerable value to the inheritance passed to his illegitimate son. The extravagant lifestyles of the fifth and sixth lairds led to the sequestration of the estate, which was sold off piecemeal to pay their debts. What remained of the estate was sold after the First World War. The mansion house was demolished in about 1926, and its stone used to build council houses in Aberdeen. In more recent times some of the remaining buildings, including the temple, the bridges and the stables, have been classified as at high risk by Historic Scotland because their condition has become poor. The chapel was fully renovated and converted to a private residence in 2003; the observatory was purchased and restored by Banff & Buchan District Council (now Aberdeenshire Council) and can be accessed by the public. The racecourse has been forested since 1926, and the lake is used by members of a private fishing club. ## Early history The Pitfour estate in Mintlaw extended from St Fergus to New Pitsligo and encompassed most of the extensive Longside Parish. The meaning of Pitfour is given in the 1895 records of the Clan Fergusson as "cold croft", but the historian John Milne breaks the name into two parts and indicates the meaning as Pit being place and feoir or feur being grass. The Pitfour estate is shown on old maps as Petfouir or Petfour. It was formerly one of Scotland's largest and best-appointed estates and was referred to as "The Blenheim of Buchan", "The Blenheim of the North" and "The Ascot of the North" by the architectural historian Charles McKean. Scant early records exist of the lands, but Alexander Stewart (Alexandro Senescalli), the natural son of King Robert II of Scotland, was given the Pitfour lands together with those of Lunan by his father in 1383. However, writing in 1887 Cadenhead states the lands were sold to Stewart by Ricardus Mouet, also known as Richard Lownan. During the next three centuries, the lands had several different owners. 
Transactions show it passed to a burgess of Aberdeen in 1477 from Egidia Stewart; Walter Innes of Invermarkie gained feudal superiority to all Pitfour lands in 1493; and in 1506 the land was purchased by Thomas Innes, who died the following year. His son, John, inherited the property. It remained in the possession of the Innes family until at least 1581, when it was owned by James Innes and his wife Agnes Urquhart. Between 1581 and 1667, the lands were bought by George Morrison. His son William inherited the property in 1700, and immediately sold the estate to James Ferguson, who became the first Laird of Pitfour. The lands purchased by Ferguson were recorded in 1667 in a charter granted by Charles II and were stated as encompassing "the lands and Barony of Toux and Pitfour in the Parish of Old Deer and Sheriffdom of Aberdeen including the towns and lands of Mintlaw, Longmuir, Dumpston in the Parish of Longside and County of Aberdeen." Several other lands, including "the Barony of Aden with the Tower, Fortaliss, Mains and Manor Place thereof and pertinents of the same called Fortry, Rora Mill thereof, Croft Brewerie, Inverquhomrie and Yockieshill" were individually listed. State papers from the reign of Queen Anne in the 18th century record the lands in favour of James Ferguson. ## Lairds and subsequent development 1st laird James Ferguson—known as the Sheriff, reflecting the post he held, recognised by the Society of Advocates—bought the Pitfour estate after selling the lands of Badifurrow. He had inherited Badifurrow after demanding that his uncle Robert Ferguson should appear in court if he wished to contest the inheritance. Robert, nicknamed the Plotter, was in hiding to avoid charges of treachery, and after his non-appearance in court James Ferguson's inheritance was confirmed in mid-June 1700. At that time, the estate contained only a small country house. 2nd laird James was laird until his death in 1734, after which the estate passed to his eldest son, also called James, who was born in Pitfour soon after it was purchased. A solicitor like his father, he was promoted to the bench in 1764 and became Lord Pitfour. He continued to expand and improve the estate until his death in 1777, and set up the planned village of Fetterangus in 1752. Lord Pitfour purchased the lands of the last Earl Marischal, George Keith, which were adjacent to Pitfour, in 1766. They were considered the Earl Marischal's most significant property and had been forfeited when the Earl Marischal fell out of favour. He had bought it back from the York Buildings Company for £31,000, but Pitfour only paid £15,000 for it. The 8,000 acres (32 square kilometres) of land included Deer Abbey and Inverugie Castle, but consisted predominantly of peat bogs, woods and uncultivated land. This addition made the Pitfour estate the largest in the area, with more than 30,000 acres (120 square kilometres) stretching from Buchanhaven to Maud along the course of the River Ugie. 3rd laird The third laird, also named James, inherited the estate in 1777; he was usually referred to as the Member to differentiate him from previous generations. Like his forebears, he was an advocate but also became a Member of Parliament. He too continued to expand and improve the estate; he constructed a lake and a canal, and built the new mansion. He also expanded and altered Longside at the start of the 1800s, founded Mintlaw in 1813, assisted in the extension of New Deer and extended Buchanhaven. 
The Member died unmarried, childless and intestate in 1820. In normal circumstances his brother Patrick would have been his heir, but he died in battle in October 1780. 4th laird In 1820, the estate was inherited by the Member's younger brother, George Ferguson, who was by then in his seventies. He was known as the Governor, reflecting his appointment as Lieutenant Governor of Tobago. He was laird from September 1820, but died in December that year. The Governor had spent most of his life in Trinidad and Tobago, where he was a principal landowner, and had inherited the Castara Estate on Tobago from Patrick. George was appointed Lieutenant Governor of Tobago in 1779, and after a battle with the French in 1781 surrendered the island to the French on 2 June. The Governor returned to Britain, although the terms of the surrender meant he still owned the Castara estate and all the slaves who worked on it. George had illegitimate children with an unknown woman. He continued to buy estates in the Caribbean and returned there in 1793, staying until 1810. 5th laird The estate started to deteriorate after it was inherited by the Governor's illegitimate son George Ferguson, known as the Admiral because of his naval career. He was already heavily in debt when he became the fifth laird in 1821, but he still enjoyed a lavish lifestyle and undertook much extravagant construction on the estate, including the erection of follies. To cover his substantial gambling debts, he began to sell parcels of estate land, and upon inheriting Pitfour he began selling furniture, books, farm equipment and other items, realising more than £9,000. 6th laird After the Admiral's death in March 1867, the estate passed to his son, George Arthur Ferguson, the sixth and final laird. He served in the Grenadier Guards and eventually became a captain. He married Nina Maria Hood, the eldest daughter of Alexander Nelson Hood, 1st Viscount Bridport, in February 1861. Later that year Captain Ferguson was posted to Canada, where he was promoted to Lieutenant Colonel and where his first two sons, Arthur and Francis William, were born. His eldest son, Arthur, became Inspector of Constabulary for Scotland. Returning to Britain in 1864, the family had a nomadic lifestyle, but the sixth laird and his wife were extravagant and habitual gamblers. In June 1909, a trust deed was registered, and what remained of the estate was put on the market. After large parts of the land had been sold under the ownership of the sixth laird, the estate was listed by Bateman in 1883 as being just over 23,000 acres (93 square kilometres) with an income of £19,938; at the height of its development the estate had occupied 50 square miles (130 square kilometres), and was valued at £30 million. The last laird died in 1924 and is buried in Luton. Following its 20th-century decline, the estate changed hands several times until local farmer Hamish Watson purchased it in December 2010. The local historian Alex Buchan summed up the demise of the estate: "They thought the estate was here to provide them with money, to gamble, to travel, to simply fritter away and very quickly, within a couple of decades, they had wasted the whole lot." He added, "Eccentricity amounted to just squandering money." ## Mansion house The original small country house was first altered during the early 18th century. In 1809 the Sheriff's grandson James Ferguson, the third laird, employed the architect John Smith to design new accommodation. 
The resulting three-storey house, 98 feet (30 metres) square and 33 feet (10 metres) high, is reputed to have had 365 windows. When the fourth laird, George (the Governor), died in 1820, the estate was worth £300,000 with almost £35,000 of moveable assets. George Ferguson, the fifth laird (the Admiral) added a large, glazed gallery when he inherited the house. The Admiral had a lavish lifestyle and despite having a healthy income incurred heavy debts. When the Admiral died after 46 years of managing the estate it was mortgaged for £250,000, despite the sale of a number of the lands originally included in it. The house fell into disrepair under the ownership of George Arthur, the sixth and final laird, who had inherited his father's lifestyle. The entire estate was put on the market in September 1909 but remained unsold until after the First World War; the house and what remained of the estate were finally purchased by a speculator, Edgar Fairweather, from London in 1926. Fairweather bought several other Scottish estates, including those nearby at Auchmeddan and Strichen; he habitually reduced the estates into smaller holdings that he then sold on or rented out. The house was sold to an Aberdeen building company and was demolished sometime between 1927 and 1930. After demolition, the mansion's veranda was installed at the front of Kinloch Farmhouse in St Fergus. Other remains from the mansion have been discovered at the farmhouse, including a crest above the conservatory door and tiles inscribed with the Ferguson of Pitfour family crest. The stone from the mansion was transported to Aberdeen and used to construct Torry Junior Secondary School. ## Chapels The Fergusons were Episcopalian, and in 1766, the second laird, Lord Pitfour had a small Qualified Chapel built on the estate at Waulkmill. It was a large, plain building that could accommodate up to 500 people. Saplinbrae, a house that was initially used as a coaching inn after it was built under instruction from Lord Pitfour in 1756, was used as the minister's manse for the first chapel. A more modern chapel was built in 1850 after the Admiral had an argument with the Reverend Arthur Ranken, the minister at Old Deer. This was a small, private chapel for the use of the Pitfour estate. It was built in the Gothic style from rubble but was recast in 1871. A 60-foot (18-metre) tower with a battlemented top is at its western end. The chapel fell into disrepair, and by the 1980s it was a roofless ruin. In 1990 Historic Scotland said that Kinloch Farmhouse, in St Fergus featured a bench and chair salvaged from the Pitfour Chapel. In 2003, the second chapel was renovated and converted to a private residence. The chapel restoration won a "Highly commended" award for craftsmanship from Aberdeenshire Council in 2010; the council said the craftsmanship "allowed for the retention of the ecclesiastical spirit and integrity to remain prevalent both internally and externally." It was also "Highly Commended" in the conservation category. ## Stables and riding school The stables were built in 1820, during the early part of the Admiral's ownership of the estate, based on a design by John Smith; the buildings are sited to the rear of the mansion house. Built in a horseshoe shape, neoclassical design, the two-storey building was constructed in pinned rubble with granite dressings; grey granite was used for the parapet and quoins. The main buildings were originally harled. 
A corrugated asbestos hipped roof was at some point substituted for the original slate roof. It features a columned rotunda above a timber clock tower, which has a finial and domed copper roof. The pedimented centrepiece of the symmetrical front elevation is a segmental arch and has three panels set back between columns. Each side is bordered by wings of three bays with single-bay pavilions. The stables are connected to an adjoining two-storey house. They provided accommodation for ten horses and included four loose boxes, a harness room and a coachman's house; six bedrooms above were for servants. Two coach houses were later used as garages. Charles McKean describes the stables as "straddling the skyline like a palace". The stables are listed by Historic Scotland as being at very high risk. An indoor riding school slightly to the north-west of the stables measured 98 feet (30 metres) by 49 feet (15 metres). It was used to entertain guests when the facilities at the mansion house were not large enough. More than two hundred local farmers and other landowners celebrated the wedding of George Arthur, the sixth laird, in the riding school in 1861. In 1883 it was again used to entertain; on that occasion it was decorated with flags and Chinese lanterns, and pine flooring was laid. Later, it was used as indoor tennis courts before being demolished.

## Canal and lake

James Ferguson, the third laird, owned the estate during the Industrial Revolution in Britain. He began work on a canal between Pitfour and Peterhead in 1797, despite fierce opposition from adjoining landowners. The canal was proposed to cover about ten miles following the course of the River Ugie. Pitfour's canal is sometimes called the St Fergus and River Ugie Canal. Ferguson had thought about building the canal since 1793, but it was never completed because of "difficulties in effecting the necessary arrangements with neighbouring heritors." Objections were raised by the Merchant Maiden Hospital, which owned the land on the south side of the Ugie. Although the hospital was advised to take out an interdict to prevent the work, in January 1797 it thought its case was not strong enough. The hospital applied for an interdict four months later, however, when two miles (3.2 km) of the canal had been dug to the point where the north and south Ugie joined; it was granted in July 1797. A few years after starting work on the canal, Ferguson had a lake built on flat land to the front of the mansion house. The landscape gardener William S. Gilpin was carrying out work on the adjacent Strichen estate at about the same time, and it is assumed he helped with the work at Pitfour. The lake extends to almost 50 acres (20 hectares) and is 174 feet (53 metres) above sea level. Designed in the same style as the lake in Windsor Great Park, the lake was stocked with trout, both rainbow and brown; there were three bridges and four islands. The siting of the lake meant the driveway had to be moved, and ornate bridges were constructed to cross the water. Built from granite, the northern bridge has three arches with ashlar starlings, the southern bridge has a single arch and the third, smaller bridge crosses a large stream that drains into the lake. The neighbouring Russell family of Aden were concerned their land would be flooded when the lake was built, and their animosity was fully demonstrated when a bridge had to be jointly constructed by the two landowners over the River Ugie.
It was wide enough for carriages on the Pitfour side but too narrow on the Russells' half.

## Theseus temple

Alongside the lake was a six-bay Greek Doric temple, a small replica styled after the Temple of Theseus. Its exact date of construction is unknown; it may have been built during the time of James Ferguson, the third laird, or under the instruction of George Ferguson, the fifth laird. The local historian Alex Buchan attributes it to James, the third laird; according to Historic Scotland, it was built "probably circa 1835". Like the mansion house, the temple is credited to the architect John Smith. Measuring 8 metres (26 ft) by 16 metres (52 ft), it has six columns at both ends and thirteen columns down each side. It had a flat roof with an ornate wooden entablature and contained a cold-water bath in which George, the fifth laird, was believed to have kept alligators. As of 2013 the temple is in a ruinous state; it has been held up by scaffolding since 1992 and is listed by Historic Scotland as being in critical condition.

## Racecourse and observatory

George Ferguson (the Admiral) had a racecourse about 2.2 miles (3.5 kilometres) long and 52 feet (16 metres) wide built near White Cow Woods, an area which is quite flat. This led to the estate being called the "Ascot of the North". In 1845 the Admiral had an observatory built, again designed by the architect John Smith. It is an octagonal tower with a crenellated parapet and is symmetrical in design. The observatory stands at the top of a hill 396 feet (121 m) above sea level. The tower is 50 feet (15 m) high and is more than half a mile (1 kilometre) from the racecourse. It has three storeys with square windows on the upper floor, and was fully renovated by Banff & Buchan District Council (now Aberdeenshire Council) in 1983.

## Twentieth century

Sales of country estates became common around the 1920s. The annual tax payable had spiralled and was twenty times greater than in 1870, resulting in the break-up of many larger landholdings. Pitfour was no exception, and the dispersal of the estate continued piecemeal after the sequestration of George Arthur, the sixth laird. The main estate policies, including the lake and other land, were purchased by Bernard Drake in November 1926 when he bought Saplinbrae, the former minister's house. Drake was a partner in the electrical engineering company Drake and Gorham. Sixty years later, in 1986, the BBC Domesday Project recorded no ownership details but indicated that many of the buildings were in poor condition. Other surviving structures are used for storage by a farmer who also "manages the land".

## Recent times

At the end of 2012 Aberdeenshire Council gave the go-ahead for the present owner's planned restoration work on the temple and bridges, which he hoped would enhance existing facilities at nearby Drinnies Wood (which surrounds the Observatory), White Cow Woods and Aden Country Park. The lake is used regularly by local fishermen, and a fishing club with about 120 members was established in 2011. The rest of the estate is seldom used by local residents, many of whom are completely unaware of it.

## See also

- Destruction of country houses in 20th-century Britain
362,660
Atlantic City–Brigantine Connector
1,168,013,707
Highway in Atlantic City, New Jersey, US
[ "Atlantic City, New Jersey", "Limited-access roads in New Jersey", "Road tunnels in New Jersey", "State highways in New Jersey", "Transportation in Atlantic County, New Jersey", "Tunnels completed in 2001" ]
The Atlantic City–Brigantine Connector (officially the Atlantic City Expressway Connector; also known as the Atlantic City Connector or Brigantine Connector) is a connector freeway in Atlantic City, New Jersey, United States. It is a 2.37-mile (3.81 km) extension of the Atlantic City Expressway, connecting it to New Jersey Route 87, which leads into Brigantine via the Marina district of Atlantic City. Locally, the freeway is known as "the Tunnel", due to the tunnel along its route that passes underneath the Westside neighborhood. The connector is a state highway owned and operated by the South Jersey Transportation Authority (SJTA); it has an unsigned designation of Route 446X. Proposals for a similar connector road in Atlantic City date to 1964; planning began in 1995 after businessman Steve Wynn proposed a new casino in the Marina district. The goals were to reduce traffic on Atlantic City streets and improve access to the Marina district and Brigantine. It was supported by Governor Christine Todd Whitman and Mayor Jim Whelan, but faced major opposition during its planning. Residents whose homes were to be destroyed for the tunnel construction fought the project, and competing casino owner Donald Trump filed lawsuits to prevent its construction. Construction took almost three years, and the connector opened in July 2001 at a total cost of \$330 million. Since its opening, the connector has served up to 30,000 vehicles daily, and has affected the city's economy by bringing business to the casinos in the Marina district.

## Route description

The Atlantic City–Brigantine Connector is a freeway located entirely within Atlantic City, New Jersey, and has a route length of 2.37 miles (3.81 km). It is a toll-free extension of the tolled Atlantic City Expressway (A.C. Expressway) and serves as a connector between the expressway and Route 87 near Brigantine. The connector averages two lanes per direction and has a posted speed limit of 35 mph (56 km/h). The northernmost 0.89 miles (1.43 km) serves northbound traffic only, whereas southbound traffic travels along the parallel Route 87. Exits along the route are designated by letter from A to I. It is owned and operated by the SJTA and is classified by the New Jersey Department of Transportation (NJDOT) as a state highway, unsigned Route 446X, which is part of the National Highway System. The route begins near the eastern terminus of the A.C. Expressway with a southbound-only exit to the Midtown and Downbeach districts. It then turns north along the western shore of Atlantic City and comes to a railroad grade crossing with NJ Transit's Atlantic City Line adjacent to the Atlantic City Rail Terminal, followed by an interchange at Bacharach Boulevard. At 0.87 miles (1.40 km) along the route, the freeway enters a 1,957-foot (596 m) tunnel under Horace Bryant Park in the Westside neighborhood. North of the tunnel is a southbound on-ramp from Route 87, followed by an interchange with U.S. Route 30 (US 30) via Route 187. After the US 30 interchange, the freeway continues for northbound traffic only, with an exit that serves as a U-turn to the southbound connector, an exit to Renaissance Pointe, Borgata, and MGM Tower, and an exit to the Farley Marina and Golden Nugget Atlantic City. The final exit ramp leads to Harrah's Atlantic City, after which the northbound connector terminates as it merges onto Route 87 northbound, which continues into Brigantine via the Brigantine Bridge.

## History

### Initial proposals

The 44-mile (71 km) A.C.
Expressway was built from 1962 to 1965, connecting the Philadelphia metropolitan area with the coastal resort city of Atlantic City. During construction in 1964, the Atlantic City Planning Board proposed the Route 30 Connector, a connector road linking the end of the expressway in Midtown Atlantic City with US 30. The purpose of the connector was to reduce traffic congestion and improve access to the Marina district and the neighboring city of Brigantine. Because of a lack of funds and environmental concerns about construction near the adjacent wetlands, the connector project remained dormant until 1990 when plans for the road were included in a report by the city's Transportation Executive Council. A 1991 study found the project was environmentally feasible, and a route was proposed with a one-mile (1.6 km) elevated highway over the wetlands. Construction costs were estimated at \$80 million, but due to a continuing lack of funds and the complexity of constructing above the wetlands, the project was again postponed.

### Planning

Plans for the connector reemerged in 1995 following a proposal from real estate businessman and Mirage Resorts president Steve Wynn. The city of Atlantic City issued requests for proposals to developers interested in developing the H-Tract, a former landfill site in the Marina district. Wynn obtained the property from the city following his proposal to construct Le Jardin, a \$1 billion casino resort. He said he would only build if better road access was provided directly to the site, which prompted state officials to revive the connector plans. Governor Christine Todd Whitman created a transportation task force in September 1995 to consider options. It studied 11 alternative routes, including elevated highways, tunnels, and improvements to existing streets. In March 1996, the task force determined that the best alternative was the Westside Bypass route, which included a highway along the western shore of the city with a tunnel under the Westside neighborhood. Whitman formally adopted the task force's recommendation in July 1996, which ensured that the alternative would be built. The goals of the project were to improve access to the Atlantic City Convention Center, the Marina district, and Brigantine, and to improve traffic flow along the city's streets. It was expected to accommodate 14,000 to 17,000 vehicles per day. The tunnel was designed to have as little impact on the surrounding environment as possible; its design placed the two portals at opposite ends of the community, with landscaping added between the construction site and adjacent homes. Nine existing homes along Horace J. Bryant Jr. Drive would be demolished for the construction of the tunnel. Funding for the project, formally known as the Atlantic City–Brigantine Connector, was approved in January 1997. The total cost of the project was \$330 million. To fund the project, Mirage Resorts paid \$110 million, with the remainder coming from state funds from the SJTA (\$60 million), the Transportation Trust Fund (\$95 million), and the Casino Reinvestment Development Authority (\$65 million).

### Controversies

The project was controversial, as tunnel construction would displace homes in the Westside neighborhood, and residents vowed to fight it. Its opponents described the project as an effort to destroy a community, while supporters claimed it was necessary to reduce traffic and create new jobs at the planned casino.
Atlantic City Mayor Jim Whelan, a supporter, felt the project would benefit the city. Mirage offered each affected property owner on Horace J. Bryant Jr. Drive \$200,000 for their homes, an offer five of the nine accepted. A group of 92 Westside homeowners filed a lawsuit against the company and the city claiming the tunnel construction would require the demolition of "their stable, black neighborhood" and create health concerns, thus violating their rights. Donald Trump, the chairman of Trump Hotels & Casino Resorts at the time, was also opposed to the connector, and paid the Westside residents' legal bills. Knowing that Wynn's casino would not be built without the connector, Trump also filed lawsuits against the use of state funds for the project. According to Whelan, Trump "didn't want the competition" with his three existing Atlantic City casinos, including Trump Marina, next to the site of Wynn's future casino at the H-Tract. Trump criticized the connector as a state-funded "private driveway" to Wynn's casino, and denounced the funding as "corporate welfare" that unfairly favored an out-of-state company (Mirage) over those that had previously made business investments in the city. He claimed that the tunnel would have "immense environmental impacts", and urged the state's Department of Environmental Protection to deny construction permits. Mirage and Wynn retaliated by filing an antitrust lawsuit against Trump Hotels alleging that the company's only goal was to prevent the Mirage resort from being built. The feud between Trump and Wynn over the connector was later the subject of the 2012 book The War at the Shore: Donald Trump, Steve Wynn, and the Epic Battle to Save Atlantic City, by former Mirage director Richard "Skip" Bronson. According to the Las Vegas Sun, "more than a dozen" lawsuits were filed over the connector project. The lawsuit by the Westside homeowners was eventually dismissed by a federal judge in February 1998. Trump's legal battles against the project lasted four years; he dropped them in February 2001 in exchange for a settlement that would include a new ramp to provide access from the future H-Tract casino to Trump Marina. Trump agreed to pay half the ramp's \$12 million cost. A group of New Jersey mayors who also opposed the project filed suit to block "an inappropriate use of state funds". Their lawsuit was also dismissed; the court found the construction of the connector necessary whether the casino was built or not. Aside from the tunnel, the project was criticized for including a railroad grade crossing on a freeway. The design was opposed by the Federal Railroad Administration and rail advocacy groups for safety concerns; however, the SJTA said the design was a compromise to allow for a full interchange at Bacharach Boulevard and provide access to the convention center.

### Construction

Construction bids for the design–build contract of the Atlantic City–Brigantine Connector were submitted to the SJTA in July 1997. The contract was awarded to the joint venture of Yonkers Contracting Company and Granite Construction, who served as the general contractors. At the time of its inception, the connector was the largest design–build project undertaken by the State of New Jersey. Permits were granted in October 1998, and the groundbreaking ceremony took place on November 4. Completion was originally scheduled for May 2001. Excavation of the tunnel began in May 1999; the cut-and-cover method was used.
The nine homes were demolished, and a 2,900-foot (880 m) trench was dug to a depth of 35 feet (11 m). A total of 160,000 cubic yards (120,000 m<sup>3</sup>) of dirt were removed, most of which was reused to construct ramps at other sites on the connector. For the tunnel walls, 100,000 cubic yards (76,000 m<sup>3</sup>) of reinforced concrete were poured, and a five-foot-thick (1.5 m) concrete roof was constructed on top of the tunnel where the homes once stood; the site was later turned into a neighborhood park. Since the tunnel runs adjacent to the Penrose Canal, groundwater was present five feet (1.5 m) below the bottom of the trench, requiring a dewatering process to complete the construction. Technology was installed to monitor traffic flow and control the tunnel ventilation, which automatically triggers jet fans if carbon monoxide levels become too high. The tunnel is 14.5 feet (4.4 m) high, but is restricted to vehicles with a maximum clearance of 14 feet (4.3 m). In addition to the tunnel, the project included the construction of 16 overpasses, 15 ramps, and 23 retaining walls, plus landscaping, drainage, and the installation of variable-message signs. Workers also relocated public utility infrastructure, shifted 2,000 feet (610 m) of railroad tracks, rebuilt 3,680 feet (1,120 m) of bulkhead, and demolished a pumping station, a warehouse, and portions of a power station. A promenade at Trump Marina was leveled to make way for new ramps, and 37 ornamental lampposts were dismantled and later shipped to the nearby Tuckerton Seaport, which opened in 2000. To avoid disruptions in the neighborhood, construction materials were delivered by barge, and construction vehicles did not travel along any local streets. During construction, Wynn sold Mirage Resorts to MGM Grand Inc. in 2000, forming the MGM Mirage company. Wynn's plans for his Atlantic City casino resort were cancelled. MGM Mirage took over the H-Tract site and renamed it Renaissance Pointe, and developed plans for Borgata Hotel Casino & Spa, which opened in 2003 after three years of construction.

### Opening

A shortage of materials and delivery issues in late 2000 delayed the connector's opening from May to July 2001. The grand opening ceremony took place on July 27, with festivities including a pedestrian tunnel walk. The connector was expected to open to traffic that evening, but due to last-minute malfunctions with the tunnel's emergency communication system, it did not open to vehicles until July 31. Upon opening, the freeway was formally named the Atlantic City Expressway Connector, although it is called "the Tunnel" by locals. Exit ramps to Borgata and Trump Marina were completed and opened in 2003. Once the connector opened, travel times between the Midtown and the Marina districts fell from fifteen minutes to four. Initial traffic volume was lower than expected; the connector served only 11,000 to 12,000 vehicles per day during its first several months, which was attributed to a decline in travel following the September 11 terrorist attacks. However, traffic increased the following year, and the connector served up to 20,000 vehicles daily by July 2002, significantly higher than the original projections. Due to the opening of Borgata in 2003, annual traffic volume increased by 25 percent that year, with the connector serving 30,000 vehicles daily. Whelan said "the impact of the [connector] project is undeniable" in improving traffic flow in the city and access to Brigantine.
Traffic data from 2013 shows that the connector was used by 24,590 vehicles daily, including 1,229 trucks. The connector also affected the city's economy and casino industry. Whelan credited the project for bringing Borgata, which has since become the city's top-grossing casino. Joe Kelly, executive director of the Greater Atlantic City Chamber of Commerce, said "the Connector has been vitally important to furthering Atlantic City's economic development objectives" by improving access to the Marina district and making it more "economically viable". State records from 2016 showed that the three casinos in the Marina district had an average annual gross revenue of \$134 million, compared to \$70 million for the casinos along the Atlantic City Boardwalk. Transportation analyst and former SJTA executive Anthony Marino cited the connector's ease of access to the Marina district casinos as a factor in their success and a challenge for boardwalk casinos; Whelan said it forced boardwalk casinos to reevaluate their business models. The tunnel was used as a filming location in 2018 for the TV series Succession; the series portrayed the location as the Brooklyn–Battery Tunnel in New York City.

## Exit list

## See also
180,824
Ulf Merbold
1,149,603,415
German astronaut and physicist
[ "1941 births", "European amateur radio operators", "German astronauts", "Living people", "Members of the Order of Merit of North Rhine-Westphalia", "Mir crew members", "Officers Crosses of the Order of Merit of the Federal Republic of Germany", "Recipients of the Order of Merit of Baden-Württemberg", "Space Shuttle program astronauts" ]
Ulf Dietrich Merbold (born June 20, 1941) is a German physicist and astronaut who flew to space three times, becoming the first West German citizen in space and the first non-American to fly on a NASA spacecraft. Merbold flew on two Space Shuttle missions and on a Russian mission to the space station Mir, spending a total of 49 days in space. Merbold's father was imprisoned in NKVD special camp Nr. 2 by the Red Army in 1945 and died there in 1948; Merbold was brought up in the town of Greiz in East Germany by his mother and grandparents. As he was not allowed to attend university in East Germany, he left for West Berlin in 1960, planning to study physics there. After the Berlin Wall was built in 1961, he moved to Stuttgart, West Germany. In 1968, he graduated from the University of Stuttgart with a diploma in physics, and in 1976 he gained a doctorate with a dissertation about the effect of radiation on iron. He then joined the staff at the Max Planck Institute for Metals Research. In 1977, Merbold successfully applied to the European Space Agency (ESA) to become one of their first astronauts. He started astronaut training with NASA in 1978. In 1983, Merbold flew to space for the first time as a payload specialist or science astronaut on the first Spacelab mission, STS-9, aboard the Space Shuttle Columbia. He performed experiments in materials science and on the effects of microgravity on humans. In 1989, Merbold was selected as payload specialist for the International Microgravity Laboratory-1 (IML-1) Spacelab mission STS-42, which launched in January 1992 on the Space Shuttle Discovery. Again, he mainly performed experiments in life sciences and materials science in microgravity. After ESA decided to cooperate with Russia, Merbold was chosen as one of the astronauts for the joint ESA–Russian Euromir missions and received training at the Russian Yuri Gagarin Cosmonaut Training Center. He flew to space for the third and last time in October 1994, spending a month working on experiments on the Mir space station. Between his space flights, Merbold provided ground-based support for other ESA missions. For the German Spacelab mission Spacelab D-1, he served as backup astronaut and as crew interface coordinator. For the second German Spacelab mission D-2 in 1993, Merbold served as science coordinator. Merbold's responsibilities for ESA included work at the European Space Research and Technology Centre on the Columbus program and service as head of the German Aerospace Center's astronaut office. He continued working for ESA until his retirement in 2004.

## Early life and education

Ulf Merbold was born in Greiz, in the Vogtland area of Thuringia, Germany, on June 20, 1941. He was the only child of two teachers who lived in the school building of Wellsdorf [de], a small village. During World War II, Ulf's father Herbert Merbold was a soldier; he was held in an American prisoner of war camp and released in 1945. Soon after, he was imprisoned by the Red Army in NKVD special camp Nr. 2, where he died on February 23, 1948. Merbold's mother Hildegard was dismissed from her school by the Soviet zone authorities in 1945. She and her son moved to a house in Kurtschau [de], a suburb of Greiz, where Merbold grew up close to his maternal grandparents and his paternal grandfather. After graduating in 1960 from Theodor-Neubauer-Oberschule high school—now Ulf-Merbold-Gymnasium Greiz [de]—in Greiz, Merbold wanted to study physics at the University of Jena.
Because he had not joined the Free German Youth, the youth organization of the Socialist Unity Party of Germany, he was not allowed to study in East Germany, so he decided to go to Berlin, and crossed into West Berlin by bicycle. He obtained a West German high school diploma (Abitur) in 1961, as West German universities did not accept the East German one, and intended to start studying in Berlin so he could occasionally see his mother. When the Berlin Wall was built on August 13, 1961, it became impossible for Ulf's mother to visit him. Merbold then moved to Stuttgart, where he had an aunt, and started studying physics at the University of Stuttgart, graduating with a Diplom in 1968. He lived in a dormitory in a wing of Solitude Palace. Thanks to an amnesty for people who had left East Germany, Merbold could again see his mother from late December 1964. In 1976, Merbold obtained a doctorate in natural sciences, also from the University of Stuttgart, with a dissertation titled Untersuchung der Strahlenschädigung von stickstoffdotierten Eisen nach Neutronenbestrahlung bei 140 Grad Celsius mit Hilfe von Restwiderstandsmessungen on the effects of neutron radiation on nitrogen-doped iron. After completing his doctorate, Merbold became a staff member at the Max Planck Institute for Metals Research in Stuttgart, where he had held a scholarship from 1968. At the institute, he worked on solid-state and low-temperature physics, with a special focus on experiments regarding lattice defects in body-centered cubic (bcc) materials.

## Astronaut training

In 1973, NASA and the European Space Research Organisation, a precursor organization of the European Space Agency (ESA), agreed to build a scientific laboratory that would be carried on the Space Shuttle. The memorandum of understanding contained the suggestion that the first flight of Spacelab should have a European crew member on board. The West German contribution to Spacelab was 53.3% of the cost; 52.6% of the work contracts were carried out by West German companies, including the main contractor ERNO. In March 1977, ESA issued an Announcement of Opportunity for future astronauts, and several thousand people applied. Fifty-three of these underwent an interview and assessment process that started in September 1977 and considered their skills in science and engineering as well as their physical health. Four of the applicants were chosen as ESA astronauts; these were Merbold, Italian Franco Malerba, Swiss Claude Nicollier and Dutch Wubbo Ockels. The French candidate Jean-Loup Chrétien was not selected, angering the President of France. Chrétien participated in the Soviet-French Soyuz T-6 mission in June 1982, becoming the first West European in space. In 1978, Merbold, Nicollier and Ockels went to Houston for NASA training at Johnson Space Center while Malerba stayed in Europe. NASA first discussed the concept of having payload specialists aboard spaceflights in 1972, and payload specialists were first used on Spacelab's initial flight. Payload specialists did not have to meet the strict NASA requirements for mission specialists. The first Spacelab mission had been planned for 1980 or 1981 but was postponed until 1983; Nicollier and Ockels took advantage of this delay to complete mission specialist training. Merbold did not meet NASA's medical requirements due to a ureter stone he had in 1959, and he remained a payload specialist.
Rather than training with NASA, Merbold started flight training for an instrument rating at a flight school at Cologne Bonn Airport and worked with several organizations to prepare experiments for Spacelab. In 1982, the crew for the first Spacelab flight was finalized, with Merbold as primary ESA payload specialist and Ockels as his backup. NASA chose Byron K. Lichtenberg and his backup Michael Lampton. The payload specialists started their training at Marshall Space Flight Center in August 1978, and then traveled to laboratories in several countries, where they learned the background of the planned experiments and how to operate the experimental equipment. The mission specialists were Owen Garriott and Robert A. Parker, and the flight crew John Young and Brewster Shaw. In January 1982, the mission and payload specialists started training at Marshall Space Flight Center on a Spacelab simulator. Some of the training took place at the German Aerospace Center in Cologne and at Kennedy Space Center. While Merbold was made very welcome at Marshall, many of the staff at Johnson Space Center were opposed to payload specialists, and Merbold felt like an intruder there. Although payload specialists were not supposed to train on the Northrop T-38 Talon jet, Young took Merbold on a flight and allowed him to fly the plane.

## STS-9 Space Shuttle mission

Merbold first flew to space on the STS-9 mission, which was also called Spacelab-1, aboard Space Shuttle Columbia. The mission's launch was planned for September 30, 1983, but this was postponed because of issues with a communications satellite. A second launch date was set for October 29, 1983, but was again postponed after problems with the exhaust nozzle on the right solid rocket booster. After repairs, the shuttle returned to the launch pad on November 8, 1983, and was launched from Kennedy Space Center Launch Complex 39A at 11:00 a.m. EST on November 28, 1983. Merbold became the first non-US citizen to fly on a NASA space mission and also the first West German citizen in space. The mission was the first six-person spaceflight. During the mission, the shuttle crew worked in groups of three in 12-hour shifts, with a "red team" consisting of Young, Parker and Merbold, and a "blue team" with the other three astronauts. The "red team" worked from 9:00 p.m. to 9:00 a.m. EST. Young usually worked on the flight deck, and Merbold and Parker in the Spacelab. Merbold and Young became good friends. On the mission's first day, approximately three hours after takeoff and after the orbiter's payload bay doors had been opened, the crew attempted to open the hatch leading to Spacelab. At first, Garriott and Merbold could not open the jammed hatch; the entire crew took turns trying to open it without applying significant force, which might damage the door. They opened the hatch after 15 minutes. The Spacelab mission included about 70 experiments, many of which involved fluids and materials in a microgravity environment. The astronauts were subjects of a study on the effects of the environment in orbit on humans; these included experiments aiming to understand space adaptation syndrome, of which three of the four scientific crew members displayed some symptoms. Following NASA policy, it was not made public which astronaut had developed space sickness. Merbold later commented he had vomited twice but felt much better afterwards. Merbold repaired a faulty mirror heating facility, allowing some materials science experiments to continue.
The mission's success in gathering results, and the crew's low consumption of energy and cryogenic fuel, led to a one-day mission extension from nine days to ten. On one of the last days in orbit, Young, Lichtenberg and Merbold took part in an international, televised press conference that included US president Ronald Reagan in Washington, DC, and the Chancellor of Germany Helmut Kohl, who was at a European economic summit meeting in Athens, Greece. During the telecast, which Reagan described as "one heck of a conference call", Merbold gave a tour of Spacelab and showed Europe from space while mentioning die Schönheit der Erde (the beauty of the earth). Merbold spoke to Kohl in German, and showed the shuttle's experiments to Kohl and Reagan, pointing out the possible importance of the materials-science experiments from Germany. When the crew prepared for the return to earth, around five hours before the planned landing, two of the five onboard computers and one of three inertial measurement units malfunctioned, and the return was delayed by several orbits. Columbia landed at Edwards Air Force Base (AFB) at 6:47 p.m. EST on December 8, 1983. Just before the landing, a leak of hydrazine fuel caused a fire in the aft section. After the return to earth, Merbold compared the experience of standing up and walking again to walking on a ship rolling in a storm. The four scientific crew members spent the week after landing doing extensive physiological experiments, many of them comparing their post-flight responses to those in microgravity. After landing, Merbold was enthusiastic about the mission and the post-flight experiments.

## Ground-based astronaut work

In 1984, Ulf Merbold became the backup payload specialist for the Spacelab D-1 mission, which West Germany funded. The mission, which was numbered STS-61-A, was carried out on the Space Shuttle Challenger from October 30 to November 6, 1985. In ESA parlance, Merbold and the three other payload specialists—Germans Reinhard Furrer and Ernst Messerschmid and the Dutchman Wubbo Ockels—were called "science astronauts" to distinguish them from "passengers" like Saudi prince Sultan bin Salman Al Saud and Utah senator Jake Garn, both of whom had also flown as payload specialists on the Space Shuttle. During the Spacelab mission, Merbold acted as crew interface coordinator, working from the German Space Operations Center in Oberpfaffenhofen to support the astronauts on board while working with the scientists on the ground. From 1986, Merbold worked for ESA at the European Space Research and Technology Centre in Noordwijk, Netherlands, contributing to plans for what would become the Columbus module of the International Space Station (ISS). In 1987, he became head of the German Aerospace Center's astronaut office, and in April–May 1993 he served as science coordinator for the second German Spacelab mission D-2 on STS-55.

## STS-42 Space Shuttle mission

In June 1989, Ulf Merbold was chosen to train as payload specialist for the International Microgravity Laboratory (IML-1) Spacelab mission. STS-42 was intended to launch in December 1990 on Columbia but was delayed several times. After first being reassigned to launch with Atlantis in December 1991, it finally launched on the Space Shuttle Discovery on January 22, 1992, with a final one-hour delay to 9:52 a.m. EST caused by bad weather and issues with a hydrogen pump.
The change from Columbia to Discovery meant the mission had to be shortened, as Columbia had been capable of carrying extra hydrogen and oxygen tanks that could power the fuel cells. Merbold was the first astronaut to represent reunified Germany. The other payload specialist on board was astronaut Roberta Bondar, the first Canadian woman in space. Sonny Carter was originally assigned as one of the three mission specialists, but he died in a plane crash on April 5, 1991, and was replaced by David C. Hilmers. The mission focused on experiments in life sciences and materials science in microgravity. IML-1 included ESA's Biorack module, a biological research facility in which cells and small organisms could be exposed to weightlessness and cosmic radiation. It was used for microgravity experiments on various biological samples including frog eggs, fruit flies, and Physarum polycephalum slime molds. Bacteria, fungi and shrimp eggs were exposed to cosmic rays. Other experiments focused on the human response to weightlessness or crystal growth. There were also ten Getaway Special canisters with experiments on board. Like STS-9, the mission operated in two teams who worked 12-hour shifts: a "blue team" consisting of mission commander Ronald J. Grabe together with Stephen S. Oswald, payload commander Norman Thagard, and Bondar; and a "red team" of William F. Readdy, Hilmers, and Merbold. Because the crew did not use as many consumables as planned, the mission was extended from seven days to eight, landing at Edwards AFB on January 30, 1992, at 8:07 a.m. PST.

## Euromir 94 mission

In November 1992, ESA decided to start cooperating with Russia on human spaceflight. The aim of this collaboration was to gain experience in long-duration spaceflights, which were not possible with NASA at the time, and to prepare for the construction of the Columbus module of the ISS. On May 7, 1993, Merbold and the Spanish astronaut Pedro Duque were chosen as candidates to serve as the ESA astronaut on the first Euromir mission, Euromir 94. Along with other potential Euromir 95 astronauts, German Thomas Reiter and Swedish Christer Fuglesang, in August 1993 Merbold and Duque began training at Yuri Gagarin Cosmonaut Training Center in Star City, Russia, after completing preliminary training at the European Astronaut Centre, Cologne. On May 30, 1994, it was announced that Merbold would be the primary astronaut and Duque would serve as his backup. Equipment with a mass of 140 kg (310 lb) for the mission was sent to Mir on the Progress M-24 transporter, which failed to dock and collided with Mir on August 30, 1994, successfully docking only under manual control from Mir on September 2. Merbold launched with commander Aleksandr Viktorenko and flight engineer Yelena Kondakova on Soyuz TM-20 on October 4, 1994, at 1:42 a.m. Moscow time. Merbold became the second person to launch on both American and Russian spacecraft after cosmonaut Sergei Krikalev, who had flown on Space Shuttle mission STS-60 in February 1994 after several Soviet and Russian spaceflights. During docking, the computer onboard Soyuz TM-20 malfunctioned but Viktorenko managed to dock manually. The cosmonauts then joined the existing Mir crew of Yuri Malenchenko, Talgat Musabayev and Valeri Polyakov, expanding the crew to six people for 30 days. Onboard Mir, Merbold performed 23 life sciences experiments, four materials science experiments, and other experiments.
For one experiment designed to study the vestibular system, Merbold wore a helmet that recorded his motion and his eye movements. On October 11, a power loss disrupted some of these experiments but power was restored after the station was reoriented to point the solar array toward the sun. The ground team rescheduled Merbold's experiments, but a malfunction of a Czech-built materials processing furnace caused five of them to be postponed until after Merbold's return to Earth. None of the experiments were damaged by the power outage. Merbold's return flight with Malenchenko and Musabayev on Soyuz TM-19 was delayed by one day to experiment with the automated docking system that had failed on the Progress transporter. The test was successful and on November 4, Soyuz TM-19 de-orbited, carrying the three cosmonauts and 16 kg (35 lb) of Merbold's samples from the biological experiments, with the remainder to return later on the Space Shuttle. The STS-71 mission was also supposed to return a bag containing science videotapes created by Merbold, but this bag was lost. The landing of Soyuz TM-19 was rough; the cabin was blown off-course by nine kilometres (5.6 mi) and bounced after hitting the ground. None of the crew were hurt during landing. During his three spaceflights—the most of any German national—Merbold has spent 49 days in space.

## Later career

In January 1995, shortly after the Euromir mission, Merbold became head of the astronaut department of the European Astronaut Centre in Cologne. From 1999 to 2004, Merbold worked in the Microgravity Promotion Division of the ESA Directorate of Manned Spaceflight and Microgravity in Noordwijk, where his task was to spread awareness of the opportunities provided by the ISS among European research and industry organizations. He retired on July 30, 2004, but has continued to do consulting work for ESA and give lectures.

## Personal life

Since 1969, Ulf Merbold has been married to Birgit, née Riester, and the couple have two children, a daughter born in 1975 and a son born in 1979. They live in Stuttgart. In 1984, Merbold met the East German cosmonaut Sigmund Jähn, who had become the first German in space after launching on August 26, 1978, on Soyuz 31. They both were born in the Vogtland (Jähn was born in Morgenröthe-Rautenkranz) and grew up in East Germany. Jähn and Merbold became founding members of the Association of Space Explorers in 1985. Jähn helped Merbold's mother, who had moved to Stuttgart, to obtain a permit for a vacation in East Germany. After German reunification, Merbold helped Jähn become a freelance consultant for the German Aerospace Center. At the time of the Fall of the Berlin Wall, they were at an astronaut conference in Saudi Arabia together. In his spare time Merbold enjoys playing the piano and skiing. He also flies planes, including gliders. Holding a commercial pilot license, he has over 3,000 hours of flight experience as a pilot. On his 79th birthday, he inaugurated the new runway at the Flugplatz Greiz-Obergrochlitz [de] airfield, landing with his wife in a Piper Seneca II.

## Awards and honors

In 1983, Merbold received the American Astronautical Society's Flight Achievement Award, together with the rest of the STS-9 crew. He was also awarded the Order of Merit of Baden-Württemberg in December 1983. In 1984, he was awarded the Haley Astronautics Award by the American Institute of Aeronautics and Astronautics and the Order of Merit of the Federal Republic of Germany (first class).
In 1988, he was awarded the Order of Merit of North Rhine-Westphalia. Merbold received the Russian Order of Friendship in November 1994, the Kazakh Order of Parasat in January 1995 and the Russian Medal "For Merit in Space Exploration" in April 2011. In 1995, he received an honorary doctorate in engineering from RWTH Aachen University. In 2008, the asteroid 10972 Merbold was named after him.
8,053,237
Norton Priory
1,144,075,484
Historic site in Norton, Runcorn, Cheshire, England
[ "1115 establishments in England", "1536 disestablishments in England", "Archaeological museums in England", "Archaeological sites in Cheshire", "Augustinian monasteries in England", "British country houses destroyed in the 20th century", "Buildings and structures in Runcorn", "Christian monasteries established in the 12th century", "English Civil War", "Grade I listed buildings in Cheshire", "Grade I listed monasteries", "Monasteries in Cheshire", "Museums in Cheshire", "Norman architecture in England", "Religious organizations established in the 1110s", "Ruins in Cheshire", "Scheduled monuments in Cheshire", "Tourist attractions in Cheshire" ]
Norton Priory is a historic site in Norton, Runcorn, Cheshire, England, comprising the remains of an abbey complex dating from the 12th to 16th centuries, and an 18th-century country house; it is now a museum. The remains are a scheduled ancient monument and are recorded in the National Heritage List for England as a designated Grade I listed building. They are considered to be the most important monastic remains in Cheshire. The priory was established as an Augustinian foundation in the 12th century, and was raised to the status of an abbey in 1391. The abbey was closed in 1536, as part of the dissolution of the monasteries. Nine years later the surviving structures, together with the manor of Norton, were purchased by Sir Richard Brooke, who built a Tudor house on the site, incorporating part of the abbey. This was replaced in the 18th century by a Georgian house. The Brooke family left the house in 1921, and it was partially demolished in 1928. In 1966 the site was given in trust for the use of the general public. Excavation of the site began in 1971, and became the largest to be carried out by modern methods on any European monastic site. It revealed the foundations and lower parts of the walls of the monastery buildings and the abbey church. Important finds included: a Norman doorway; a finely carved arcade; a floor of mosaic tiles, the largest floor area of this type to be found in any modern excavation; the remains of the kiln where the tiles were fired; a bell-casting pit; and a large medieval statue of Saint Christopher. The priory was opened to the public as a visitor attraction in the 1970s. The 42-acre site, run by an independent charitable trust, includes a museum, the excavated ruins, and the surrounding garden and woodland. In 1984 the separate walled garden was redesigned and opened to the public. Norton Priory offers a programme of events, exhibitions, educational courses, and outreach projects. In August 2016, a larger and much extended museum opened.

## History

### Priory

In 1115 a community of Augustinian canons was founded in the burh of Runcorn by William fitz Nigel, the second Baron of Halton and Constable of Chester, on the south bank of the River Mersey where it narrows to form the Runcorn Gap. This was the only practical site where the Mersey could be crossed between Warrington and Birkenhead, and the archaeologists Fraser Brown and Christine Howard-Davis consider it likely that the canons cared for travellers and pilgrims crossing the river. They also speculate that William may have sought to profit from the tolls paid by these travellers. The priory was the second religious house to be founded in the Earldom of Chester; the first was the Benedictine St Werburgh's Abbey at Chester, founded in 1093 by Hugh Lupus, the first Earl of Chester. The priory at Runcorn had a double dedication, to Saint Bertelin and to Saint Mary. The authors of the Victoria County History suggest that the dedication to St Bertelin was taken from a Saxon church already existing on the site. In 1134 William fitz William, the third Baron of Halton, moved the priory to a site in Norton, a village 3 miles (5 km) to the east of Runcorn. The reasons for the move are uncertain. It may have been that fitz William wanted greater control of the strategic crossing of the Mersey at Runcorn Gap, or it may have been because the canons wanted a more secluded site. Nothing remains of the site of the original priory in Runcorn.
The site for the new priory was in damp, scrubby woodland. There is no evidence that it was agricultural land, or that it contained any earlier buildings. The first priority was to clear and drain the land. There were freshwater springs near the site, and these would have provided fresh running water for latrines and domestic purposes. They would also have been used to create watercourses and moated enclosures, some of which might have been used for orchards and herb gardens. Sandstone for building the priory was available at an outcrop nearby, sand for mortar could be obtained from the shores of the River Mersey, and boulder clay on the site provided material for floor and roof tiles. Excavation has revealed remnants of oak, some of it from trees hundreds of years old. It is likely that this came from various sources; some from nearby, and some donated from the forests at Delamere and Macclesfield. The church and monastic buildings were constructed in Romanesque style. The priory was endowed by William fitz Nigel with properties in Cheshire, Lancashire, Nottinghamshire, Lincolnshire and Oxfordshire, including the churches of St Mary, Great Budworth and St Michael, Chester. By 1195 the priory owned eight churches, five houses, the tithe of at least eight mills, the rights of common in four townships, and one-tenth of the profits from the Runcorn ferry. The prior supplied the chaplain to the hereditary Constables of Chester and to the Barons of Halton. During the 12th century, the main benefactors of the priory were the Barons of Halton, but after 1200 their gifts reduced, mainly because they transferred their interests to the Cistercian abbey at Stanlow, which had been founded in 1178 by John fitz Richard, the sixth baron. Archaeologist J. Patrick Greene states that it is unlikely that any of the Barons of Halton were buried in Norton Priory. The only members of the family known to be buried there are Richard, brother of Roger de Lacy, the seventh baron, and a female named Alice. The identity of Alice has not been confirmed, but Greene considers that she was the niece of William, Earl Warenne, the 5th Earl of Surrey and therefore a relative of the Delacy family, who were at that time the Barons of Halton. The earl made a grant to the priory of 30 shillings a year in order to "maintain a pittance for her soul". As the role played by the Barons of Halton declined, so the importance of members of the Dutton family increased. The Duttons had been benefactors since the priory's foundation, and from the 13th century they became the principal benefactors. There were two main branches of the family, one in Dutton and the other in Sutton Weaver. The Dutton family had their own burial chapel in the priory, and burial in the chapel is specified in three wills made by members of the family. The Aston family of Aston were also important benefactors. The priory buildings, including the church, were extended during the late 12th and early 13th centuries. It has been estimated that the original community would have consisted of 12 canons and the prior; this increased to around 26 members in the later part of the 12th century, making it one of the largest houses in the Augustinian order. By the end of the century the church had been lengthened, a new and larger chapter house had been built (I\* on the plan), and a large chapel had been added to the east end of the church (N). In about 1200 the west front of the church was enlarged (M), a bell tower was built and guest quarters were constructed. 
It is possible that the chapel at the east end was built to accommodate the holy cross of Norton, a relic which was reputed to have miraculous healing powers. A fire in 1236 destroyed the timber-built kitchen (Q) and damaged the west range of the monastic buildings and the roof of the church. The kitchen was rebuilt and the other damage was rapidly repaired.

### Abbey

During the first half of the 14th century, the priory suffered from financial mismanagement and disputes with the Dutton family, exacerbated by a severe flood in 1331 that reduced the income from the priory's lands. The direct effects of the Black Death are not known, but during the 1350s financial problems continued. These were partly mitigated by the sale of the advowson of the church at Ratcliffe-on-Soar. Matters further improved from 1366 with the appointment of Richard Wyche as prior. He was active in the governance of the wider Augustinian order and in political affairs, and in 1391 was involved in raising the priory's status to that of a mitred abbey. A mitred abbey was one in which the abbot was given permission to use pontifical insignia, including the mitre, ring and pontifical staff, and to give the solemn benediction provided a bishop was not present. It was rare for an Augustinian house to be elevated to this status. Out of about 200 Augustinian houses in England and Wales, 28 were abbeys and only seven of these became mitred. The only other mitred abbey in Cheshire was that of St Werburgh in Chester. In 1379 and in 1381 there were 15 canons at Norton and in 1401 there were 16, making it the largest Augustinian community in the northwest of England. By this time the barony of Halton had passed by a series of marriages to the duchy of Lancaster. John of Gaunt, the 1st Duke of Lancaster and 14th Baron of Halton, agreed to be the patron of the newly formed abbey. At this date the church was 287 feet (87 m) long; it was the second longest Augustinian church in northwest England, exceeded only by the 328 feet (100 m) long church at Carlisle. Towards the end of the 14th century, the abbey acquired a "giant" statue of Saint Christopher. Three wills from members of the Dutton family from this period survive; they are dated 1392, 1442 and 1527, and in each will money was bequeathed to the foundation. The abbey's fortunes went into decline after the death of Richard Wyche in 1400. Wyche was succeeded by his prior, John Shrewsbury, who "does not seem to have done more than keep the house in order". Frequent floods had reduced its income, and in 1429 the church and other abbey buildings were described as being "ruinous". Problems continued through the rest of the 15th century, resulting in the sale of more advowsons. By 1496 the number of canons had been reduced to nine, and to seven in 1524. In 1522 there were reports of disputes between the abbot and the prior. The abbot was accused of "wasting the house's resources, nepotism, relations with women" and other matters, while the prior admitted to "fornication and lapses in the observation of the Rule". The prior threatened the abbot with a knife, but then left the abbey. The physical state of the buildings continued to deteriorate. The records of the priory and abbey have not survived, but the excavations and the study of other documents have produced evidence of how the monastic lands were managed. The principal source of income came from farming.
This income was required not only for the building and upkeep of the property, but also for feeding the canons, their guests, and visiting pilgrims. The priory also had an obligation from its foundation to house travellers fording the Mersey. It has been estimated that nearly half of the demesne lands were used for arable farming. The grain grown on priory lands was ground by a local windmill and by a watermill outside the priory lands. Excavations revealed part of a stone handmill in the area used in the monastic kitchen. In addition to orchards and herb gardens in the moated enclosures, it is likely that beehives were maintained for the production of honey. There is evidence from bone fragments that cattle, sheep, pigs, geese and chickens were reared and consumed, but few bone fragments from deer, rabbits or hares have been discovered. Horseflesh was not eaten. Although few fish bones have been discovered, it is known from documentary evidence that the canons owned a number of local fisheries. The fuel used consisted of wood and charcoal, and turf from marshes over which the priory had rights of turbary (to cut turf). The events in 1536 surrounding the fate of the abbey at the dissolution of the monasteries are complicated, and included a dispute between Sir Piers Dutton, who was in a powerful position as the Sheriff of Cheshire, and Sir William Brereton, the deputy-chamberlain of Chester. Dutton's estate was next to that of the abbey, and Dutton plotted to gain some of its land from the Crown after the dissolution, while Brereton supported the abbot against Dutton and held the lucrative position of steward of the abbey. A campaign of vilification was directed at the canons, asserting that they were guilty of "debauched conduct". Then, in 1535, Dutton falsely accused the abbot and Brereton of issuing counterfeit coins. This charge was dismissed mainly because one of Dutton's witnesses was considered to be "unconvincing". Playing into Dutton's hands was the gross undervaluation of the abbey's assets as reported to the royal commissioners of the Valor Ecclesiasticus of 1535, as a result of which the net annual income of the abbey was recorded, falsely, as falling below the £200 threshold that would subsequently be chosen for the first round of dissolutions in 1536, although whether this subterfuge was due to the machinations of Dutton or the abbot (or both) remains unclear. Brereton and the abbot appear to have attempted to have the dissolution cancelled subject to the payment of a fine, as was the case in large numbers of other houses in similar circumstances; but in the abbot's absence dissolution commissioners arrived unannounced at the abbey in early October 1536. There was considerable opposition, the commissioners being menaced by around 300 local people, for whom the abbot, rushing back, threw an impromptu feast complete with roasted ox. According to Dutton's account, after barricading themselves in a tower the commissioners managed to send a letter to Dutton, who arrived with a force of men in the middle of the night. Most of the rioters fled, but Dutton arrested the abbot and four of the canons, who were sent to Halton Castle and then to prison in Chester. Dutton sent a report of the events to Henry VIII, who demanded that if the abbot and canons had behaved as Dutton reported, they should be immediately executed as traitors.
However, because the king's instructions had been conveyed by the Lord Chancellor in the form of letters to both Dutton and Brereton, the two faction leaders would be required to act together to effect them, with the consequence that Brereton was temporarily able to stall any such action by refusing to meet with Dutton. Events elsewhere in the country further delayed the execution and, following an intercession to Thomas Cromwell (whose own informal contacts had cast doubt on the reliability of Dutton's reports), the abbot and canons were discharged and awarded pensions. The abbey was made uninhabitable, the lead from the roof, the bell metal, and other valuable materials were confiscated for the king, and the building lay empty for nine years. The estate came into the ownership of the Crown, and it was managed by Brereton. From the evidence of damage to the tiled floor of the church, Brown and Howard-Davis conclude it is likely that the church was demolished at an early stage, but otherwise the archaeological evidence for this period is sparse and largely negative. ### Country house In 1545 the abbey and the manor of Norton were sold to Sir Richard Brooke for a little over £1,512. Brooke built a house in Tudor style, which became known as Norton Hall, using as its core the former abbot's lodgings and the west range of the monastic buildings. It is not certain which other monastic buildings remained when the abbey was bought by the Brookes; excavations suggest that the cloisters were still present. A 17th-century sketch plan by one of the Randle Holme family shows that the gatehouse remained at that time, although almost all the church had been demolished. An engraving by the Buck brothers dated 1727 shows that little had changed by the next century. During the Civil War the house was attacked by a force of Royalists. The Brookes were the first family in north Cheshire to declare allegiance to the Parliamentary side. Halton Castle was a short distance away, and was held by Earl Rivers for the Royalists. In February 1643 a large force from the castle armed with cannon attacked the house, which was defended by only 80 men. Henry Brooke successfully defended the house, with only one man wounded, while the Royalists lost 16 men including their cannonier (gunner). They burnt two barns and plundered Brooke's tenants, but "returned home with shame and the hatred of the country". At some time between 1727 and 1757 the Tudor house was demolished and replaced by a new house in Georgian style. The house had an L-plan, the main wing facing west standing on the footprint of the Tudor house, with a south wing at right-angles to it. The ground floor of the west wing retained the former vaulted undercroft of the west range of the medieval abbey, and contained the kitchens and areas for the storage of wines and beers. The first floor was the piano nobile, containing the main reception rooms. The west front was symmetrical, in three storeys, with a double flight of stairs leading up to the main entrance. Clearance of the other surviving remnants of the monastic buildings had started, but the moated enclosures were still in existence at that time. A drawing dated 1770 shows that by then all these buildings and the moats had been cleared away, and the former fishponds were being used for pleasure boating. Between 1757 and the early 1770s modifications were made to the house, the main one being the addition of a north wing.
According to the authors of the Buildings of England series, the architect responsible for this was James Wyatt. Also between 1757 and 1770, the Brooke family built a walled garden at a distance from the house to provide fruit, vegetables and flowers. The family also developed the woodland around the house, creating pathways, a stream-glade and a rock garden. Brick-built wine bins were added to the undercroft, developing it into a wine cellar, and barrel vaulting was added to the former entrance hall to the abbey (which was known as the outer parlour), obscuring its arcade. During the mid-18th century, Sir Richard Brooke was involved in a campaign to prevent the Bridgewater Canal from being built through his estate. The Bridgewater Canal Extension Act had been passed in 1762, and it made allowances for limited disturbance to the Norton estate. However, Sir Richard did not see the necessity for the canal and opposed its passing through his estate. In 1773 the canal was opened from Manchester to Runcorn, except for 1 mile (1.6 km) across the estate, which meant that goods had to be unloaded and carted around it. Eventually Sir Richard capitulated, and the canal was completed throughout its length by March 1776. By 1853 a service range had been added to the south wing of the house. In 1868 the external flight of stairs was removed from the west front and a new porch entrance was added to its ground floor. The entrance featured a Norman doorway that had been moved from elsewhere in the monastery; Greene believes that it probably formed the entrance from the west cloister walk into the nave of the church. An exact replica of this doorway was built and placed to the north of the Norman doorway, making a double entrance. The whole of the undercroft was radically restored, giving it a Gothic theme and adding stained glass windows and a medieval-style fireplace. The ground to the south of the house was levelled and formal gardens were established. During the 19th century the estate was again affected by transport projects. In 1804 the Runcorn to Latchford Canal was opened, replacing the Mersey and Irwell Navigation; this cut off the northern part of the estate, making it only accessible by a bridge. The Grand Junction Railway was built across the estate in 1837, followed by the Warrington and Chester Railway, which opened in 1850; both of these lines affected the southeast part of the estate. In 1894, the Runcorn to Latchford Canal was replaced by the Manchester Ship Canal, and the northern part of the estate could only be accessed by a swing bridge. The Brooke family left the house in 1921, and it was almost completely demolished in 1928. Rubble from the house was used in the foundations of a new chemical works. During the demolition, the undercroft was retained and roofed with a cap of concrete. In 1966 the current Sir Richard Brooke gave Norton Priory in trust for the benefit of the public. ### Excavations and museum In 1971 J. Patrick Greene was given a contract to carry out a six-month excavation for Runcorn Development Corporation as part of a plan to develop a park in the centre of Runcorn New Town. The site consisted of a 500-acre (202 ha) area of fields and woods to the north of the Bridgewater Canal. Greene's initial findings led to his being employed for a further 12 years to supervise a major excavation of the site. The buildings found included the Norman doorway with its Victorian addition and three medieval rooms.
Specialists were employed and local volunteers were recruited to assist with the excavation, while teams of supervised prisoners were used to perform some of the heavier work. The area excavated was greater than at any other European monastic site investigated using modern methods. The Development Corporation decided to create a museum on the site, and in 1975 Norton Priory Museum Trust was established. In 1989 Greene published his book about the excavations entitled Norton Priory: The Archaeology of a Medieval Religious House. Further work has been carried out, recording and analysing the archaeological findings. In 2008 Fraser Brown and Christine Howard-Davis published Norton Priory: Monastery to Museum, in which the findings are described in more detail. Howard-Davis was largely responsible for the post-excavation assessment and for compiling a database for the artefacts and, with Brown, for their analysis. ## Findings from excavations ### Priory 1134–1236 The excavations have revealed information about the original priory buildings and grounds, and how they were subsequently modified. A series of ditches was found that would have provided a supply of fresh water and also a means for drainage of a relatively wet site. Evidence of the earliest temporary timber buildings in which the canons were originally housed was found in the form of 12th-century post pits. Norton Priory is one of the few monastic sites to have produced evidence of temporary quarters. The remains of at least seven temporary buildings have been discovered. It is considered that the largest of these, because it had more substantial foundations than the others, was probably the timber-framed church; another was most likely the gatehouse, and the other buildings provided accommodation for the canons and the senior secular craftsmen. The earliest masonry building was the church, which was constructed on shallow foundations of sandstone rubble and pebbles on boulder clay. The walls were built in local red sandstone with ashlar faces and a rubble and mortar core. The ground plan of the original church was cruciform, and consisted of a nave without aisles, a choir at the crossing with a tower above it, a square-ended chancel, and north and south transepts, each with an eastern chapel. The total length of the church was 148 feet (45.1 m) and the total length across the transepts was 74 feet (22.6 m), giving a ratio of 2:1. The walls of the church were 5 feet (1.5 m) wide at the base, and the crossing tower was supported on four piers. The other early buildings were arranged around a cloister to the south of the church. The east range incorporated the chapter house and also contained the sacristy, the canons' dormitory and the reredorter. The upper storey of the west range provided living accommodation for the prior and an area where secular visitors could be received. In the lower storey was the undercroft where food and fuel were stored. The south range contained the refectory, and at a distance from the south range stood the kitchen. Evidence of a bell foundry dating from this period was found 55 yards (50 m) to the north of the church. It is likely that this was used for casting a tenor bell. A few moulded stones from this early period were found. These included nine blocks that probably formed part of a corbel table. There were also two beak-head voussoirs; this type of voussoir is rare in Cheshire, and has been found in only one other church in the county.
Considerable expansion occurred during the last two decades of the 12th century and the first two or three decades of the 13th century. The south and west ranges were demolished and rebuilt, enlarging the cloister from about 36 feet (11 m) by 32 feet (10 m) to about 56 feet (17 m) by 52 feet (16 m). This meant that a door in the south wall of the church had to be blocked off and a new highly decorated doorway was built at the northeast corner of the cloister; this doorway has survived. The lower storey of the west range, the other standing remains of the priory, also dates from this period; it comprises the cellarer's undercroft and a passage to its north, known as the outer parlour. The outer parlour had been the entrance to the priory from the outside world, and was "sumptuously decorated" so that "the power and wealth of the priory could be displayed in tangible fashion to those coming from the secular world". The undercroft, used for storage, was divided into two chambers, and its decoration was much plainer. The upper floor has been lost; it is considered that this contained the prior's living quarters and, possibly, a chapel over the outer parlour. A new and larger reredorter was built at the end of the east range, and it is believed that work might have started on a new chapter house. A system of stone drains was constructed to replace the previous open ditches. The west wall of the church was demolished and replaced by a more massive structure, 10 feet (3 m) thick at the base. The east wall was also demolished and the chancel was extended, forming an additional area measuring approximately 27 feet (8 m) by 23 feet (7 m). ### Priory and abbey 1236–1536 The excavation revealed evidence of the fire of 1236, including ash, charcoal, burnt planks and a burnt wooden bowl. It is thought that the fire probably started in the timber-built kitchens at the junction of the west and south ranges, and then spread to the monastic buildings and church. Most of the wood in the buildings, including the furnishings and roofs, would have been destroyed, although the masonry walls remained largely intact. The major repairs required gave an opportunity for the extension of the church by the addition of new chapels to both of the transepts, and its refurbishment in a manner even grander than previously. The cloister had been badly damaged in the fire and its arcade was rebuilt on the previous foundations. The new arcade was of "very high quality and finely wrought construction". Brown and Howard-Davis state that the kitchens were rebuilt on the same site and it appears that they were rebuilt in timber yet again. Excavations have found evidence of a second bell foundry in the northwest of the priory grounds. The date of this is uncertain but Greene suggests that it was built to cast a new bell to replace the original one that was damaged in the fire. Later in the 13th century another chapel was added to the north transept. Accommodation for guests was constructed to the southwest of the monastic buildings. In the later part of the 13th century and during the following century the chapel in the south transept was replaced by a grander two-chambered chapel. This balanced the enlarged chapels in the north transept, restoring the church's cruciform plan. Around this time the east end of the church was further extended when a reliquary chapel was added measuring about 42 feet (13 m) by 24 feet (7 m). A guest hall was built to the west of the earlier guest quarters. 
After the status of the foundation was elevated from a priory to an abbey, a tower house was added to the west range. This is shown on the engraving by the Buck brothers, but it has left little in the way of archaeological remains. The church was extended by the addition of a north aisle. There is little evidence of later major alterations before the dissolution. There is evidence to suggest that the cloister was rebuilt, and that alterations were made to the east range. ### Burials The excavations revealed information about the burials carried out within the church and the monastic buildings, and in the surrounding grounds. They are considered to be "either those of Augustinian canons, privileged members of their lay household, or of important members of the Dutton family". Most burials were in stone coffins or in wooden coffins with stone lids, and had been carried out from the late 12th century up to the time of the dissolution. The site of the burial depended on the status of the individual, whether they were clerical or lay, and their degree of importance. Priors, abbots, and high-ranking canons were buried within the church, with those towards the east end of the church being the most important. Other canons were buried in a graveyard outside the church, in an area to the south and east of the chancel. Members of the laity were buried either in the church, towards the west end of the nave or in the north aisle, or outside the church around its west end. It is possible that there was a lay cemetery to the north and west of the church. The addition of the chapels to the north transept, and their expansion, was carried out for the Dutton family, making it their burial chapel, or family mausoleum, and the highest concentration of burials was found in this part of the church. It is considered that the north aisle, built after the priory became an abbey, was added to provide a burial place for members of the laity. The excavations revealed 49 stone coffins, 30 coffin lids, and five headstones. Twelve of the lids were carved in high relief, with designs including flowers or foliage. One lid depicts an oak tree issuing from a human head in the style of a green man, another has a cross, a dragon and a female effigy, while others have shield and sword motifs. Two contain inscriptions in Norman-French, identifying the deceased. The remaining lids have simpler incised patterns, mainly decorated crosses. The headstones contain crosses. Most of the coffins were sunk into the ground, with the lid at the level of the floor, although a few were found within the walls. Only three stone coffins for children were discovered. These lay in a group, together with a coffin containing a male skeleton, in the vestibule leading to the enlarged chapter house. The most prestigious type of coffin was tapered towards the feet, with the head end carved externally to a hemi-hexagonal shape. Another sign of higher status was the provision of an internal "pillow" for the head. A total of 144 graves was excavated; they contained 130 articulated skeletons in a suitable condition for examination. Of these, 36 were well-preserved, 48 were in a fair condition and 46 were poorly preserved. Males out-numbered females by a ratio of three to one, an expected ratio in a monastic site. Most of the males had survived into middle age (36–45 years) to old age (46 years or older), while equal numbers of females died before and after the age of about 45 years. 
One female death was presumably due to a complication of pregnancy as she had been carrying a 34-week foetus. The average height of the adult males was 5 feet 8 inches (1.73 m) and that of the adult females was 5 feet 2 inches (1.57 m). The bones show a variety of diseases and degenerative processes. Six skeletons showed evidence of Paget's disease of bone (osteitis deformans). The most severe case of Paget's disease was in a body buried in the nave in a stone coffin. The lid was carved with two shields, indicating that the occupant had been a knight. One skeleton showed signs of leprosy affecting bones in the face, hands and feet. No definite cases of tuberculosis directly affecting bones were found but in two individuals there were changes in the ribs consistent with their having suffered from tuberculosis of the lungs. The only major congenital abnormality found consisted of bony changes resulting from a possible case of Down's syndrome. Relatively minor congenital abnormalities of the spine were found in 19 skeletons, ten of which were cases of spina bifida occulta. Other spinal abnormalities included fused vertebrae, spondylolysis and transitional vertebrae. Definite evidence of fractured bones was found in ten skeletons, and evidence of possible fractures was found in three other cases. One cranium contained a large circular lesion which may have been the consequence of trepanning. Other diseases specific to bones and joints were osteoarthritis, diffuse idiopathic skeletal hyperostosis (DISH), and possible cases of spondyloarthropathy. Three skeletons showed possible evidence of rickets, two had changes of osteoporosis, and three crania had features of hyperostosis frontalis interna, a metabolic condition affecting post-menopausal women. Osteomata (benign tumours of bone) were found in three cases. Examination of the jaws and teeth gave information about the dental health of those buried in the priory. The degree of wear of teeth was greater than it is at present, while the incidence of dental caries was much lower than it is now, as was the incidence of periodontal disease. A consequence of the wear of the teeth was "compensatory eruption" of the teeth in order to keep contact with the opposing teeth. It was concluded that the people buried in the priory had few problems with their teeth or jaws. Loss of teeth was due to wear of the teeth, rather than from caries or periodontal disease. ### Country house Little archaeological evidence relates to the period immediately after the dissolution, or to the Tudor house built on part of the site. A sawpit was found in the outer courtyard. It is considered that this might date from the early period of the Brookes' house, or it may have been constructed during the later years of the abbey. The kitchens to the south of the Tudor house and their drainage systems appear to have been used by the Brookes, and according to Brown and Howard-Davis, were possibly rebuilt by the family. The areas previously occupied by the cloisters and the guest quarters were probably used as middens. Few archaeological findings remain from the Georgian house, apart from a fragment of a wall from the south front, and the foundations of the north wing. The much-altered medieval undercroft still stands, with its Norman doorway and Victorian replica, barrel vaulting, wine bins, and blind arcading in the former outer parlour. 
### Artefacts from the buildings A large number of tiles and tile fragments that had lined the floor of the church and some of the monastic buildings were found in the excavations. The oldest tiles date from the early 14th century. The total area of tiles discovered was about 80 square metres (860 sq ft), and is "the largest area of a floor of this type to be found in any modern excavation". The site has "the largest, and most varied, excavated collection of medieval tiles in the North West" and "the greatest variety of individual mosaic shapes found anywhere in Britain". The tiles found made a pavement forming the floor of the choir of the church and the transepts. The chancel floor was probably also tiled; these tiles have not survived because the chancel was at a higher level than the rest of the church, and the tiles would have been removed during subsequent gardening. A dump of tiles to the south of the site of the chapter house suggests that this was also tiled. In the 15th century a second tile floor was laid on top of the original floor in the choir where it had become worn. The tiles on the original floor were of various shapes, forming a mosaic. The tiles were all glazed and coloured, the main colours being black, green and yellow. Many of them had been decorated by impressing a wooden stamp into the moist clay before it was fired; these are known as line-impressed tiles. The line-impressed designs included masks of lions or other animals, rosettes, and trefoils. Other tiles or tile fragments showed portions of trees, foliage, birds and inscriptions. In the chapels of the north transept, the burial place of the Dutton family, were tiles depicting mail, thought to be part of a military effigy, and tiles bearing fragments of heraldic designs. The tiles from the upper (later) pavement were all square, and again were line-inscribed with patterns forming parts of larger designs. A related discovery at the excavation was the kiln in which most of the tiles on the site were fired. The excavations also revealed stones or fragments of carved stone dating from the 12th to the 16th centuries. The earliest are in Romanesque style and include two voussoirs decorated with beakheads (grotesque animal heads with long pointed bird-like beaks). Other stones dating from the 12th century are in Gothic style; they include a capital decorated with leaves and a portion of the tracery from a rose window. Many of the stones from the 13th century were originally part of the cloister arcade, and had been re-used to form the core of a later cloister arcade. They include stones sculpted with depictions of humans and animals. The best preserved of these are the heads of two canons, each wearing a cowl with the tonsure visible, the head of a woman with shoulder-length hair, parts of a seated figure holding an open book, and a creature that might represent a serpent or an otter. There are numerous fragments dating from the 14th and 15th centuries. These include portions of string courses, tracery, corbels, window jambs, and arch heads. At least three of the corbels were carved in the form of human heads. Over 1,500 fragments of painted medieval glass were found, most of it in a poor condition. These show that the glazing scheme used in the priory was mainly in grisaille (monochrome) style. Almost 1,300 fragments of glass from later periods, and nearly 1,150 sherds of ceramic roof tiles were also found. ### Artefacts from daily life Some 500 fragments of pottery were found dating from the medieval period. 
Most of these were parts of jars, jugs or pipkins and were found in the area of the kitchen range. Most of this pottery was produced locally, although 13 sherds of Stamford Ware, fragments of two jugs from northern France, and two small pieces of Saintonge pottery have been identified. Only a few wooden bowls were recovered. Much more pottery was found dating from the post-medieval period and later. Again most of this had been manufactured in England, especially in Staffordshire. Fragments of pottery from abroad included pieces from a Westerwald mug, a jug from Cologne and items of Chinese porcelain. The excavations produced over 4,000 sherds of glass, dating from the 12th to the 20th centuries, but only 16 of these came from the period before the dissolution. A total of 1,170 fragments from clay tobacco pipes were found, dating from about 1580 to the early 20th century. Six medieval coins were recovered, the earliest of which was a silver penny of John from the early 13th century. Coins from later periods were a silver threepence from the reign of Elizabeth I and a silver penny from Charles I. Only low-denomination coins were found from the 18th century and later, including a 10-pfennig piece from Germany dated 1901. Two silver spoons were recovered, one of which was dated 1846 from the hallmark. Objects made from copper alloy were found, many of which were associated with personal adornment and dress including brooches, buckles, and buttons. Also found from this period was a small simple chape (scabbard tip), and part of a skimmer that had been used in the kitchen. Artefacts made from iron, other than nails, were again mainly items of personal adornment and dress. Other identifiable iron items from this period included keys, two possible rowel spurs (spurs with revolving pointed wheels), and about 12 horseshoes. Nearly 2,000 fragments of lead were found, 940 of which were droplets of melted metal, some of these being a consequence of the fire in 1236. One of the earliest artefacts was a papal bulla dating from the rule of Pope Clement III (1187–91). Two other possible seals were discovered. A total of 15 lead discs were recovered, some of which were inscribed with crosses. Two of these were found in graves, but the purpose of the discs has not been reliably explained. The other lead artefacts from this period were associated with the structure of the buildings and include fragments of came (the lead strip used in leaded windows), ventilator grills, and water pipes. Leather fragments almost all came from shoes, and included an almost complete child's shoe dating from the late 16th or the 17th century. Another find was a small gemstone, a cabochon (polished) sardonyx. ## Present day Norton Priory is considered to be "a monastic site of international importance" and is "the most extensively excavated monastic site in Britain, if not Western Europe". It is open to the public and run by a charitable trust, the Norton Priory Museum Trust. The Trust was founded in 1975 and the first museum was opened in 1982; a much enlarged museum was built and opened in 2016. The Trust owns and maintains many of the artefacts found during the excavations, and has created an electronic database to record all the acquisitions. In addition, it holds records relating to the excavations, including site notebooks and photographs. The area open to the public consists of a museum, the standing archaeological remains, 42 acres of garden and woodland, and the walled garden of the former house.
### Museum The museum contains information relating to the history of the site and some of the artefacts discovered during the excavations. These include carved coffin lids, medieval mosaic tiles, pottery, scribe's writing equipment and domestic items from the various buildings on the site such as buttons, combs and wig curlers. Two medieval skeletons recovered in the excavations are on display, including one showing signs of Paget's disease of bone. Standing in the museum is a reconstruction of the cloister arcade as it had been built following the fire of 1236. It consists of moulded pointed arches with springer blocks, voussoirs and apex stones, supported on triple shafts with foliate capitals and moulded bases. Above the capitals, at the bases of the arches, are sculptures that include depictions of human and animal heads. The human heads consist of two canons with hoods and protruding tonsures, other males, and females with shoulder-length hair. In one spandrel is a seated figure with an outstretched arm holding a book. Other carvings depict such subjects as fabulous beasts, and an otter or a snake. The museum contains the medieval sandstone statue of Saint Christopher, which is considered to be "a work of national and even international importance". Saint Christopher was associated with the abbey because of its proximity to the River Mersey and the dangers associated with crossing the river. The statue shows the saint wading through fish-filled water carrying the Christ-child on his shoulder. It has been dated to about 1390, it is 3.37 metres (11.1 ft) tall, and was once painted in bright colours. The gallery also contains a three-dimensional representation of the statue as it is believed it would have originally appeared. ### Archaeological remains The archaeological remains are recognised as a Grade I listed building and a scheduled ancient monument, and are considered to be the most important monastic remains in Cheshire. They consist of the former undercroft and the foundations of the church and monastic buildings that were exposed during the excavations. The undercroft stands outside the museum building. It is a single-storey structure consisting of seven pairs of bays divided into two compartments, one of four and the other of three bays. It is entered through the portico added to the west front of the country house in 1886 by way of a pair of arched doorways in Norman style. The doorway to the right (south) is original, dating from the late 12th century, while the other doorway is a replica dated 1886. The older doorway has been described as "the finest decorated Norman doorway in Cheshire". It is in good condition with little evidence of erosion and Greene considers that this is because it has always been protected from the weather. The portico leads into the four-bay compartment. This has a tiled floor and contains a medieval-style fireplace. The roof is ribbed vaulted. On the east wall is a two-arched doorway leading to the former cloisters. To the north another archway leads to the three-bay compartment. This also has a tile floor and contains the brick wine bins added in the 1780s. The roof of this compartment has groined vaults. The undercroft also contains a bell mould, reconstructed from the fragments of the original mould found in the excavations. At the northern end of the undercroft is the passage known as the outer parlour. This has stone benches on each side and elaborately carved blind arcades above them. 
The arcades each consist of two groups of four round-headed arches with capitals, free-standing columns and bases that are set on the benches. The capitals and mouldings of the arches are decorated with a variety of carvings, the capitals being predominantly late Romanesque in style and the arches early Gothic. The carvings include depictions of human heads, stiff-leaf foliage and animals. ### Grounds The 38 acres (15 ha) of grounds surrounding the house have been largely restored to include the 18th-century pathways, the stream-glade and the 19th-century rock garden. The foundations exposed in the excavations show the plan of the former church and monastic buildings. In the grounds is a Grade II listed garden loggia in yellow sandstone, possibly designed by James Wyatt. At its front are two Doric columns and two antae, and above these is a cornice with a fluted frieze. The side walls are built in stone, and the back wall is constructed internally of stone and externally of brickwork. Also in the grounds are several modern sculptures, and a sculpture trail has been designed in conjunction with these. In the 1970s the fragments of the mould found in the bell pit were re-assembled and used to create a replica of the bell, which was cast in Widnes and now stands in a frame in the grounds. A ceremony marking its completion was performed by Sir Bernard Lovell in 1977. A herb garden was developed as part of the BBC's Hidden Garden programme. This seeks to re-create a herb garden as it would have been during the medieval period, and its plan is based on herb gardens in other monastic sites. The plants grown are those reputed to be of value in treating the diseases revealed in the excavated skeletons. ### Walled gardens The 3.5-acre (1.4 ha) walled garden was restored in the 1980s. It includes an orchard, fruit and vegetable gardens, ornamental borders and a rose walk, as well as the national collection of tree quince (Cydonia oblonga), with 20 different varieties. Close to the walled garden is a Grade II listed ice house, probably dating from the 18th century, which is constructed in brick covered with a mound of earth. The entrance is surrounded by stone walls, from which a tunnel leads to a circular domed chamber. ### Current activities The museum is a visitor attraction and arranges a series of events for the general public throughout the year, including guided tours, family fun days and concerts. Its educational programme is aimed at all ages; it includes workshops for the general public, and courses focusing on formal and informal aspects of children's education. An outreach programme is intended for individuals and groups in the community. Since its opening, the museum has won awards for its work in tourism, education, outreach and gardening. In 2004 the museum's Positive Partnerships project, in which people with learning disabilities worked alongside museum staff, was a finalist in the Gulbenkian Prize for museums and galleries. In August 2016 the newly rebuilt and expanded museum opened to the public. This cost £4.5m, of which £3.9m was contributed by the Heritage Lottery Fund. ## See also - List of monastic houses in Cheshire - Listed buildings in Runcorn (urban area) - Grade I and II\* listed buildings in Halton (borough) - List of Scheduled Monuments in Cheshire (1066–1539) - Norman architecture in Cheshire - Brooke baronets of Norton Priory
1,082,724
Sonic X-treme
1,167,228,654
Canceled video game by Sega
[ "1990s in video gaming", "Cancelled Sega 32X games", "Cancelled Sega Genesis games", "Cancelled Sega Saturn games", "Cancelled Windows games", "Sega Technical Institute games", "Single-player video games", "Sonic the Hedgehog video games", "Video games developed in the United States", "Video games scored by Howard Drossin" ]
Sonic X-treme was a platform game developed by Sega Technical Institute from 1994 until its cancellation in 1996. It was planned as the first fully 3D Sonic the Hedgehog game, taking Sonic into the 3D era of video games, and the first original Sonic game for the Sega Saturn. The storyline followed Sonic on his journey to stop Dr. Robotnik from stealing six magic rings from Tiara Boobowski and her father. X-treme featured open levels rotating around a fixed center of gravity and, like previous Sonic games, featured collectible rings and fast-paced gameplay. X-treme was conceived as a side-scrolling platform game for the Sega Genesis to succeed Sonic & Knuckles (1994). Development shifted to the 32X and then the Saturn and Windows, and the game was redesigned as a 3D platform game for the 1996 holiday season. The plan was disrupted by company politics, an unfavorable visit by Japanese Sega executives, and obstacles with the game engines planned for use, including one from Sonic Team for Nights into Dreams (1996). Amid increasing pressure and declining morale, designer Chris Senn and programmer Chris Coffin became ill, prompting producer Mike Wallis to cancel the game. A film tie-in with Metro-Goldwyn-Mayer was also canceled. In place of X-treme, Sega released a port of the Genesis game Sonic 3D Blast, but did not release an original 3D Sonic platform game until Sonic Adventure for the Dreamcast in 1998. The cancellation is considered an important factor in the Saturn's commercial failure, as it left the system with no original Sonic platform game. Elements similar to those in X-treme appeared in later games, such as Sonic Lost World (2013). ## Premise Sonic X-treme was a platform game in which players controlled Sonic the Hedgehog from a third-person perspective. It would have been the first Sonic the Hedgehog game to feature 3D gameplay, predating Sonic Adventure (1998). Gameplay was similar to the Sega Saturn platform game Bug! (1995), though producer Mike Wallis said that X-treme differed in that Sonic was free to roam levels, unconstrained by linear paths. X-treme featured a fisheye camera system, the "Reflex Lens", that gave players a wide-angle view, making levels appear to move around Sonic. Levels would rotate around a fixed center of gravity, meaning Sonic could run up walls, arriving at what was previously the ceiling. Sonic was also able to enter and exit the screen as he moved. Boss battles were set in open, arena-style levels, and the bosses were rendered in polygons instead of sprites. These levels used shading, transparency, and lighting effects to showcase the Saturn's technical potential. The developers wanted to take Sonic into the 3D era while building on its successes. In 1996, Wallis said X-treme featured familiar Sonic gameplay, but "[w]e're giving Sonic new moves, because Sonic is a hedgehog of the times, we're bringing him up to speed." Like previous Sonic games, X-treme emphasized speed and physics, and featured special stages and collectable rings. Additions included the abilities to throw rings at enemies, create a shield from rings, do spinning midair attacks, strike enemies below with a "Power Ball" attack, jump higher with less control than normal, and execute a "Sonic Boom" attack, in concert with the shield, that struck in 360 degrees. Surfing and bungee jumping were included as activities considered cool at the time. Former executive producer Michael Kosaka's design documents envisioned six zones with three levels each. 
At least four stages were developed before cancellation: Jade Gully, Red Sands, Galaxy Fortress, and Crystal Frost. Lead designer Chris Senn said he modeled and textured four main characters and created designs for 50 enemies and an hour of music. Fang the Sniper and Metal Sonic were planned as bosses. The plot went through several iterations; the one described in promotional materials involved Tiara Boobowski, who was set to become a major character, and her father, Professor Gazebo Boobowski, calling on Sonic to help defend the six magical Rings of Order from Dr. Robotnik. ## Background The original Sonic the Hedgehog was developed by Sonic Team in Japan. Released in 1991, it greatly increased the popularity of the Sega Genesis in North America. After its release, developer Yuji Naka and other Japanese staff relocated to California to join Sega Technical Institute (STI), a development division led by Mark Cerny. Cerny aimed to establish an elite development studio combining the design philosophies of American and Japanese developers. In 1991, STI began developing several games, including Sonic the Hedgehog 2 (1992), which was released the following year. Though Sonic the Hedgehog 2 was successful, the language barrier and cultural differences created a rift between the Japanese and American developers. Once development ended, Cerny departed STI and was replaced by former Atari employee Roger Hector. The American staff developed Sonic Spinball (1993), while the Japanese staff developed Sonic the Hedgehog 3 (1994) and Sonic & Knuckles (1994). According to developer Takashi Iizuka, the Japanese team experimented with 3D computer graphics for Sonic 3, but were unable to implement them with the limited power of the Genesis. After Sonic & Knuckles was completed, Naka returned to Japan to work on Nights into Dreams (1996) with Sonic Team. At the time, Sega of America operated as an independent entity, and relations with the Japanese were not always smooth. Some of this conflict may have been caused by Sega president Hayao Nakayama and his admiration for Sega of America; according to former Sega of America CEO Tom Kalinske, some executives disliked that Nakayama appeared to favor US executives, and "a lot of the Japanese executives were maybe a little jealous, and I think some of that played into the decisions that were made". By contrast, author Steven L. Kent opined that Nakayama bullied American executives and believed the Japanese executives made the best decisions. According to Hector, after the release of the Sony PlayStation in 1994, the atmosphere at Sega became political, with "lots of finger-pointing". ## Development After Naka's return to Japan with his team in late 1994, STI was left with mostly American staff. Early ideas for the next Sonic game included the experimental Sonic Crackers, which became the 32X game Knuckles' Chaotix (1995). Another concept came from STI head Roger Hector, who wanted to develop a game based on the Saturday morning Sonic the Hedgehog cartoon, and took Sonic Team and STI developers to DiC Animation's studios in Burbank, California after the release of Sonic Spinball to demonstrate his idea. STI developer Peter Morawiec designed gameplay from this concept as a side-scrolling video game with more focus on story than previous Sonic games. He called the pitch Sonic-16, intended for release on the Genesis. However, Sega management was not interested in a spin-off and disapproved of the idea as too slow for Sonic's speed. 
Instead, Morawiec moved on to work on Comix Zone (1995). Development of Sonic X-treme began in late 1994 at STI. Michael Kosaka was executive producer and team leader, and designer and CGI artist Senn created animations to pitch the game to Sega executives. As new consoles and the 32-bit era were on the way, the game was moved to the 32X under the working titles Sonic 32X and Sonic Mars after the "Project Mars" codename used for the 32X. The initial 32X design was an isometric side-scroller, but became a full 3D game with a view set on a floating plane. Kosaka completed design documents before the 32X was released, without a clear picture of the hardware. Some of Kosaka's concepts added new dynamics to the gameplay, including the ability for a second player to play as a character other than Tails. Various playable characters, including some from the cartoon, would be unlocked as they were rescued and have unique moves. Players could also collect Chaos Emeralds via special stages that involved playing a minigame similar to air hockey against Dr. Robotnik, and collecting all seven would unlock the true ending. In mid-1995, Kosaka resigned. According to Senn, "[Kosaka] and the executive producer Dean Lester were not getting along, and I believe Michael felt it was his best option to simply remove himself from what he thought was a politically unhealthy environment." Lester resigned later in 1995 and was replaced by Manny Granillo. Wallis, who had worked on The Ooze (1995) and Comix Zone, was placed in charge of Sonic X-treme. Lead programmer Don Goddard was replaced with Ofer Alon, whom some staff found difficult to work with, saying he did not share his work. As the design had changed significantly and the 32X struggled commercially, development moved to a planned Sega cartridge console powered by nVidia 3D hardware, designed to compete with the Nintendo 64. STI technical director Robert Morgan was instructed to explore this possibility, without hardware specifications or development kits. This decision was made because of the planned console's ability to handle 3D graphics and Sega of America senior management's disinterest in the Sega Saturn. After Sega announced that it would focus solely on the Saturn, development shifted again, costing the team several weeks. When Naka visited STI and observed the X-treme development, he simply said "good luck". ### Design The Saturn version was developed by two teams with two different game engines, starting in the second half of 1995. One team, led by Morgan and including programmer Chris Coffin, developed the free-roaming boss levels. This engine used tools from Saturn games such as Panzer Dragoon II Zwei and rendered bosses as fully polygonal characters. The other team, led by Senn and Alon, developed the main levels, working on PC with the intent of porting their work to Saturn. Alon and Senn focused on building an editor to construct the main levels. Music and backgrounds could not be coded in the editor, and had to be coded manually for each level. Enemies were created as pre-rendered sprites. Senn lost 25 pounds and became severely ill from overworking on X-treme. Other staff included composer Howard Drossin, lead artist Ross Harris, artist/designers Fei Cheng and Andrew Probert, and designers Jason Kuo and Richard Wheeler. Hirokazu Yasuhara, who designed the Genesis Sonic games, also contributed.
According to Senn, his team was completely different from the STI teams led by Naka; this, combined with their inexperience, "set up seeds of doubt and a political landmine waiting to go off if we didn't produce amazing results quickly." Wallis expressed frustration with the team structure, and felt that internal politics hampered development. Coffin felt the division of responsibilities would ensure every element was perfect. Difficulties arose from the design. According to Wallis, the game would combine 2D side-scrolling with "the ability to have him go into and out of the screen", which created unexpected problems in implementation. Senn said a primary problem was transitioning Sonic's simple and fast controls to a 3D environment: "The simplicity of movement, particularly moving very quickly, was now gone. Seeing far enough into the distance, not getting stuck on obstacles, and trying to maintain that sense of free speed was very difficult." 3D graphics were new, and developers were still learning how they would affect controls and gameplay. Programming for the Saturn proved difficult; as Alon could not get his engine, developed on PC, to run fast enough on Saturn, Morgan outsourced the port to Point of View Software, a third party company. ### Disputes within Sega In March 1996, Sega representatives from Japan visited STI to evaluate progress. At this point, X-treme was already behind schedule. Senn and other sources indicate that the key visitor was president Nakayama, though Wallis recalls executive vice president Shoichiro Irimajiri. The executive was unimpressed by Senn and Alon's work, as the version he saw, ported from PC to Saturn by Point of View, ran at a poor frame rate. Senn, who said the visitor "came storming out practically cursing after seeing what they'd done", and Alon attempted to show their most recent PC version, but he left before they had the opportunity. The visitor was impressed by Coffin's boss engine, and requested that X-treme be reworked around it. Concerned about the need to create essentially a new game before the strict October 1996 deadline, Wallis isolated Coffin's team, preventing outside influence. The team comprised four artists, two programmers, a contractor, and three designers, set up in an old STI location. They worked between sixteen and twenty hours a day. Although neither Senn nor Alon were officially part of the production after the visit, they continued working on their version, hoping to pitch it to Sega as a PC game. In April, Sega executive vice president Bernie Stolar approached STI and asked what he could do to help the game meet its deadline. At Wallis' suggestion, he provided the tools and source code for Sonic Team's 3D Saturn game Nights into Dreams. Two weeks later, Stolar requested that the team stop using the engine, as Naka had reportedly threatened to leave Sega if it were used. Senn dismissed this as speculation, but said that, if true, he understood Naka's interest in maintaining control over the Sonic Team technology and the Sonic franchise. Sonic Team was developing its own 3D Sonic game using the Nights engine, which could have motivated Naka's threat. The loss of the Nights engine cost the Sonic X-treme team weeks of development. In July 2022, Naka denied that he had anything to do with X-treme's use of the Nights engine and said it would have been useless because Nights was coded in assembly and X-treme was in C. He suggested that the developers invented the story to rationalize their failure to finish X-treme. 
### Cancellation In May 1996, Sega displayed a playable demo of X-treme at E3 in Los Angeles, and displayed a version of Coffin's engine. At this time, team morale had dropped and turnover was high. By August, Coffin had contracted severe walking pneumonia. Wallis praised Coffin's effort, but acknowledged that without Coffin the team had no chance of meeting its deadline. Around the same time, Senn became so ill that he was told he had six months to live, though he survived. With both teams crippled two months before the deadline, Wallis canceled the game. Sega initially stated that X-treme had been delayed, but in early 1997 announced that it had been canceled. For the 1996 holiday season, Sega instead concentrated on Sonic Team's Nights into Dreams and a port of the Genesis game Sonic 3D Blast by Traveller's Tales, to which Wallis contributed. Morawiec requested that X-treme be reworked into bonus stages in 3D Blast, but Traveller's Tales was unable to properly transfer Sonic's model. Sonic Team's work on a Saturn 3D Sonic game became Sonic Adventure for the Dreamcast. Remnants of their prototype can be seen in the Saturn compilation game Sonic Jam (1997). While Senn felt the version of X-treme he and Alon were developing could have been completed with an additional six to twelve months, Sega's PC division would not pay for its development, and may have been hesitant after the engine had been rejected for X-treme. After the project was rejected, Alon left Sega. Sega of America disbanded STI in 1996 following management changes. Hector believed that the success of PlayStation led to corporate turmoil within Sega that resulted in STI's dissolution. According to Wallis, STI was restructured as Sega of America's product development department after the previous product development department had become SegaSoft. ## Canceled film In August 1994, Sega of America signed a deal with Metro-Goldwyn-Mayer and Trilogy Entertainment to produce a live-action animated film based on Sonic the Hedgehog and tie into Sonic X-treme. In May 1995, the screenwriter Richard Jeffries pitched a treatment titled Sonic: Wonders of the World. It saw Sonic and Dr Robotnik escaping from Sonic X-treme into the real world. The film was canceled as none of the companies could come to an agreement. ## Legacy In place of Sonic X-treme, Sega released a port of the Genesis game Sonic 3D Blast, and Sonic Jam, a compilation of Genesis Sonic games with an additional 3D level. Sonic X-treme's cancellation is cited as a key reason for the Saturn's failure. While Sega controlled up to 55% of the console market in 1994, by August 1997, Sony controlled 47%, Nintendo 40%, and Sega only 12%. Journalists and fans have speculated about the impact X-treme might have had. David Houghton of GamesRadar+ described the prospect of "a good 3D Sonic game" on the Saturn as a "What if..." scenario akin to dinosaurs surviving extinction. IGN's Travis Fahs described X-treme as "an empty vessel for Sega's ambitions and the hopes of their fans" and said it was an important change for Sega, its mascot and the Saturn. Levi Buchanan, also writing for IGN, said while the Saturn's lack of a true Sonic sequel "didn't wholly destroy" its chances, it "sure didn't help matters much". Dave Zdyrko, who operated a prominent Saturn fan site, said: "I don't know if [X-treme] could've saved the Saturn, but ... Sonic helped make the Genesis and it made absolutely no sense why there wasn't a great new Sonic title ready at or near the launch of the [Saturn]". 
In a 2007 retrospective, producer Wallis said that X-treme would have been able to compete with Nintendo's Super Mario 64. Senn believed that a version of X-treme built by him with Alon's engine could have sold well. Next Generation said that X-treme would have damaged Sega's reputation if it did not compare well to competition such as Super Mario 64 and Crash Bandicoot. Naka was dissatisfied with the game, and in 2012 recalled feeling relief when he learned of its cancellation. Journalists noted similarities in level themes and mechanics between X-treme and the 2013 game Sonic Lost World, although Sonic Team head Iizuka said the resemblance was coincidental. Senn went to work on the Wii U Sonic game Sonic Boom: Rise of Lyric, which was released in 2014 to negative reviews. ### Prototypes and recreations For years, little content from X-treme was released beyond promotional screenshots. Fahs wrote in 2008 that most X-treme developers were unwilling to discuss the game, as "the ordeal remain[ed] a painful memory of unimaginable stress, pressure, and ultimate disappointment". In 2006, a copy of an early test engine was sold at auction for US\$2500 to an anonymous collector. An animated GIF image of gameplay was released, and after a fundraising project by the "Assemblergames" website community purchased the disc from the collector, the disk image was leaked on July 17, 2007. Senn created a website with development history including early footage, a playable character named Tiara, and concept music. Senn considered finishing X-treme himself and used some of its concepts in a Sonic fangame, though his plans never materialized. In February 2015, the fansite Sonic Retro obtained the X-treme source code and created a playable build, featuring the level shown in the E3 1996 demo. Reviewing the build, Hardcore Gamer's writers described it as rough but inventive, lacking speed but retaining the spirit of Sonic's design. They felt it could have been a solid direction for the franchise and a boost for the Saturn had it been completed. A Sonic Retro user began developing a homebrew Saturn game based on X-treme, Sonic Z-treme, in March 2017, and released a build in September 2018. Eurogamer described Z-treme as combining X-treme-style ideas and levels with new concepts from the developer, and said it was an impressive effort. ## See also - Development hell - Crunch
1,123,615
Herbig–Haro object
1,172,228,259
Small patches of nebulosity associated with newly born stars
[ "Articles containing video clips", "Emission nebulae", "Herbig–Haro objects", "Nebulae", "Star formation" ]
Herbig–Haro (HH) objects are bright patches of nebulosity associated with newborn stars. They are formed when narrow jets of partially ionised gas ejected by stars collide with nearby clouds of gas and dust at several hundred kilometres per second. Herbig–Haro objects are commonly found in star-forming regions, and several are often seen around a single star, aligned with its rotational axis. Most of them lie within about one parsec (3.26 light-years) of the source, although some have been observed several parsecs away. HH objects are transient phenomena that last around a few tens of thousands of years. They can change visibly over timescales of a few years as they move rapidly away from their parent star into the gas clouds of interstellar space (the interstellar medium or ISM). Hubble Space Telescope observations have revealed the complex evolution of HH objects over the period of a few years, as parts of the nebula fade while others brighten as they collide with the clumpy material of the interstellar medium. First observed in the late 19th century by Sherburne Wesley Burnham, Herbig–Haro objects were recognised as a distinct type of emission nebula in the 1940s. The first astronomers to study them in detail were George Herbig and Guillermo Haro, after whom they have been named. Herbig and Haro were working independently on studies of star formation when they first analysed the objects, and recognised that they were a by-product of the star formation process. Although HH objects are visible-wavelength phenomena, many remain invisible at these wavelengths due to dust and gas, and can only be detected at infrared wavelengths. Such objects, when observed in near infrared, are called molecular hydrogen emission-line objects (MHOs). ## Discovery and history of observations The first HH object was observed in the late 19th century by Sherburne Wesley Burnham, when he observed the star T Tauri with the 36-inch (910 mm) refracting telescope at Lick Observatory and noted a small patch of nebulosity nearby. It was thought to be an emission nebula, later becoming known as Burnham's Nebula, and was not recognized as a distinct class of object. T Tauri was found to be a very young and variable star, and is the prototype of the class of similar objects known as T Tauri stars which have yet to reach a state of hydrostatic equilibrium between gravitational collapse and energy generation through nuclear fusion at their centres. Fifty years after Burnham's discovery, several similar nebulae were discovered with almost star-like appearance. Both Haro and Herbig made independent observations of several of these objects in the Orion Nebula during the 1940s. Herbig also looked at Burnham's Nebula and found it displayed an unusual electromagnetic spectrum, with prominent emission lines of hydrogen, sulfur and oxygen. Haro found that all the objects of this type were invisible in infrared light. Following their independent discoveries, Herbig and Haro met at an astronomy conference in Tucson, Arizona in December 1949. Herbig had initially paid little attention to the objects he had discovered, being primarily concerned with the nearby stars, but on hearing Haro's findings he carried out more detailed studies of them. The Soviet astronomer Viktor Ambartsumian gave the objects their name (Herbig–Haro objects, normally shortened to HH objects), and based on their occurrence near young stars (a few hundred thousand years old), suggested they might represent an early stage in the formation of T Tauri stars. 
Studies of the HH objects showed they were highly ionised, and early theorists speculated that they were reflection nebulae containing low-luminosity hot stars deep inside. But the absence of infrared radiation from the nebulae meant there could not be stars within them, as these would have emitted abundant infrared light. In 1975 American astronomer R. D. Schwartz theorized that winds from T Tauri stars produce shocks in the ambient medium on encounter, resulting in the generation of visible light. With the discovery of the first proto-stellar jet in HH 46/47, it became clear that HH objects are indeed shock-induced phenomena, with the shocks driven by collimated jets from protostars. An image showing a question-mark-shaped feature in the field of HH 46/47 was reported on 18 August 2023 in The New York Times. ## Formation Stars form by gravitational collapse of interstellar gas clouds. As the collapse increases the density, radiative energy loss decreases due to increased opacity. This raises the temperature of the cloud, which prevents further collapse, and a hydrostatic equilibrium is established. Gas continues to fall towards the core in a rotating disk. The core of this system is called a protostar. Some of the accreting material is ejected out along the star's axis of rotation in two jets of partially ionised gas (plasma). The mechanism for producing these collimated bipolar jets is not entirely understood, but it is believed that interaction between the accretion disk and the stellar magnetic field accelerates some of the accreting material from within a few astronomical units of the star away from the disk plane. At these distances the outflow is divergent, fanning out at an angle in the range of 10–30°, but it becomes increasingly collimated at distances of tens to hundreds of astronomical units from the source, as its expansion is constrained. The jets also carry away the excess angular momentum resulting from accretion of material onto the star, which would otherwise cause the star to rotate too rapidly and disintegrate. When these jets collide with the interstellar medium, they give rise to the small patches of bright emission which comprise HH objects. ## Properties Electromagnetic emission from HH objects is caused when their associated shock waves collide with the interstellar medium, creating what are called "terminal working surfaces". The spectrum is continuous, but also has intense emission lines of neutral and ionized species. Spectroscopic observations of HH objects' Doppler shifts indicate velocities of several hundred kilometres per second, but the emission lines in those spectra are weaker than what would be expected from such high-speed collisions. This suggests that some of the material they are colliding with is also moving along the beam, although at a lower speed. Spectroscopic observations of HH objects show they are moving away from the source stars at speeds of several hundred kilometres per second. In recent years, the high optical resolution of the Hubble Space Telescope has revealed the proper motion (movement along the sky plane) of many HH objects in observations spaced several years apart. As they move away from the parent star, HH objects evolve significantly, varying in brightness on timescales of a few years. Individual compact knots or clumps within an object may brighten and fade or disappear entirely, while new knots have been seen to appear. 
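The velocities quoted above come from two standard measurements: the Doppler shift of the emission lines gives the speed along the line of sight, while the proper motion measured between observations taken years apart, combined with a distance estimate, gives the speed across the sky. The sketch below (Python) illustrates both textbook relations; the line shift, proper motion and distance used in the example are hypothetical illustrative values, not figures taken from this article.

```python
# Illustrative only: the two standard velocity estimates described above.
# The example numbers are hypothetical, chosen to give speeds of a few
# hundred km/s, typical of HH objects.

C_KM_S = 299_792.458      # speed of light in km/s
KM_S_PER_AU_YR = 4.74     # 1 au/yr expressed in km/s

def radial_velocity(delta_lambda_nm: float, rest_lambda_nm: float) -> float:
    """Non-relativistic Doppler shift: v_r = c * (delta_lambda / lambda_0)."""
    return C_KM_S * delta_lambda_nm / rest_lambda_nm

def tangential_velocity(proper_motion_arcsec_yr: float, distance_pc: float) -> float:
    """Plane-of-sky speed: v_t = 4.74 * mu[arcsec/yr] * d[pc], in km/s."""
    return KM_S_PER_AU_YR * proper_motion_arcsec_yr * distance_pc

# A knot whose H-alpha line (rest wavelength 656.3 nm) is shifted by 0.66 nm ...
print(f"radial velocity  ~ {radial_velocity(0.66, 656.3):.0f} km/s")    # ~300 km/s
# ... and a knot moving 0.15 arcsec per year across the sky at 450 pc.
print(f"tangential speed ~ {tangential_velocity(0.15, 450):.0f} km/s")  # ~320 km/s
```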
These brightness variations likely arise because of the precession of the jets, along with pulsating and intermittent eruptions from the parent stars. Faster jets catch up with earlier slower jets, creating the so-called "internal working surfaces", where streams of gas collide and generate shock waves and consequent emissions. The total mass being ejected by stars to form typical HH objects is estimated to be of the order of 10<sup>−8</sup> to 10<sup>−6</sup> solar masses per year, a very small amount of material compared to the mass of the stars themselves but amounting to about 1–10% of the total mass accreted by the source stars in a year. Mass loss tends to decrease with increasing age of the source. The temperatures observed in HH objects are typically about 9,000–12,000 K, similar to those found in other ionized nebulae such as H II regions and planetary nebulae. Densities, on the other hand, are higher than in other nebulae, ranging from a few thousand to a few tens of thousands of particles per cm<sup>3</sup>, compared to a few thousand particles per cm<sup>3</sup> in most H II regions and planetary nebulae. Densities also decrease as the source evolves over time. HH objects consist mostly of hydrogen and helium, which account for about 75% and 24% of their mass respectively. Around 1% of the mass of HH objects is made up of heavier chemical elements, including oxygen, sulfur, nitrogen, iron, calcium and magnesium. Abundances of these elements, determined from emission lines of the respective ions, are generally similar to their cosmic abundances. Many chemical compounds found in the surrounding interstellar medium, but not present in the source material, such as metal hydrides, are believed to have been produced by shock-induced chemical reactions. Around 20–30% of the gas in HH objects is ionized near the source star, but this proportion decreases with increasing distance. This implies the material is ionized in the polar jet, and recombines as it moves away from the star, rather than being ionized by later collisions. Shocks at the end of the jet can re-ionise some material, giving rise to bright "caps". ## Numbers and distribution HH objects are named approximately in order of their identification, with HH 1/2 being the earliest such objects to be identified. More than a thousand individual objects are now known. They are always present in star-forming H II regions, and are often found in large groups. They are typically observed near Bok globules (dark nebulae which contain very young stars) and often emanate from them. Several HH objects have been seen near a single energy source, forming a string of objects along the line of the polar axis of the parent star. The number of known HH objects has increased rapidly over the last few years, but that is a very small proportion of an estimated total of up to 150,000 in the Milky Way, the vast majority of which are too far away to be resolved. Most HH objects lie within about one parsec of their parent star. Many, however, are seen several parsecs away. HH 46/47 is located about 450 parsecs (1,500 light-years) away from the Sun and is powered by a class I protostar binary. The bipolar jet is slamming into the surrounding medium at a velocity of 300 kilometres per second, producing two emission caps about 2.6 parsecs (8.5 light-years) apart. The jet outflow is accompanied by a 0.3-parsec (0.98-light-year) long molecular gas outflow which is swept up by the jet itself. 
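As a rough consistency check on the lifetimes mentioned earlier, the HH 46/47 figures quoted above imply a dynamical age of only a few thousand years for each lobe, well within the tens-of-thousands-of-years timescale typical of HH objects. A minimal back-of-the-envelope sketch (Python), assuming a constant jet speed and ignoring projection effects:

```python
# Order-of-magnitude dynamical age for one lobe of HH 46/47, using only the
# figures quoted above (emission caps ~2.6 pc apart, jet speed ~300 km/s).

KM_PER_PARSEC = 3.086e13    # kilometres in one parsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

cap_separation_pc = 2.6     # separation of the two emission caps
jet_speed_km_s = 300.0      # speed at which the jet hits the surrounding medium

lobe_length_km = (cap_separation_pc / 2) * KM_PER_PARSEC
age_years = lobe_length_km / jet_speed_km_s / SECONDS_PER_YEAR

print(f"approximate dynamical age per lobe: {age_years:,.0f} years")  # ~4,200 years
```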
Infrared studies by the Spitzer Space Telescope have revealed a variety of chemical compounds in the molecular outflow of HH 46/47, including water (ice), methanol, methane, carbon dioxide (dry ice) and various silicates. Located around 460 parsecs (1,500 light-years) away in the Orion A molecular cloud, HH 34 is produced by a highly collimated bipolar jet powered by a class I protostar. Matter in the jet is moving at about 220 kilometres per second. Two bright bow shocks, separated by about 0.44 parsecs (1.4 light-years), are present on opposite sides of the source, followed by a series of fainter ones at larger distances, making the whole complex about 3 parsecs (9.8 light-years) long. The jet is surrounded by a 0.3-parsec (0.98-light-year) long weak molecular outflow near the source. ## Source stars The stars from which HH jets are emitted are all very young stars, a few tens of thousands to about a million years old. The youngest of these are still protostars in the process of accreting material from their surrounding gas. Astronomers divide these stars into classes 0, I, II and III, according to how much infrared radiation the stars emit. A greater amount of infrared radiation implies a larger amount of cooler material surrounding the star, which indicates it is still coalescing. The numbering of the classes arises because class 0 objects (the youngest) were not discovered until classes I, II and III had already been defined. Class 0 objects are only a few thousand years old, so young that they are not yet undergoing nuclear fusion reactions at their centres. Instead, they are powered only by the gravitational potential energy released as material falls onto them. Their outflows are mostly molecular, with low velocities (less than a hundred kilometres per second) and weak emission. Nuclear fusion has begun in the cores of Class I objects, but gas and dust are still falling onto their surfaces from the surrounding nebula, and most of their luminosity is accounted for by gravitational energy. They are generally still shrouded in dense clouds of dust and gas, which obscure all their visible light; as a result they can only be observed at infrared and radio wavelengths. Outflows from this class are dominated by ionized species, and velocities can range up to 400 kilometres per second. The in-fall of gas and dust has largely finished in Class II objects (Classical T Tauri stars), but they are still surrounded by disks of dust and gas, and produce weak outflows of low luminosity. Class III objects (Weak-line T Tauri stars) have only trace remnants of their original accretion disk. About 80% of the stars giving rise to HH objects are binary or multiple systems (two or more stars orbiting each other), which is a much higher proportion than that found for low-mass stars on the main sequence. This may indicate that binary systems are more likely to generate the jets which give rise to HH objects, and evidence suggests the largest HH outflows might be formed when multiple-star systems disintegrate. It is thought that most stars originate from multiple-star systems, but that a sizable fraction of these systems are disrupted before their stars reach the main sequence due to gravitational interactions with nearby stars and dense clouds of gas. The first and, as of May 2017, only large-scale Herbig–Haro object known around a proto-brown dwarf is HH 1165, which is connected to the proto-brown dwarf Mayrit 1701117. 
HH 1165 has a length of 0.8 light-years (0.26 parsec) and is located in the vicinity of the Sigma Orionis cluster. Previously, only small mini-jets (≤0.03 parsec) had been found around proto-brown dwarfs. ## Infrared counterparts HH objects associated with very young stars or very massive protostars are often hidden from view at optical wavelengths by the cloud of gas and dust from which they form. The intervening material can dim them at optical wavelengths by factors of tens or even hundreds (a dimming factor of 100 corresponds to about five magnitudes of extinction). Such deeply embedded objects can only be observed at infrared or radio wavelengths, usually in the frequencies of hot molecular hydrogen or warm carbon monoxide emission. In recent years, infrared images have revealed dozens of examples of "infrared HH objects". Most look like bow waves (similar to the waves at the head of a ship), and so are usually referred to as molecular "bow shocks". The physics of infrared bow shocks can be understood in much the same way as that of HH objects, since these objects are essentially the same – supersonic shocks driven by collimated jets from the opposite poles of a protostar. It is only the conditions in the jet and surrounding cloud that are different, causing infrared emission from molecules rather than optical emission from atoms and ions. In 2009 the acronym "MHO", for Molecular Hydrogen emission-line Object, was approved for such near-infrared objects by the International Astronomical Union Working Group on Designations, and it has been entered into their online Reference Dictionary of Nomenclature of Celestial Objects. The MHO catalogue contains over 2,000 objects. ## Ultraviolet Herbig–Haro objects HH objects have been observed in the ultraviolet spectrum. ## See also - Protoplanetary disk
67,610,108
Coventry City 2–2 Bristol City (1977)
1,170,565,686
Football match
[ "1976–77 in English football", "Bristol City F.C. matches", "Coventry City F.C. matches", "Football League First Division matches", "May 1977 sports events in the United Kingdom" ]
On 19 May 1977, the English association football clubs Coventry City and Bristol City contested a match in the Football League First Division at Highfield Road, Coventry. It was the final game of the 1976–77 Football League season for both clubs, and both faced potential relegation to the Second Division. A third club, Sunderland, were also in danger of relegation and were playing their final game at the same time, against Everton at Goodison Park. As a result of many Bristol City supporters being delayed in traffic as they travelled to the game, the kick-off in the Coventry–Bristol City game was delayed by five minutes, to avoid crowd congestion. Coventry took a 2–0 lead with goals in the 15th and 51st minutes, both scored by midfielder Tommy Hutchison. Bristol City then scored through Gerry Gow and Donnie Gillies to level the match at 2–2 after 79 minutes. With five minutes remaining, the supporters and players received the news that Sunderland had lost to Everton and that a draw would be sufficient for both Coventry and Bristol City to escape relegation at Sunderland's expense. As a result, the last five minutes were played out with neither team's players attempting to score and the match finished as a 2–2 draw. Sunderland made a complaint about the incident, and the Football League conducted an investigation, but both Coventry and Bristol City were eventually cleared of any wrongdoing. ## Background Coventry City were playing their tenth season in the Football League First Division, the then-highest tier in English football, having achieved promotion under former manager Jimmy Hill in 1966–67. Hill left the club after only a few games in the top flight, having decided to pursue a career in broadcasting with London Weekend Television, and the club survived relegation battles on the final day of the season in both of their first two seasons. They had also achieved some success with a top-six finish in 1969–70, which earned them a place in the European Fairs Cup for the 1970–71 season. Hill had returned to the club as managing director in 1975 but he sold several key players and both bookmakers and the club's supporters believed that Coventry were favourites for relegation prior to the 1976–77 campaign. They lost the opening two games, but a victory against Leeds United, with Coventry's line-up featuring new signings Terry Yorath, Ian Wallace and Bobby McDonald, as well as a breakthrough performance by young striker Mick Ferguson, marked the start of a better run of form. By early December, they had risen to 10th position. A series of poor results followed after the new year, however, leaving the team in the bottom three going into the final game. Bristol City had been promoted to the top flight from the Second Division in the 1975–76 season, finishing second behind Sunderland. They started the 1976–77 campaign with a surprise win against Arsenal at Highbury followed by a draw against Stoke City and a victory over Sunderland. The good start was tainted by a career-ending injury to striker Paul Cheesley against Stoke, and a 2–1 defeat against Manchester City marked the start of a dramatic fall down the table from second to twentieth between September and October. Lacking a quality forward, Bristol City failed to score goals and their slide down the table included a run of six defeats with only two goals scored. Their manager Alan Dicks was unable to find a striker on the transfer market, but his signing of veteran Leeds United defender Norman Hunter briefly revived the club's fortunes. 
Wins over Tottenham Hotspur and Norwich City took them briefly out of the relegation zone to 17th place, but Bristol City's form was poor after Christmas. Although they achieved a second win of the season against Arsenal, they suffered defeat to then-bottom-placed Sunderland at Roker Park, and a run of just one win in nine games up to early April left Bristol City themselves at the bottom of the table. A better run followed, including another win over Tottenham, and a surprise win over Liverpool at Ashton Gate in the penultimate game left Bristol City needing only a draw against Coventry to guarantee survival. In addition to Coventry and Bristol City, Sunderland were the third team involved in the last-day relegation fight. They had been promoted from the Second Division as champions the previous season, but they performed poorly in the first half of the campaign and were bottom of the table in mid-January. They performed much better thereafter, and by the last week of the season had secured nine wins and seven draws from their previous eighteen games. Coventry and Bristol City had played each other twice in the 1976–77 season. The first meeting was at Ashton Gate in late August in the second round of the Football League Cup, the fourth cup match between the two clubs in just three years. For the fourth time in those encounters, it was Coventry who prevailed, winning the game 1–0 with a Ferguson goal after 41 minutes. Bristol City had numerous chances to score throughout the game, but Coventry kept a clean sheet as a result of a string of saves by goalkeeper Jim Blyth. The sides met again at Ashton Gate in the league fixture on 6 November 1976. It was a match of few shots on goal as both sides failed to establish sustained attacks. The limited chances that did materialise were wasted, and the game finished 0–0. The league fixture at Coventry's Highfield Road ground was originally scheduled for New Year's Day, but was postponed until the end of the season due to a frozen pitch. ## Pre-match Tottenham and Stoke had completed all their league fixtures by the previous Saturday and Monday respectively. Tottenham were already confirmed as relegated, while Stoke's goal difference was so inferior to that of Coventry, Bristol City and Sunderland, that pundits regarded their chances of survival as nonexistent. West Ham United had also finished all their matches, but were mathematically safe. This left Bristol City, Coventry and Sunderland battling to avoid the final relegation position. A draw would have been sufficient for Sunderland to achieve safety, by finishing ahead of at least one of the other two clubs. Similarly, Bristol City could avoid relegation by drawing the game, as that would guarantee their finishing above Coventry. Coventry needed a win to guarantee their safety, but they could also survive by drawing the game if Sunderland were to lose. Sunderland's final game of the season was away against Everton, at Goodison Park, and was to be played at the same time as Coventry City's match against Bristol City. Approximately 10,000 of the 36,892 supporters were Bristol City fans, many of whom were delayed in traffic as they travelled to Coventry. As a result of this, to avoid crowd congestion, the kick-off was put back by five minutes. This was to prove very significant as the evening progressed, although club historians are not certain whether it was initiated by Coventry City, by the West Midlands Police or by the referee, Ron Challis. 
Hill later wrote in his autobiography that the decision had been made by the referee, whereas The Guardian's Rob Smyth maintained in a 2012 article that it was "generally perceived that [the delay] was the doing of Hill". ## Match ### Summary Coventry began the match in attacking style, seeking to secure the win which for them was the only way to be certain of survival. Committing several players to attack left Coventry vulnerable, and Bristol City twice found themselves with the ball behind Coventry's defence. The two chances fell to Chris Garland and Jimmy Mann, but neither was able to beat Coventry goalkeeper Les Sealey. Two minutes after Mann's miss, Coventry took the lead. A free kick by Mick Coop was parried weakly by Bristol City goalkeeper John Shaw and fell to Tommy Hutchison, who scored his second goal of the season with a powerful shot. Bristol City had several chances to equalise just before half-time – first through a goal-line clearance by McDonald, then through Trevor Tainton, whose 20-yard shot was saved by Sealey. The final Bristol City chance of the half resulted from a Coventry defensive mix-up; Yorath allowed a pass from Donnie Gillies through to Sealey, but the goalkeeper was not expecting it and the ball only narrowly missed the Coventry goal. The score remained 1–0 to Coventry at half-time. Seven minutes into the second half, Coventry scored again to double their lead to 2–0. Barry Powell hit the goalpost with a shot, and when it rebounded, Hutchison scored his second goal of the game with a shot which went in off the crossbar. Bristol City's historian David Woods wrote that "it looked all up" for them at this point, with the club apparently heading for relegation, but he noted that "fortunately, the players did not give up the ghost". They pulled a goal back just a few minutes after Coventry's second, when Gerry Gow received the ball from Gillies and fired a shot past Sealey from 12 yards. From that moment, Bristol City began to dominate the game, doing all the attacking as Coventry's defence struggled. Peter Cormack came on as a substitute to replace the injured Clive Whitehead, and Bristol City continued to seek the equaliser. That arrived in the 79th minute, when Garland headed the ball across to Gillies who struck it into the far corner of the Coventry goal. With the match level, it was once again Coventry who needed to score again to be certain of survival, but their players were exhausted and it was Bristol City who continued to press, looking for a winner. With five minutes remaining, news reached the Coventry directors' box that the game at Goodison Park was over, the earlier finish a consequence of the delayed start in the Coventry–Bristol City game. Everton had beaten Sunderland 2–0, which meant that should the game at Highfield Road remain a draw, both sides would be safe at Sunderland's expense. Conversely, if either side were to lose, that side would be relegated. Jimmy Hill immediately went to speak to the scoreboard operator, asking for the Everton–Sunderland score to be displayed across the ground. Seeing this, and realising its significance, the two sides called an unofficial truce. Coventry retreated to their own half, making no further attempt to gain the ball or to score, while Bristol City passed the ball around between their defence and goalkeeper, similarly making no attempt to advance up the field. 
The final five minutes were played out in this fashion, in what authors Geoff Harvey and Vanessa Strowger later described as "a good-natured kickabout". Referee Challis called a halt to the game without playing any injury time, and it finished as a 2–2 draw. ### Details Source: ## Post-match and legacy When the match concluded, the players embraced each other, while the supporters of both teams began to celebrate their mutual survival together. Hundreds of supporters invaded the pitch after the game, while some climbed onto the roofs of the executive boxes. Supporters of both teams went to Coventry City centre after the game to continue the celebrations, with some causing damage to infrastructure. Seventeen Bristol City and three Coventry supporters were arrested for assaulting police officers, threatening behaviour and drunkenness. At Goodison Park, many Sunderland supporters had remained in the ground after the conclusion of their match to await news from Coventry. The result was announced on the public-address system, bringing the news that their team would be relegated. Sunderland made a complaint about the incident, and the Football League conducted an investigation. Coventry were eventually cleared of any wrong-doing, although the secretary Alan Hardaker sent a letter to the club "reprimanding Coventry City for their actions". Supporters of Sunderland maintained a grudge against Hill and Coventry City for decades after the match. At a 2008 game between Sunderland and Fulham – a club for which Hill had worked as both player and chairman – the visiting Sunderland fans made angry chants towards Hill when he entered the pitch as part of a pre-match tribute to Johnny Haynes. Hill waved to the fans in response, but he had to receive a police escort for his safety. Coventry and Sunderland were involved in another last-day relegation battle 20 years later, at the end of the 1996–97 FA Premier League season. Coventry, managed at the time by Gordon Strachan, required a win against Tottenham Hotspur at White Hart Lane to survive, in addition to favourable results in games involving Sunderland and Middlesbrough. David Lacey of The Guardian mentioned the 1977 events in advance of the game, commenting that "should Sunderland survive at Coventry's expense ... Wearside will feel that an ancient wrong ... has been put right". As in 1977, Coventry's game started late, by 15 minutes, as a result of their travelling fans being delayed in traffic following an accident. Sunderland lost their game, while Middlesbrough drew, at which point Coventry were leading 2–1 with 15 minutes remaining. Manchester United manager Alex Ferguson later labelled this situation a "disgrace", but Strachan thought that the delay had hindered his players. He told reporters that knowing the outcome was in their hands, and that conceding a goal would relegate them, caused them to lose control of a game they had been dominating. Coventry held on for the win, consigning both Sunderland and Middlesbrough to relegation. Discussing the late kick-off, The Independent journalist Glenn Moore commented that it evoked "memories of the notorious escape of 1977". ## See also - West Germany 1–0 Austria, 1982 World Cup result which saw both teams proceed at the expense of Algeria - 2021 Los Angeles Chargers–Las Vegas Raiders game, the final NFL game where both teams would have reached the playoffs with a tie
10,258
Enid Blyton
1,169,874,766
English children's writer (1897–1968)
[ "1897 births", "1968 deaths", "20th-century English novelists", "20th-century English women writers", "Deaths from Alzheimer's disease", "Deaths from dementia in England", "English children's writers", "English women novelists", "Enid Blyton", "Golders Green Crematorium", "People from Beaconsfield", "People from East Dulwich", "Women mystery writers", "Writers from Hampstead" ]
Enid Mary Blyton (11 August 1897 – 28 November 1968) was an English children's writer, whose books have been worldwide bestsellers since the 1930s, selling more than 600 million copies. Her books are still enormously popular and have been translated into ninety languages. As of June 2019, Blyton held 4th place for the most translated author. She wrote on a wide range of topics, including education, natural history, fantasy, mystery, and biblical narratives. She is best remembered today for her Noddy, Famous Five, Secret Seven, the Five Find-Outers, and Malory Towers books, although she also wrote many others, including the St. Clare's, The Naughtiest Girl, and The Faraway Tree series. Her first book, Child Whispers, a 24-page collection of poems, was published in 1922. Following the commercial success of her early novels, such as Adventures of the Wishing-Chair (1937) and The Enchanted Wood (1939), Blyton went on to build a literary empire, sometimes producing fifty books a year in addition to her prolific magazine and newspaper contributions. Her writing was unplanned and sprang largely from her unconscious mind; she typed her stories as events unfolded before her. The sheer volume of her work and the speed with which she produced it led to rumours that Blyton employed an army of ghost writers, a charge she vigorously denied. Blyton's work became increasingly controversial among literary critics, teachers, and parents beginning in the 1950s due to the alleged unchallenging nature of her writing and her themes, particularly in the Noddy series. Some libraries and schools banned her works, and from the 1930s until the 1950s, the BBC refused to broadcast her stories because of their perceived lack of literary merit. Her books have been criticised as elitist, sexist, racist, xenophobic, and at odds with the more progressive environment that was emerging in post-World War II Britain, but they have continued to be bestsellers since her death in 1968. She felt she had a responsibility to provide her readers with a strong moral framework, so she encouraged them to support worthy causes. In particular, through the clubs she set up or supported, she encouraged and organised them to raise funds for animal and paediatric charities. The story of Blyton's life was dramatised in Enid, a BBC television film featuring Helena Bonham Carter in the title role. It was first broadcast in the UK on BBC Four in 2009. ## Early life and education Enid Blyton was born on 11 August 1897 in East Dulwich, South London, United Kingdom, the eldest of three children, to Thomas Carey Blyton (1870–1920), a cutlery salesman (recorded in the 1911 census with the occupation of "Mantle Manufacturer dealer [in] women's suits, skirts, etc.") and his wife Theresa Mary (née Harrison; 1874–1950). Enid's younger brothers, Hanly (1899–1983) and Carey (1902–1976), were born after the family had moved to a semi-detached house in Beckenham, then a village in Kent. A few months after her birth, Enid almost died from whooping cough, but was nursed back to health by her father, whom she adored. Thomas Blyton ignited Enid's interest in nature; in her autobiography she wrote that he "loved flowers and birds and wild animals, and knew more about them than anyone I had ever met". He also passed on his interest in gardening, art, music, literature, and theatre, and the pair often went on nature walks, much to the disapproval of Enid's mother, who showed little interest in her daughter's pursuits. 
Enid was devastated when her father left the family shortly after her 13th birthday to live with another woman. Enid and her mother did not have a good relationship, and Enid did not attend either of her parents' funerals. From 1907 to 1915, Blyton attended St Christopher's School in Beckenham, where she enjoyed physical activities and became school tennis champion and lacrosse captain. She was not keen on all the academic subjects, but excelled in writing and, in 1911, entered Arthur Mee's children's poetry competition. Mee offered to print her verses, encouraging her to produce more. Blyton's mother considered her efforts at writing to be a "waste of time and money", but she was encouraged to persevere by Mabel Attenborough, the aunt of school friend Mary Potter. Blyton's father taught her to play the piano, which she mastered well enough for him to believe she might follow in his sister's footsteps and become a professional musician. Blyton considered enrolling at the Guildhall School of Music, but decided she was better suited to becoming a writer. After finishing school, in 1915, as head girl, she moved out of the family home to live with her friend Mary Attenborough, before going to stay with George and Emily Hunt at Seckford Hall, near Woodbridge, in Suffolk. Seckford Hall, with its allegedly haunted room and secret passageway, provided inspiration for her later writing. At Woodbridge Congregational Church, Blyton met Ida Hunt, who taught at Ipswich High School and suggested she train there as a teacher. Blyton was introduced to the children at the nursery school and, recognising her natural affinity with them, enrolled in a National Froebel Union teacher training course at the school in September 1916. By this time, she had nearly terminated all contact with her family. Blyton's manuscripts were rejected by publishers on many occasions, which only made her more determined to succeed; as she later put it, "it is partly the struggle that helps you so much, that gives you determination, character, self-reliance – all things that help in any profession or trade, and most certainly in writing." In March 1916, her first poems were published in Nash's Magazine. She completed her teacher training course in December 1918 and, the following month, obtained a teaching appointment at Bickley Park School, a small, independent establishment for boys in Bickley, Kent. Two months later, Blyton received a teaching certificate with distinctions in zoology and principles of education; first class in botany, geography, practice and history of education, child hygiene, and classroom teaching; and second class in literature and elementary mathematics. In 1920, she moved to Southernhay, in Hook Road, Surbiton, as nursery governess to the four sons of architect Horace Thompson and his wife Gertrude, with whom Blyton spent four happy years. With the shortage of area schools, neighbouring children soon joined her charges, and a small school developed at the house. ## Early writing career In 1920, Blyton moved to Chessington and began writing in her spare time. The following year, she won the Saturday Westminster Review writing competition with her essay "On the Popular Fallacy that to the Pure All Things are Pure". Publications such as The Londoner, Home Weekly and The Bystander began to show an interest in her short stories and poems. Blyton's first book, Child Whispers, a 24-page collection of poems, was published in 1922. Its illustrator, Enid's schoolfriend Phyllis Chase, collaborated on several of her early works. 
Also in that year, Blyton began writing in annuals for Cassell and George Newnes, and her first piece of writing, "Peronel and his Pot of Glue", was accepted for publication in Teachers' World. Further boosting her success, in 1923, her poems appeared alongside those of Rudyard Kipling, Walter de la Mare, and G. K. Chesterton in a special issue of Teachers' World. Blyton's educational texts were influential in the 1920s and 1930s, with the most sizable being the three-volume The Teacher's Treasury (1926), the six-volume Modern Teaching (1928), the eight-volume Pictorial Knowledge (1930), and the four-volume Modern Teaching in the Infant School (1932). In July 1923, Blyton published Real Fairies, a collection of thirty-three poems written especially for the book with the exception of "Pretending", which had appeared earlier in Punch magazine. The following year, she published The Enid Blyton Book of Fairies, illustrated by Horace J. Knowles, and in 1926 the Book of Brownies. Several books of plays appeared in 1927, including A Book of Little Plays and The Play's the Thing, with the illustrator Alfred Bestall. In the 1930s, Blyton developed an interest in writing stories related to various myths, including those of ancient Greece and Rome; The Knights of the Round Table, Tales of Ancient Greece and Tales of Robin Hood were published in 1930. In Tales of Ancient Greece Blyton retold 16 well-known ancient Greek myths, but used the Latin rather than the Greek names and invented conversations between characters. The Adventures of Odysseus, Tales of the Ancient Greeks and Persians and Tales of the Romans followed in 1934. ## Commercial success ### New series: 1934–1948 The first of twenty-eight books in Blyton's Old Thatch series, The Talking Teapot and Other Tales, was published in 1934, the same year as Brer Rabbit Retold (Brer Rabbit had originally featured in the Uncle Remus stories by Joel Chandler Harris); her first serial story and first full-length book, Adventures of the Wishing-Chair, followed in 1937. The Enchanted Wood, the first book in the Faraway Tree series, published in 1939, is about a magic tree inspired by the Norse mythology that had fascinated Blyton as a child. According to Blyton's daughter Gillian the inspiration for the magic tree came from "thinking up a story one day and suddenly she was walking in the enchanted wood and found the tree. In her imagination she climbed up through the branches and met Moon-Face, Silky, the Saucepan Man and the rest of the characters. She had all she needed." As in the Wishing-Chair series, these fantasy books typically involve children being transported into a magical world in which they meet fairies, goblins, elves, pixies and other mythological creatures. Blyton's first full-length adventure novel, The Secret Island, was published in 1938, featuring the characters of Jack, Mike, Peggy and Nora. Described by The Glasgow Herald as a "Robinson Crusoe-style adventure on an island in an English lake", The Secret Island was a lifelong favourite of Gillian's and spawned the Secret series. The following year Blyton released her first book in the Circus series and her initial book in the Amelia Jane series, Naughty Amelia Jane! According to Gillian the main character was based on a large handmade doll given to her by her mother on her third birthday. During the 1940s Blyton became a prolific author, her success enhanced by her "marketing, publicity and branding that was far ahead of its time". 
In 1940 Blyton published two books – Three Boys and a Circus and Children of Kidillin – under the pseudonym of Mary Pollock (middle name plus first married name), in addition to the eleven published under her own name that year. So popular were Pollock's books that one reviewer was prompted to observe that "Enid Blyton had better look to her laurels". But Blyton's readers were not so easily deceived and many complained about the subterfuge to her and her publisher, with the result that all six books published under the name of Mary Pollock – two in 1940 and four in 1943 – were reissued under Blyton's name. Later in 1940 Blyton published the first of her boarding school story books and the first novel in the Naughtiest Girl series, The Naughtiest Girl in the School, which followed the exploits of the mischievous schoolgirl Elizabeth Allen at the fictional Whyteleafe School. The first of her six novels in the St. Clare's series, The Twins at St. Clare's, appeared the following year, featuring the twin sisters Patricia and Isabel O'Sullivan. In 1942 Blyton released the first book in the Mary Mouse series, Mary Mouse and the Dolls' House, about a mouse exiled from her mousehole who becomes a maid at a dolls' house. Twenty-three books in the series were produced between 1942 and 1964; 10,000 copies were sold in 1942 alone. The same year, Blyton published the first novel in the Famous Five series, Five on a Treasure Island, with illustrations by Eileen Soper. Its popularity resulted in twenty-one books between then and 1963, and the characters of Julian, Dick, Anne, George (Georgina) and Timmy the dog became household names in Britain. Matthew Grenby, author of Children's Literature, states that the five were involved with "unmasking hardened villains and solving serious crimes", although the novels were "hardly 'hard-boiled' thrillers". Blyton based the character of Georgina, a tomboy she described as "short-haired, freckled, sturdy, and snub-nosed" and "bold and daring, hot-tempered and loyal", on herself. Blyton had an interest in biblical narratives, and retold Old and New Testament stories. The Land of Far-Beyond (1942) is a Christian parable along the lines of John Bunyan's The Pilgrim's Progress (1698), with contemporary children as the main characters. In 1943 she published The Children's Life of Christ, a collection of fifty-nine short stories related to the life of Jesus, with her own slant on popular biblical stories, from the Nativity and the Three Wise Men through to the trial, the crucifixion and the resurrection. Tales from the Bible was published the following year, followed by The Boy with the Loaves and Fishes in 1948. The first book in Blyton's Five Find-Outers series, The Mystery of the Burnt Cottage, was published in 1943, as was the second book in the Faraway series, The Magic Faraway Tree, which in 2003 was voted 66th in the BBC's Big Read poll to find the UK's favourite book. Several of Blyton's works during this period have seaside themes; John Jolly by the Sea (1943), a picture book intended for younger readers, was published in a booklet format by Evans Brothers. Other books with a maritime theme include The Secret of Cliff Castle and Smuggler Ben, both attributed to Mary Pollock in 1943; The Island of Adventure, the first in the Adventure series of eight novels from 1944 onwards; and various novels of the Famous Five series such as Five on a Treasure Island (1942), Five on Kirrin Island Again (1947) and Five Go Down to the Sea (1953). 
Capitalising on her success, with a loyal and ever-growing readership, Blyton produced a new edition of many of her series such as the Famous Five, the Five Find-Outers and St. Clare's every year in addition to many other novels, short stories and books. In 1946 Blyton launched the first in the Malory Towers series of six books based around the schoolgirl Darrell Rivers, First Term at Malory Towers, which became extremely popular, particularly with girls. ### Peak output: 1949–1959 The first book in Blyton's Barney Mysteries series, The Rockingdown Mystery, was published in 1949, as was the first of her fifteen Secret Seven novels. The Secret Seven Society consists of Peter, his sister Janet, and their friends Colin, George, Jack, Pam and Barbara, who meet regularly in a shed in the garden to discuss peculiar events in their local community. Blyton rewrote the stories so they could be adapted into cartoons, which appeared in Mickey Mouse Weekly in 1951 with illustrations by George Brook. The French author Evelyne Lallemand continued the series in the 1970s, producing an additional twelve books, nine of which were translated into English by Anthea Bell between 1983 and 1987. Blyton's Noddy, about a little wooden boy from Toyland, first appeared in the Sunday Graphic on 5 June 1949, and in November that year Noddy Goes to Toyland, the first of at least two dozen books in the series, was published. The idea was conceived by one of Blyton's publishers, Sampson, Low, Marston and Company, who in 1949 arranged a meeting between Blyton and the Dutch illustrator Harmsen van der Beek. Despite having to communicate via an interpreter, he provided some initial sketches of how Toyland and its characters would be represented. Four days after the meeting Blyton sent the text of the first two Noddy books to her publisher, to be forwarded to van der Beek. The Noddy books became one of her most successful and best-known series, and were hugely popular in the 1950s. An extensive range of sub-series, spin-offs and strip books were produced throughout the decade, including Noddy's Library, Noddy's Garage of Books, Noddy's Castle of Books, Noddy's Toy Station of Books and Noddy's Shop of Books. In 1950 Blyton established the company Darrell Waters Ltd to manage her affairs. By the early 1950s she had reached the peak of her output, often publishing more than fifty books a year, and she remained extremely prolific throughout much of the decade. By 1955 Blyton had written her fourteenth Famous Five novel, Five Have Plenty of Fun, her fifteenth Mary Mouse book, Mary Mouse in Nursery Rhyme Land, her eighth book in the Adventure series, The River of Adventure, and her seventh Secret Seven novel, Secret Seven Win Through. She completed the sixth and final book of the Malory Towers series, Last Term at Malory Towers, in 1951. Blyton published several further books featuring the character of Scamp the terrier, following on from The Adventures of Scamp, a novel she had released in 1943 under the pseudonym of Mary Pollock. Scamp Goes on Holiday (1952) and Scamp and Bimbo, Scamp at School, Scamp and Caroline and Scamp Goes to the Zoo (1954) were illustrated by Pierre Probst. She introduced the character of Bom, a stylish toy drummer dressed in a bright red coat and helmet, alongside Noddy in TV Comic in July 1956. A book series began the same year with Bom the Little Toy Drummer, featuring illustrations by R. 
Paul-Hoye, and followed with Bom and His Magic Drumstick (1957), Bom Goes Adventuring and Bom Goes to Ho Ho Village (1958), Bom and the Clown and Bom and the Rainbow (1959) and Bom Goes to Magic Town (1960). In 1958 she produced two annuals featuring the character, the first of which included twenty short stories, poems and picture strips. ### Final works Many of Blyton's series, including Noddy and The Famous Five, continued to be successful in the 1960s; by 1962, 26 million copies of Noddy had been sold. Blyton concluded several of her long-running series in 1963, publishing the last books of The Famous Five (Five Are Together Again) and The Secret Seven (Fun for the Secret Seven); she also produced three more Brer Rabbit books with the illustrator Grace Lodge: Brer Rabbit Again, Brer Rabbit Book, and Brer Rabbit's a Rascal. In 1962 many of her books were among the first to be published by Armada Books in paperback, making them more affordable to children. After 1963 Blyton's output was generally confined to short stories and books intended for very young readers, such as Learn to Count with Noddy and Learn to Tell Time with Noddy in 1965, and Stories for Bedtime and the Sunshine Picture Story Book collection in 1966. Her declining health and a falling off in readership among older children have been put forward as the principal reasons for this change in trend. Blyton published her last book in the Noddy series, Noddy and the Aeroplane, in February 1964. In May the following year she published Mixed Bag, a song book with music written by her nephew Carey, and in August she released her last full-length books, The Man Who Stopped to Help and The Boy Who Came Back. ## Magazine and newspaper contributions Blyton cemented her reputation as a children's writer when in 1926 she took over the editing of Sunny Stories, a magazine that typically included the re-telling of legends, myths, stories and other articles for children. That same year she was given her own column in Teachers' World, entitled "From my Window". Three years later she began contributing a weekly page in the magazine, in which she published letters from her fox terrier dog Bobs. They proved to be so popular that in 1933 they were published in book form as Letters from Bobs, and sold ten thousand copies in the first week. Her most popular feature was "Round the Year with Enid Blyton", which consisted of forty-eight articles covering aspects of natural history such as weather, pond life, how to plant a school garden and how to make a bird table. Among Blyton's other nature projects was her monthly "Country Letter" feature that appeared in The Nature Lover magazine in 1935. Sunny Stories was renamed Enid Blyton's Sunny Stories in January 1937, and served as a vehicle for the serialisation of Blyton's books. Her first Naughty Amelia Jane story, about an anti-heroine based on a doll owned by her daughter Gillian, was published in the magazine. Blyton stopped contributing in 1952, and it closed down the following year, shortly before the appearance of the new fortnightly Enid Blyton Magazine written entirely by Blyton. The first edition appeared on 18 March 1953, and the magazine ran until September 1959. Noddy made his first appearance in the Sunday Graphic in 1949, the same year as Blyton's first daily Noddy strip for the London Evening Standard. It was illustrated by van der Beek until his death in 1953. 
## Writing style and technique Blyton worked in a wide range of fictional genres, from fairy tales to animal, nature, detective, mystery, and circus stories, but she often "blurred the boundaries" in her books, and encompassed a range of genres even in her short stories. In a 1958 article published in The Author, she wrote that there were a "dozen or more different types of stories for children", and she had tried them all, but her favourites were those with a family at their centre. In a letter to the psychologist Peter McKellar, Blyton describes her writing technique: > I shut my eyes for a few minutes, with my portable typewriter on my knee – I make my mind a blank and wait – and then, as clearly as I would see real children, my characters stand before me in my mind's eye ... The first sentence comes straight into my mind, I don't have to think of it – I don't have to think of anything. In another letter to McKellar she describes how in just five days she wrote the 60,000-word book The River of Adventure, the eighth in her Adventure Series, by listening to what she referred to as her "under-mind", which she contrasted with her "upper conscious mind". Blyton was unwilling to conduct any research or planning before beginning work on a new book, which, coupled with the lack of variety in her life, according to Druce almost inevitably presented the danger that she might unconsciously plagiarise the books she had read, including her own – something he argues she clearly did. Gillian has recalled that her mother "never knew where her stories came from", but that she used to talk about them "coming from her 'mind's eye'", as did William Wordsworth and Charles Dickens. Blyton had "thought it was made up of every experience she'd ever had, everything she's seen or heard or read, much of which had long disappeared from her conscious memory" but never knew the direction her stories would take. Blyton further explained in her biography that "If I tried to think out or invent the whole book, I could not do it. For one thing, it would bore me and for another, it would lack the 'verve' and the extraordinary touches and surprising ideas that flood out from my imagination." Blyton's daily routine varied little over the years. She usually began writing soon after breakfast, with her portable typewriter on her knee and her favourite red Moroccan shawl nearby; she believed that the colour red acted as a "mental stimulus" for her. Stopping only for a short lunch break, she continued writing until five o'clock, by which time she would usually have produced 6,000–10,000 words. A 2000 article in The Malay Mail considers the children in Blyton's books to have "lived in a world shaped by the realities of post-war austerity", enjoying freedom without the political correctness of today, which serves modern readers of Blyton's novels with a form of escapism. Brandon Robshaw of The Independent refers to the Blyton universe as "crammed with colour and character", "self-contained and internally consistent", noting that Blyton exemplifies a strong mistrust of adults and figures of authority in her works, creating a world in which children govern. Gillian noted that in her mother's adventure, detective and school stories for older children, "the hook is the strong storyline with plenty of cliffhangers, a trick she acquired from her years of writing serialised stories for children's magazines. There is always a strong moral framework in which bravery and loyalty are (eventually) rewarded". 
Blyton herself wrote that "my love of children is the whole foundation of all my work". Victor Watson, Assistant Director of Research at Homerton College, Cambridge, believes that Blyton's works reveal an "essential longing and potential associated with childhood", and notes how the opening pages of The Mountain of Adventure present a "deeply appealing ideal of childhood". He argues that Blyton's work differs from that of many other authors in its approach, describing the narrative of The Famous Five series for instance as "like a powerful spotlight, it seeks to illuminate, to explain, to demystify. It takes its readers on a roller-coaster story in which the darkness is always banished; everything puzzling, arbitrary, evocative is either dismissed or explained". Watson further notes how Blyton often used minimalist visual descriptions and introduced a few careless phrases such as "gleamed enchantingly" to appeal to her young readers. From the mid-1950s rumours began to circulate that Blyton had not written all the books attributed to her, a charge she found particularly distressing. She published an appeal in her magazine asking children to let her know if they heard such stories and, after one mother informed her that she had attended a parents' meeting at her daughter's school during which a young librarian had repeated the allegation, Blyton decided in 1955 to begin legal proceedings. The librarian was eventually forced to make a public apology in open court early the following year, but the rumours that Blyton operated "a 'company' of ghost writers" persisted, as some found it difficult to believe that one woman working alone could produce such a volume of work. ## Charitable work Blyton felt a responsibility to provide her readers with a positive moral framework, and she encouraged them to support worthy causes. Her view, expressed in a 1957 article, was that children should help animals and other children rather than adults: > [children] are not interested in helping adults; indeed, they think that adults themselves should tackle adult needs. But they are intensely interested in animals and other children and feel compassion for the blind boys and girls, and for the spastics who are unable to walk or talk. Blyton and the members of the children's clubs she promoted via her magazines raised a great deal of money for various charities; according to Blyton, membership of her clubs meant "working for others, for no reward". The largest of the clubs she was involved with was the Busy Bees, the junior section of the People's Dispensary for Sick Animals, which Blyton had actively supported since 1933. The club had been set up by Maria Dickin in 1934, and after Blyton publicised its existence in the Enid Blyton Magazine it attracted 100,000 members in three years. Such was Blyton's popularity among children that after she became Queen Bee in 1952 more than 20,000 additional members were recruited in her first year in office. The Enid Blyton Magazine Club was formed in 1953. Its primary objective was to raise funds to help those children with cerebral palsy who attended a centre in Cheyne Walk, in Chelsea, London, by furnishing an on-site hostel among other things. The Famous Five series gathered such a following that readers asked Blyton if they might form a fan club. She agreed, on condition that it serve a useful purpose, and suggested that it could raise funds for the Shaftesbury Society Babies' Home in Beaconsfield, on whose committee she had served since 1948. 
The club was established in 1952, and provided funds for equipping a Famous Five Ward at the home, a paddling pool, sun room, summer house, playground, birthday and Christmas celebrations, and visits to the pantomime. By the late 1950s Blyton's clubs had a membership of 500,000, and raised £35,000 in the six years of the Enid Blyton Magazine's run. By 1974 the Famous Five Club had a membership of 220,000, and was growing at the rate of 6,000 new members a year. The Beaconsfield home it was set up to support closed in 1967, but the club continued to raise funds for other paediatric charities, including an Enid Blyton bed at Great Ormond Street Hospital and a mini-bus for disabled children at Stoke Mandeville Hospital. ## Jigsaw puzzles and games Blyton capitalised upon her commercial success as an author by negotiating agreements with jigsaw puzzle and games manufacturers from the late 1940s onwards; by the early 1960s some 146 different companies were involved in merchandising Noddy alone. In 1948 Bestime released four jigsaw puzzles featuring her characters, and the first Enid Blyton board game appeared, Journey Through Fairyland, created by BGL. The first card game, Faraway Tree, appeared from Pepys in 1950. In 1954 Bestime released the first four jigsaw puzzles of the Secret Seven, and the following year a Secret Seven card game appeared. Bestime released the Little Noddy Car Game in 1953 and the Little Noddy Leap Frog Game in 1955, and in 1956 American manufacturer Parker Brothers released Little Noddy's Taxi Game, a board game which features Noddy driving about town, picking up various characters. Bestime released its Plywood Noddy Jigsaws series in 1957 and a Noddy jigsaw series featuring cards appeared from 1963, with illustrations by Robert Lee. Arrow Games became the chief producer of Noddy jigsaws in the late 1970s and early 1980s. Whitman manufactured four new Secret Seven jigsaw puzzles in 1975, and produced four new Malory Towers ones two years later. In 1979 the company released a Famous Five adventure board game, Famous Five Kirrin Island Treasure. Stephen Thraves wrote eight Famous Five adventure game books, published by Hodder & Stoughton in the 1980s. The first adventure game book of the series, The Wreckers' Tower Game, was published in October 1984. ## Personal life On 28 August 1924, Blyton married Major Hugh Alexander Pollock, DSO (1888–1971) at Bromley Register Office, without inviting her family. They married shortly after his divorce from his first wife, with whom he had two sons, one of them already deceased. Pollock was editor of the book department in the publishing firm George Newnes, which became Blyton's regular publisher. It was he who asked her to write a book about animals, resulting in The Zoo Book, completed in the month before their marriage. They initially lived in a flat in Chelsea before moving to Elfin Cottage in Beckenham in 1926 and then to Old Thatch in Bourne End (called Peterswood in her books) in 1929. Blyton's first daughter, Gillian, was born on 15 July 1931, and, after a miscarriage in 1934, she gave birth to a second daughter, Imogen, on 27 October 1935. In 1938, she and her family moved to a house in Beaconsfield, named Green Hedges by Blyton's readers, following a competition in her magazine. 
By the mid-1930s, Pollock had become a secret alcoholic, withdrawing increasingly from public life—possibly triggered through his meetings, as a publisher, with Winston Churchill, which may have reawakened the trauma Pollock suffered during World War I. With the outbreak of World War II, he became involved in the Home Guard and also re-encountered Ida Crowe, an aspiring writer 19 years his junior, whom he had first met years earlier. He made her an offer to join him as secretary in his posting to a Home Guard training center at Denbies, a Gothic mansion in Surrey belonging to Lord Ashcombe, and they began a romantic relationship. Blyton's marriage to Pollock was troubled for years, and according to Crowe's memoir, she had a series of affairs, including a lesbian relationship with one of the children's nannies. In 1941, Blyton met Kenneth Fraser Darrell Waters, a London surgeon with whom she began a serious affair. Pollock discovered the liaison, and threatened to initiate divorce proceedings. Due to fears that exposure of her adultery would ruin her public image, it was ultimately agreed that Blyton would instead file for divorce against Pollock. According to Crowe's memoir, Blyton promised that if he admitted to infidelity, she would allow him parental access to their daughters; but after the divorce, he was denied contact with them, and Blyton made sure he was subsequently unable to find work in publishing. Pollock, having married Crowe on 26 October 1943, eventually resumed his heavy drinking and was forced to petition for bankruptcy in 1950. Blyton and Darrell Waters married at the City of Westminster Register Office on 20 October 1943. She changed the surname of her daughters to Darrell Waters and publicly embraced her new role as a happily married and devoted doctor's wife. After discovering she was pregnant in the spring of 1945, Blyton miscarried five months later, following a fall from a ladder. The baby would have been Darrell Waters's first child and the son for which they both longed. Her love of tennis included playing naked, with nude tennis "a common practice in those days among the more louche members of the middle classes". Blyton's health began to deteriorate in 1957, when, during a round of golf, she started to feel faint and breathless, and, by 1960, she was displaying signs of dementia. Her agent, George Greenfield, recalled that it was "unthinkable" for the "most famous and successful of children's authors with her enormous energy and computerlike memory" to be losing her mind and suffering from what is now known as Alzheimer's disease in her mid-60s. Worsening Blyton's situation was her husband's declining health throughout the 1960s; he suffered from severe arthritis in his neck and hips, deafness, and became increasingly ill-tempered and erratic until his death on 15 September 1967. The story of Blyton's life was dramatised in a BBC film entitled Enid, which aired in the United Kingdom on BBC Four on 16 November 2009. Helena Bonham Carter, who played the title role, described Blyton as "a complete workaholic, an achievement junkie and an extremely canny businesswoman" who "knew how to brand herself, right down to the famous signature". ## Death and legacy During the months following her husband's death, Blyton became increasingly ill and moved into a nursing home three months before her death. She died in her sleep of Alzheimer's disease at the Greenways Nursing Home, Hampstead, North London, on 28 November 1968, aged 71. 
A memorial service was held at St James's Church, Piccadilly and she was cremated at Golders Green Crematorium, where her ashes remain. Blyton's home, Green Hedges, was auctioned on 26 May 1971 and demolished in 1973; the site is now occupied by houses and a street named Blyton Close. An English Heritage blue plaque commemorates Blyton at Hook Road in Chessington, where she lived from 1920 to 1924. In 2014, a plaque recording her time as a Beaconsfield resident from 1938 until her death in 1968 was unveiled in the town hall gardens, next to small iron figures of Noddy and Big Ears. Since her death and the publication of her daughter Imogen's 1989 autobiography, A Childhood at Green Hedges, Blyton has emerged as an emotionally immature, unstable and often malicious figure. Imogen considered her mother to be "arrogant, insecure, pretentious, very skilled at putting difficult or unpleasant things out of her mind, and without a trace of maternal instinct. As a child, I viewed her as a rather strict authority. As an adult I pitied her." Blyton's eldest daughter Gillian remembered her rather differently however, as "a fair and loving mother, and a fascinating companion". The Enid Blyton Trust for Children was established in 1982, with Imogen as its first chairman, and in 1985 it established the National Library for the Handicapped Child. Enid Blyton's Adventure Magazine began publication in September 1985 and, on 14 October 1992, the BBC began publishing Noddy Magazine and released the Noddy CD-Rom in October 1996. The first Enid Blyton Day was held at Rickmansworth on 6 March 1993 and, in October 1996, the Enid Blyton award, The Enid, was given to those who have made outstanding contributions towards children. The Enid Blyton Society was formed in early 1995, to provide "a focal point for collectors and enthusiasts of Enid Blyton" through its thrice-annual Enid Blyton Society Journal, its annual Enid Blyton Day and its website. On 16 December 1996, Channel 4 broadcast a documentary about Blyton, Secret Lives. To celebrate her centenary in 1997, exhibitions were put on at the London Toy & Model Museum (now closed), Hereford and Worcester County Museum and Bromley Library and, on 9 September, the Royal Mail issued centenary stamps. The London-based entertainment and retail company Trocadero plc purchased Blyton's Darrell Waters Ltd in 1995 for £14.6 million and established a subsidiary, Enid Blyton Ltd, to handle all intellectual properties, character brands and media in Blyton's works. The group changed its name to Chorion in 1998 but, after financial difficulties in 2012, sold its assets. Hachette UK acquired from Chorion world rights in the Blyton estate in March 2013, including The Famous Five series but excluding the rights to Noddy, which had been sold to DreamWorks Classics (formerly Classic Media, now a subsidiary of DreamWorks Animation) in 2012. Blyton's granddaughter, Sophie Smallwood, wrote a new Noddy book to celebrate the character's 60th birthday, 46 years after the last book was published; Noddy and the Farmyard Muddle (2009) was illustrated by Robert Tyndall. In February 2011, the manuscript of a previously unknown Blyton novel, Mr Tumpy's Caravan, was discovered by the archivist at Seven Stories, National Centre for Children's Books in a collection of papers belonging to Blyton's daughter Gillian, purchased by Seven Stories in 2010 following her death. 
It was initially thought to belong to a comic strip collection of the same name published in 1949, but it appears to be unrelated and is believed to be something written in the 1930s, which had been rejected by a publisher. In a 1982 survey of 10,000 eleven-year-old children, Blyton was voted their most popular writer. She is the world's fourth most-translated author, behind Agatha Christie, Jules Verne and William Shakespeare with her books being translated into 90 languages. From 2000 to 2010, Blyton was listed as a Top Ten author, selling almost 8 million copies (worth £31.2 million) in the UK alone. In 2003, The Magic Faraway Tree was voted 66th in the BBC's Big Read, a year-long survey of the UK's best-loved novels. In a 2008 poll conducted by the Costa Book Awards, Blyton was voted the UK's best-loved author ahead of Roald Dahl, J. K. Rowling, Jane Austen and Shakespeare. Her books continue to be very popular among children in Commonwealth nations such as India, Pakistan, Sri Lanka, Singapore, Malta, New Zealand and Australia, and around the world. They have also seen a surge of popularity in China, where they are "big with every generation". In March 2004, Chorion and the Chinese publisher Foreign Language Teaching and Research Press negotiated an agreement over the Noddy franchise, which included bringing the character to an animated series on television, with a potential audience of a further 95 million children under the age of five. Chorion spent around £10 million digitising Noddy and, as of 2002, had made television agreements with at least 11 countries worldwide. Novelists influenced by Blyton include the crime writer Denise Danks, whose fictional detective Georgina Powers is based on George from the Famous Five. Peter Hunt's A Step off the Path (1985) is also influenced by the Famous Five, and the St. Clare's and Malory Towers series provided the inspiration for Jacqueline Wilson's Double Act (1996) and Adèle Geras's Egerton Hall trilogy (1990–92) respectively. Blyton was important to Stieg Larsson. "The series Stieg Larsson most often mentioned were the Famous Five and the Adventure books." ## Critical backlash A.H. Thompson, who compiled an extensive overview of censorship efforts in the United Kingdom's public libraries, dedicated an entire chapter to "The Enid Blyton Affair", and wrote of her in 1975: > "No single author has caused more controversy among librarians, literary critics, teachers, and other educationalists and parents during the last thirty years, than Enid Blyton. How is it that the books of this tremendously popular writer for children should have given rise to accusations of censorship against librarians in Australia, New Zealand, and the United Kingdom?" Blyton's range of plots and settings has been described as limited, repetitive and continually recycled. Many of her books were critically assessed by teachers and librarians, deemed unfit for children to read, and removed from syllabuses and public libraries. Responding to claims that her moral views were "dependably predictable", Blyton commented that "most of you could write down perfectly correctly all the things that I believe in and stand for – you have found them in my books, and a writer's books are always a faithful reflection of himself". From the 1930s to the 1950s the BBC operated a de facto ban on dramatising Blyton's books for radio, considering her to be a "second-rater" whose work was without literary merit. 
The children's literary critic Margery Fisher likened Blyton's books to "slow poison", and Jean E. Sutcliffe of the BBC's schools broadcast department wrote of Blyton's ability to churn out "mediocre material", noting that "her capacity to do so amounts to genius ... anyone else would have died of boredom long ago". Michael Rosen, Children's Laureate from 2007 until 2009, wrote: "I find myself flinching at occasional bursts of snobbery and the assumed level of privilege of the children and families in the books." The children's author Anne Fine presented an overview of the concerns about Blyton's work and responses to them on BBC Radio 4 in November 2008, in which she noted the "drip, drip, drip of disapproval" associated with the books. Blyton's response to her critics was that she was uninterested in the views of anyone over the age of 12, stating that half the attacks on her work were motivated by jealousy and the rest came from "stupid people who don't know what they're talking about because they've never read any of my books". Despite criticism from contemporaries that the quality of her work suffered in the 1950s as its volume increased, Blyton nevertheless capitalised on being generally regarded at the time as "a more 'savoury', English alternative" to what some considered an "invasion" of Britain by American culture, in the form of "rock music, horror comics, television, teenage culture, delinquency, and Disney". According to British academic Nicholas Tucker, the works of Enid Blyton have been "banned from more public libraries over the years than is the case with any other adult or children's author", though such attempts to quell the popularity of her books seem to have been largely unsuccessful, and "she still remains very widely read".

### Simplicity

Some librarians felt that Blyton's restricted use of language, a conscious product of her teaching background, was prejudicial to an appreciation of more literary qualities. In a scathing article published in Encounter in 1958, the journalist Colin Welch remarked that it was "hard to see how a diet of Miss Blyton could help with the 11-plus or even with the Cambridge English Tripos", but reserved his harshest criticism for Blyton's Noddy, describing him as an "unnaturally priggish ... sanctimonious ... witless, spiritless, snivelling, sneaking doll." The author and educational psychologist Nicholas Tucker notes that it was common to see Blyton cited as people's favourite or least favourite author according to their age, and argues that her books create an "encapsulated world for young readers that simply dissolves with age, leaving behind only memories of excitement and strong identification". Fred Inglis considers Blyton's books to be technically easy to read, but also to be "emotionally and cognitively easy". He mentions that the psychologist Michael Woods believed that Blyton was different from many other older authors writing for children in that she seemed untroubled by presenting them with a world that differed from reality. Woods surmised that Blyton "was a child, she thought as a child, and wrote as a child ... the basic feeling is essentially pre-adolescent ... Enid Blyton has no moral dilemmas ... Inevitably Enid Blyton was labelled by rumour a child-hater. If true, such a fact should come as no surprise to us, for as a child herself all other children can be nothing but rivals for her." 
Inglis argues though that Blyton was clearly devoted to children and put an enormous amount of energy into her work, with a powerful belief in "representing the crude moral diagrams and garish fantasies of a readership". Blyton's daughter Imogen has stated that she "loved a relationship with children through her books", but real children were an intrusion, and there was no room for intruders in the world that Blyton occupied through her writing. ### Accusations of racism, xenophobia and sexism Accusations of racism in Blyton's books were first made by Lena Jeger in a Guardian article published in 1966. In the context of discussing possible moves to restrict publications inciting racial hatred, Jeger was critical of Blyton's The Little Black Doll, originally published in 1937. Sambo, the black doll of the title, is hated by his owner and other toys owing to his "ugly black face", and runs away. A shower of "magic rain" washes his face clean, after which he is welcomed back home with his now pink face. Jamaica Kincaid also considers the Noddy books to be "deeply racist" because of the blonde children and the black golliwogs. In Blyton's 1944 novel The Island of Adventure, a black servant named Jo-Jo is very intelligent, but is particularly cruel to the children. Accusations of xenophobia were also made. As George Greenfield observed, "Enid was very much part of that between the wars middle class which believed that foreigners were untrustworthy or funny or sometimes both". The publisher Macmillan conducted an internal assessment of Blyton's The Mystery That Never Was, submitted to them at the height of her fame in 1960. The review was carried out by the author and books editor Phyllis Hartnoll, in whose view "There is a faint but unattractive touch of old-fashioned xenophobia in the author's attitude to the thieves; they are 'foreign' ... and this seems to be regarded as sufficient to explain their criminality." Macmillan rejected the manuscript, but it was published by William Collins in 1961, and then again in 1965 and 1983. Blyton's depictions of boys and girls are considered by many critics to be sexist. In a Guardian article published in 2005 Lucy Mangan proposed that The Famous Five series depicts a power struggle between Julian, Dick and George (Georgina), in which the female characters either act like boys or are talked down to, as when Dick lectures George: "it's really time you gave up thinking you're as good as a boy". ### Revisions to later editions To address criticisms levelled at Blyton's work, some later editions have been altered to reflect more politically progressive attitudes towards issues such as race, gender, violence between young persons, the treatment of children by adults, and legal changes in Britain as to what is allowable for young children to do in the years since the stories were originally written (e.g. purchasing fireworks); modern reprints of the Noddy series substitute teddy bears or goblins for golliwogs, for instance. The golliwogs who steal Noddy's car and dump him naked in the Dark Wood in Here Comes Noddy Again are replaced by goblins in the 1986 revision, who strip Noddy only of his shoes and hat and return at the end of the story to apologise. The Faraway Tree's Dame Slap, who made regular use of corporal punishment, was changed to Dame Snap who no longer did so, and the names of Dick and Fanny in the same series were changed to Rick and Frannie. Characters in the Malory Towers and St. 
Clare's series are no longer spanked or threatened with a spanking, but are instead scolded. References to George's short hair making her look like a boy were removed in revisions to Five on a Hike Together, reflecting the idea that girls need not have long hair to be considered feminine or normal. Anne of The Famous Five stating that boys cannot wear pretty dresses or like girls' dolls was removed. In The Adventurous Four, the names of the young twin girls were changed from Jill and Mary to Pippa and Zoe. In 2010 Hodder, the publisher of the Famous Five series, announced its intention to update the language used in the books, of which it sold more than half a million copies a year. The changes, which Hodder described as "subtle", mainly affect the dialogue rather than the narrative. For instance, "school tunic" becomes "uniform", "mother and father" and "mother and daddy" (this latter one used by young female characters and deemed sexist) become "mum and dad", "bathing" is replaced by "swimming", and "jersey" by "jumper". Some commentators see the changes as necessary to encourage modern readers, whereas others regard them as unnecessary and patronising. In 2016 Hodder's parent company Hachette announced that they would abandon the revisions as, based on feedback, they had not been a success. ## Stage, film and television adaptations In 1954 Blyton adapted Noddy for the stage, producing the Noddy in Toyland pantomime in just two or three weeks. The production was staged at the 2660-seat Stoll Theatre in Kingsway, London at Christmas. Its popularity resulted in the show running during the Christmas season for five or six years. Blyton was delighted with its reception by children in the audience, and attended the theatre three or four times a week. TV adaptations of Noddy since 1954 include one in the 1970s narrated by Richard Briers. In 1955 a stage play based on the Famous Five was produced, and in January 1997 the King's Head Theatre embarked on a six-month tour of the UK with The Famous Five Musical, to commemorate Blyton's centenary. On 21 November 1998 The Secret Seven Save the World was first performed at the Sherman Theatre in Cardiff. There have also been several film and television adaptations of the Famous Five: by the Children's Film Foundation in 1957 and 1964, Southern Television in 1978–79, and Zenith Productions in 1995–97. The series was also adapted for the German film Fünf Freunde, directed by Mike Marzuk and released in 2011. The Comic Strip, a group of British comedians, produced two extreme parodies of the Famous Five for Channel 4 television: Five Go Mad in Dorset, broadcast in 1982, and Five Go Mad on Mescalin, broadcast the following year. A third in the series, Five Go to Rehab, was broadcast on Sky in 2012. Blyton's The Faraway Tree series of books has also been adapted to television and film. On 29 September 1997 the BBC began broadcasting an animated series called The Enchanted Lands, based on the series. It was announced in October 2014 that a deal had been signed with publishers Hachette for "The Faraway Tree" series to be adapted into a live-action film by director Sam Mendes' production company. Marlene Johnson, head of children's books at Hachette, said: "Enid Blyton was a passionate advocate of children's storytelling, and The Magic Faraway Tree is a fantastic example of her creative imagination." Blyton's Malory Towers has been adapted into a musical of the same name by Emma Rice's theatre company. 
It was scheduled for a UK tour in spring 2020, but the tour was postponed due to the COVID-19 pandemic. In 2020, Malory Towers was adapted as a 13-part TV series for the BBC, made partly in Toronto and partly in the UK in association with Canada's Family Channel. The series aired in the UK from April 2020 and has been renewed for three more series.

## Papers

Seven Stories, the National Centre for Children's Books in Newcastle upon Tyne, holds the largest public collection of Blyton's papers and typescripts. The Seven Stories collection contains a significant number of Blyton's typescripts, including the previously unpublished novel Mr Tumpy's Caravan, as well as personal papers and diaries. The purchase of the material in 2010 was made possible by special funding from the Heritage Lottery Fund, the MLA/V&A Purchase Grant Fund, and two private donations.

## See also

- Enid Blyton bibliography
- Enid Blyton Society
- Enid Blyton's illustrators
- Ruskin Bond
1,966,096
Emanuel Moravec
1,171,276,473
Czech military officer, writer, and politician
[ "1893 births", "1945 deaths", "1945 suicides", "Czech Freemasons", "Czech anti-communists", "Czech collaborators with Nazi Germany", "Czech fascists", "Czechoslovak Army officers", "Czechoslovak politicians who committed suicide", "Government ministers of Czechoslovakia", "Military personnel from Prague", "National Partnership politicians", "Nazis who committed suicide", "People from the Kingdom of Bohemia", "Suicides by firearm", "Suicides by firearm in Czechoslovakia", "Suicides by firearm in the Czech Republic" ]
Emanuel Moravec (17 April 1893 – 5 May 1945) was a Czech army officer and writer who served as the collaborationist Minister of Education of the Protectorate of Bohemia and Moravia between 1942 and 1945. He was also chair of the Board of Trustees for the Education of Youth, a fascist youth organisation in the protectorate. In World War I, Moravec served in the Austro-Hungarian Army, but following capture by the Russians he changed sides to join Russian-backed Serbian forces and then the Czechoslovak Legion, which went on to fight on the side of the White Army in the Russian Civil War. During the interwar period he commanded an infantry battalion in the Czechoslovak Army. As a proponent of democracy during the 1930s, Moravec was outspoken in his warnings about the expansionist plans of Germany under Adolf Hitler and appealed for armed action rather than capitulation to German demands for the Sudetenland. In the aftermath of the German occupation of the rump Czechoslovakia, he became an enthusiastic collaborator, realigning his political worldview towards fascism. He committed suicide in the final days of World War II. Unlike some officials of the short-lived protectorate government, whose reputations were rehabilitated in whole or in part after the war, Moravec's good reputation did not survive his tenure in office and he has been widely derided as a "Czech Quisling".

## Early life and education

Emanuel Moravec was born in Prague, the son of a modest merchant family originally from Kutná Hora. He graduated from a vocational school and found employment as a clerk at a Prague company. At the outbreak of World War I, Moravec was conscripted into the Austro-Hungarian Army and dispatched with his unit to the Carpathian Front. Moravec was captured by the Imperial Russian Army in 1915 and held at a prisoner-of-war camp in Samarkand. He was subsequently paroled and given command of a machine-gun platoon in the First Serbian Volunteer Division, a unit consisting of former prisoners of war, including Serbs and other Slavs from the countries of the Austro-Hungarian Empire, fighting on the Russian side. In September 1916, following fierce action against Bulgarian forces along the Dobrudzha Front, Moravec was hospitalized with shell shock. Upon his release, he joined the Czechoslovak Legion, falsely claiming to hold an engineering degree to receive an officer's commission. The Czechoslovak Legion, a volunteer unit composed of diaspora Czechs and Slovaks as well as deserters from the Austro-Hungarian Army, had been formed in 1917 to support the Allies; it later became involved in the Russian Civil War, fighting on the side of the White Russians. Over the next two years, Moravec saw combat with the Legion in Russia.

## Career

### First Czechoslovak Republic

Moravec returned to a newly independent Czechoslovakia at the end of World War I with the legionary rank of captain. He was accepted into Prague's War School and, upon graduation, commissioned as a major in the Czechoslovak Army. He ultimately came to command the 1st Field Battalion of the 21st Infantry Regiment in Znojmo. Alongside his military career, Moravec contributed to newspapers and magazines, including Lidové noviny, on political and military matters. Writing under the pen name Stanislav Yester, he won the Baťa Prize for Journalism. In 1931 Moravec was appointed an instructor at the War School and promoted to colonel. In his writings, Moravec had become increasingly emphatic about the growing ambitions of Nazi Germany. 
He called for Czechoslovakia to form an alliance with Poland and Italy against what he saw as a rising German threat. Moravec came to be seen as one of Czechoslovakia's leading geopolitical strategists and caught the attention of President Tomáš Garrigue Masaryk. Moravec wrote the preface to a printed edition of one of Masaryk's addresses to the Czechoslovak Army. In it, he signaled his support for the creation of the democratic state of Czechoslovakia that had come out of World War I, as well as his personal loyalty to Masaryk: > The age of democracy has given us a new man, who has spoken and demanded to be heard in every field of human activity. This new man has given us also a new soldier with new tasks and duties ... No one ... has said so much healthy about the new soldier as President Masaryk. When Masaryk died in 1935, Moravec served as one of the pallbearers at his funeral. In 1938 Moravec warned that "if Czechoslovakia should fall, France would find herself politically on the European periphery". Moravec argued that the head of the Danube Basin was guarded by what he described as the "fortress of Bohemia" – the land barrier that marked the natural border between eastern and western Europe. If a state were to take Czechoslovakia it would, therefore, control the head of the Danube basin and be free to strike against either France or Poland with ease. Although Moravec was concerned with German political and military aims he generally rejected some of the more extreme aspects of anti-German thought, taking a cautiously receptive approach to Emanuel Rádl's thesis which posited the existence of an irrational Czech racism towards Germans. #### Munich Agreement In 1938 German demands for the Sudetenland came to a head. In September, General Jan Syrový, inspector-general of the Czechoslovak Army, was installed by President Edvard Beneš as prime minister. In response to the German ultimatum, Syrový declared that "further concessions from our side are no longer possible"; 42 Czechoslovak divisions were mobilized in preparation for an expected German invasion. By the end of September, with Czechoslovakia abandoned by France and Britain, and territorial demands piled on from Poland, Beneš backtracked on Czechoslovakia's refusal to accept further German requests. At this time, as well as holding his army post, Moravec was serving as a member of the Committee for the Defense of the Republic, a nationalist pressure group led by the son of the former Czechoslovak finance minister Alois Rašín. In that capacity he sought an audience with Beneš during the last week of September, on the eve of the Munich ratification. During a two-hour confrontation with Beneš, Moravec pleaded with the president to declare war against Germany, and not capitulate to German demands. His pleas went unheeded. ### Second Czechoslovak Republic The Munich Agreement left Moravec disillusioned with both Western democracies and Beneš' diplomatic competence. According to Moravec, "apostles without courage" had led Czechoslovakia to capitulation. He expressed anger at the government's evocation of national ideals in its announcement of the agreement, declaring that a state unwilling to defend its ideals should not boast of them in the same way "a whore has no right to boast of her honor". As a further expression of his contempt for the government, he mockingly requested leave to join the army of El Salvador. 
During the short-lived Second Czechoslovak Republic, with Prague actively seeking to appease Germany to avoid further territorial losses, Moravec was forced to quit teaching at the military academy. Moreover, he found himself prohibited by the government from writing for newspapers due to concerns that the incendiary, anti-German nature of his editorials would be unduly provocative. ### Protectorate of Bohemia and Moravia On 16 March 1939, Germany occupied the rump Czechoslovak state and the German-controlled Protectorate of Bohemia and Moravia was declared. According to František Moravec, Emanuel Moravec attempted to leave the Czech lands prior to the German arrival and join the military cadre being sent abroad. His offer of service was rejected. Moravec was particularly concerned that his earlier denunciations of Germany, and his reputation as a strident anti-German polemicist, might make him a target of the new regime. He was surprised, therefore, when the new German authorities informed him he could resume writing books and newspaper columns. Moravec returned to writing with gusto and a reoriented editorial line, declaring "our nation could have died in war [with Germany]. Now the whole nation will die of fright and fear". Writing in V úloze mouřenína – Československá tragedie 1938, the most popular of his works, Moravec sought to more fully reconcile his support for the Germans with his earlier calls for resistance. He indicted Beneš and the intelligentsia for Czechoslovakia's defeat and declared it was the unwillingness of the elite to confront Germany militarily that demonstrated democracy's moral decay, thereby ultimately justifying its termination: > ... the mottoes humanism and democracy were fluttering about everywhere, but the Czech nation was actually living off its great military tradition of Hussitism and revolutionary armies. All attempts to smother the old-soldierly character that was in the blood of this people led nowhere. The soppy lemonade of moribund pacifism offered in the fragile glass of the League of Nations (that was after 1919 already cracked) was enjoyed only by a group of the intelligentsia that had a particularly girlish character. In 1941 Moravec helped found the Board of Trustees for the Education of Youth, a fascist youth group, and served as its chairman. The following year, Reinhard Heydrich who, in his role as the Berlin-appointed Deputy Protector of Bohemia and Moravia held penultimate day-to-day authority in the protectorate, compelled President Emil Hácha to appoint Moravec as the protectorate's education minister. Unlike other protectorate ministries, the education ministry under Moravec was given a measure of autonomy and not required to report to an overseer in the office of the Reich Protector. As with all protectorate ministers, Moravec's mandate to hold office was at the pleasure of the Reich Protector, as set-out in the 16 March 1939 decree of the German government. #### Policies and initiatives as Minister of Education By the time Moravec was given authority for the education ministry, Czech universities had been closed, school textbooks revised, and more than 1,000 student leaders deported to Sachsenhausen concentration camp. 
In his new post as minister of education, Moravec instituted the study of German as a compulsory subject in schools, explaining that it would become a lingua franca of Europe: "[e]very Czech who desires to excel in the future must acquire the German language so that work opportunities in all fields are open to them not only in the Reich, but also in Europe and the whole world ... learn German in order that the Czechs' good reputation can spread way beyond the frontiers of Bohemia and Moravia". He also promoted the idea of Czech culture as a historic component of Germanic culture. The Czech Women's Center, originally founded during the Second Republic as a group of professional women seeking greater gender equality, operated under Moravec's auspices. It was at his suggestion that it became a leading advocate for nutritional education. Moravec did not limit himself to educational questions. In 1943 he advanced a proposal to deploy the Army of the Protectorate to the Eastern Front in support of German operations. Hácha discussed the proposal with Reich Minister for Bohemia and Moravia Karl Hermann Frank who ultimately decided not to forward it to Adolf Hitler. Moravec reportedly offered the noted Czech journalist Ferdinand Peroutka release from Buchenwald concentration camp in exchange for accepting a position writing for the newspaper Lidové noviny, an offer Peroutka declined. #### Anti-Semitism During his tenure as education minister, Moravec adopted an anti-Semitic worldview that largely mirrored that of the Nazi Party. It positioned Germany as fighting a war to save humanity from Judaism. Moravec publicly blamed Jews for pre-war tensions between the former Czechoslovakia and Germany. He claimed that "Jewish capitalist interests hitched themselves to Anglo-French strategic interests and, entirely artificially and cunningly, escalated Czech hatred of the German nation to a state of unbounded fury." In Tatsachen und Irrtümer ("Facts and Errors") Moravec declared that the annexation of the Czech lands to Germany would benefit Czechs "because the Jews have been excluded from the German nation, the agents of capitalism have been rendered powerless in Germany". #### Assassination target In the winter of 1939–40, the Czechoslovak resistance group known as the Three Kings attempted to kill Moravec with a letter bomb. The Czechoslovak government-in-exile also considered targeting Moravec for assassination, but decided to go after Heydrich instead in what became known as Operation Anthropoid. Following Heydrich's death, Moravec was a keynote speaker at several mass rallies throughout the Protectorate. These were intended to demonstrate the opposition of ordinary Czechs to Heydrich's killing. ## Death During the Prague Uprising of May 1945, Moravec attempted to drive to a radio station under German control in the hope of broadcasting an appeal for calm. When the vehicle he was traveling in ran out of fuel Moravec dismounted and shot himself in the head with a pistol, presumably to avoid capture. He died four days before the liberation of Prague. ## Personal life Moravec was a master Mason, a fact that earned him contempt from some in the pre-Protectorate Czech fascist community such as the Vlajka. Both Moravec and his private secretary, Franz Stuchlik, were keen rock collectors. After Moravec's death, his collection was confiscated by the Czechoslovak state and donated to the National Museum. As of 2015, 107 mineral samples from Moravec's private collection were still held by the museum. 
### Marriages

Moravec was married three times. His first wife, Helena Georgijevna Beka, whom he met while a prisoner of war in Samarkand, was a close relative of the prominent Bolshevik Alexei Rykov. With her he had two sons, Igor (1920–1947) and Jurij (1923–1964). In 1932 he was divorced and, in April that year, married Pavla Szondy, who gave birth to Moravec's third son, Pavel. This marriage also ended in separation, Moravec and Szondy divorcing in 1938. In 1942 Moravec married Jolana Emmerová, his housemaid, who was only sixteen when their relationship caused the end of his previous marriage.

### Children

Igor fought on the Eastern Front as a volunteer with the 3rd SS Panzer Division Totenkopf. His younger brother, Jurij, served in the Wehrmacht's 137th Infantry Division and, unlike Igor, was privately critical of his father's political views. While serving in France, Jurij was caught drunk on guard duty and sentenced to six months' imprisonment and a one-grade demotion. At the request of Emanuel Moravec, Frank personally appealed to OKW operations chief Generaloberst Alfred Jodl in the matter and the younger Moravec's sentence was quashed. Pavel was sent to school in Salzburg after the establishment of the Protectorate and died in an air raid in 1944. Igor was arrested and executed by hanging at the end of the war on charges of murder and treason. Jurij was arrested and sentenced to a prison term at the end of the war and upon release emigrated to West Germany.

## Legacy

Denounced by the Allies and the Czech government-in-exile during World War II as a "Czech Quisling", Moravec has been described by John Laughland as "an enthusiastic collaborator" with Nazi Germany. This contrasts with other protectorate-era officials like Emil Hácha, whom Laughland calls "a tragic figure", or Jaroslav Eminger, who was later completely exonerated for his service in the Protectorate government. During the 2006 presentation of the Gratias Agit Award, given annually by the Czech Ministry of Foreign Affairs to recognize those who promote the Czech Republic, foreign minister Cyril Svoboda declared that "... we are also a country of those who have deformed our good name, people [such] as Emanuel Moravec, Klement Gottwald". Czech historian Jiří Pernes has argued that had Moravec died before March 1939 he would have been remembered as a well-regarded Bohemian patriot; his pre-war record was sufficiently distinguished to earn him a place in history. In 1997 Pernes published a biography of Moravec. He was later criticized for the volume which, it was alleged, was heavily plagiarized from a doctoral dissertation on Moravec's life written by Josef Vytlačil. In 2013, Daniel Landa portrayed Moravec in an episode of the Czech historical drama television series České století (Czech Century), titled "Den po Mnichovu (1938)" ("The Day after Munich"). The episode centres on Moravec's fierce opposition to President Beneš' capitulation to German demands, ending with their face-to-face discussion. A mid-credits scene then reveals Moravec's later conversion to Nazi ideology. The episode suggests an ambivalence between genuine and pathological national pride, underlined by the casting of Landa, himself a well-known and controversial nationalist.

## Publications

- Moravec, Emanuel. (1936). Obrana státu ("National Defence"). Prague: Svaz čs. důstojnictva.
- Moravec, Emanuel. (1936). Válečné možnosti ve střední Evropě a válka v Habeši ("Military Capabilities in Central Europe and the Abyssinian War"). Prague: Svaz čs. důstojnictva. 
- Moravec, Emanuel. (1939). Válečné možnosti ve střední Evropě a válka v Habeši ("Military Capabilities in Central Europe and the Abyssinian War"). Prague: Orbis Verlag.
- Moravec, Emanuel. (1940). Děje a bludy ("Ideas and Delusions"). Prague: Orbis Verlag.
- Moravec, Emanuel. (1941). O smyslu dnešní války; cesty současné strategie ("The Meaning of Today's War"). Prague: Orbis Verlag.
- Moravec, Emanuel. (1941). V úloze mouřenína: Československá tragedie 1938 ("In the Role of the Moor: the Czechoslovak Tragedy of 1938"). Prague: Orbis Verlag.
- Moravec, Emanuel. (1942). Tři roky před mikrofonem ("Three Years in Front of the Microphone"). Prague: Orbis Verlag.
- Moravec, Emanuel. (1942). Tatsachen und Irrtümer ("Facts and Errors"). Prague: Orbis Verlag.
- Moravec, Emanuel. (1943). O český zítřek ("About Tomorrow's Czechia"). Prague: Orbis Verlag.

## See also

- Josef Ježek, interior minister of the Protectorate of Bohemia and Moravia
1,257,177
Central Coast Mariners FC
1,173,797,387
Association football club in Gosford, Australia
[ "2004 establishments in Australia", "A-League Men teams", "Association football clubs established in 2004", "Central Coast Mariners FC", "Soccer clubs on the Central Coast, New South Wales" ]
Central Coast Mariners Football Club is an Australian professional football club based in Gosford, on the Central Coast of New South Wales. It competes in the A-League Men, under licence from the Australian Professional Leagues (APL). The Mariners were founded in 2004, are one of the eight original A-League teams, and were the first professional sports club from the Gosford region to compete in a national competition. Despite being considered one of the smallest-market clubs in the league, the Central Coast Mariners have claimed two A-League Championships from five Grand Final appearances and topped the table to win the A-League Premiership twice. The club has also appeared in the AFC Champions League five times. The club plays matches at Central Coast Stadium, a 20,059-seat stadium in Gosford; its purpose-built training facility, the Mariners Centre of Excellence, is located in the suburb of Tuggerah. The facility is also home to a youth team that competes in the A-League Youth. The Mariners' main supporters' group is known as the Yellow Army, for the colour of the club's home kit. The club shares a rivalry with Newcastle Jets, known as the F3 Derby, after the previous name of the highway that connects the cities of the teams. Matt Simon is the Mariners' all-time leading goalscorer as of May 2022, with 66 goals in all competitions. The team record for matches played is held by John Hutchinson, who has appeared in 263 games for the Mariners.

## History

### Formation (2004)

Central Coast Mariners' bid for a franchise in Football Federation Australia's new A-League competition was aimed at filling the one spot for a regional team designated by the FFA. Media speculation prior to the announcement of the franchises in the new league suggested that the Mariners' bid might be favoured due to its new blood. Backing from former Australian international player and club technical director Alex Tobin, as well as Clean Up Australia personality Ian Kiernan—who would act as inaugural club chairman—also strengthened its proposal. As the only regional bidder, Central Coast was expected to make it into the league by default. After reportedly signing a deal with the FFA, the club appointed former Northern Spirit coach Lawrie McKinna as manager and Ian Ferguson, a former Rangers and Northern Spirit player, as coach. To aid the FFA's goal of building the profile of the sport, the Mariners created formal links with local state league team Central Coast United. On 1 November 2004, after much expectation, the club was announced as one of eight teams to become part of the FFA's domestic competition, the A-League. The decision made Central Coast Mariners the first Gosford-based professional sports team to play in a national competition. At the time of the formation of the new league in 2004, the club was owned by Spirits Sports and Leisure Group. The club announced its search for a star player under the league's allowance for one star player outside of the \$1.5 million salary cap, insisting that the player should not look at the position as a retirement fund. Coach Lawrie McKinna sought interest from Australia national football team players Ante Milicic and Simon Colosimo, and announced that he might sign more than the three under-20 players required by league rules. Early concerns for the club centred on financial stability, but after a partnership with technology company Toshiba and a cash injection from local businessman John Singleton, the club's financial worries were eased. 
McKinna was keen to sign local player Damien Brown of Bateau Bay, formerly of the Newcastle Jets. In a decision which prompted the player to declare that he was "over the moon", Brown became the first player to sign with the club. Club chairman Lyall Gorman was pleased that a local had become a "foundation player", and part of Brown's role would be to assist with the selection of younger players from the local area. By early December 2004, the club had created a steady foundation of player signings and began negotiations with former Perth Glory striker Nik Mrdja, signing him later in the month as its star attacker. Mrdja was one of the most prominent players in the last season of the National Soccer League, scoring the final goal to secure Perth Glory's finals win. The club's management was reluctant to sign a star player outside of the \$1.5 million salary cap, stipulating that they "would have to contribute on the pitch and get people to come to the ground."

### Lawrie McKinna era (2004–2010)

The Mariners' inaugural season was considered a resounding success by most; the team reached the 2006 A-League Grand Final after finishing third during the regular season. Central Coast was defeated by Sydney FC 1–0 in front of a crowd of 41,689—a competition record at the time. The Mariners also won the 2005 Pre-Season Cup, defeating Perth Glory in the final 1–0. Before the 2006–07 A-League season, the Mariners secured the services of then-Australian international Tony Vidmar from NAC Breda for two years. This was the club's first marquee signing, following the lead of Sydney FC (Dwight Yorke) and Adelaide United (Qu Shengqing). Central Coast again reached the grand final in the 2006 Pre-Season Cup, losing to Adelaide United 5–4 on penalties after the score was tied 1–1 after extra time. The Mariners then participated in the 2006–07 A-League season, but were unable to gain a spot in the final series, finishing sixth after the regular season. Club captain Noel Spencer was released by the Mariners after the 2006–07 season and signed by Sydney FC for its Asian Champions League campaign; Alex Wilkinson was appointed the new captain. Only 22 years of age at the time, Wilkinson had played every possible competitive match for the Mariners up to his appointment. In February 2008, Central Coast Mariners signed an agreement with English Football League Championship side Sheffield United. The partnership was one of several connections the Mariners made with foreign clubs; other partner clubs included Ferencváros of Hungary, Chengdu Blades of China and São Paulo of Brazil. The agreement benefits the club by providing an opportunity for the youth programme and senior side to draw from the roster of Sheffield United through transfers. The teams also formed a property development joint venture, in the hope that Central Coast could use its share of the income to expand and bolster its Mariners Youth Academy. The 2007–08 season saw Central Coast win its first premiership on goal difference ahead of Newcastle, following a final round that began with Central Coast and three other clubs level on 31 points. The final series began with a 2–0 loss to Newcastle in the first leg of its major semi-final, but the Mariners forced the tie to extra time by holding a 2–0 lead in the second leg after 90 minutes. A 94th-minute goal by Sasho Petrovski, who had scored earlier to level the tie, gave Central Coast a 3–2 win on aggregate, putting the Mariners through to the 2008 A-League Grand Final. 
In a rematch with Newcastle, the Jets defeated Central Coast 1–0 in the Grand Final, which ended in controversy due to an uncalled handball by a Newcastle player in the Newcastle penalty area during the closing seconds of the match. If called, the foul would have given Central Coast a penalty kick and a chance to equalise. As Mariners players disputed referee Mark Shield's decision, goalkeeper Danny Vuković struck Shield on the arm, resulting in an immediate sending off and later suspension. Vuković was suspended from both domestic and international competition for nine months, with an additional six-month suspended ban; the latter period was reduced to three months on appeal. Despite further appeals, the ban was eventually confirmed by FIFA in June, including a bar on the young keeper competing at the 2008 Olympic Games. The ban lasted into October; in response, Central Coast signed former Manchester United and Australian international keeper Mark Bosnich on a seven-week contract. Before the 2008–09 season, Central Coast was predicted to be among the A-League leaders, but had a run of three losses in a row to end the regular season. Even with the losing streak, the club narrowly qualified for the finals, finishing fourth, two points ahead of Sydney FC and Wellington Phoenix. Central Coast lost 4–1 on aggregate in their minor semi-final against Queensland Roar, ending the team's season.

### Graham Arnold and Phil Moss era (2010–2015)

In February 2010, following the club's 2009–10 season, McKinna chose to move into a new role, becoming Central Coast's Football and Commercial Operations Manager. Socceroos assistant manager Graham Arnold was appointed as the club's second manager. In the lead-up to the 2010–11 season, numerous transfers resulted in changes to the club's squad. The Mariners announced the signing of 2005 Under 20s World Cup winner Patricio Pérez of Argentina in June 2010, followed by Dutch defender Patrick Zwaanswijk. In July 2010, it was announced that the Mariners' women's team would not compete in the 2010–11 W-League competition. The club stated that financial reasons were behind the decision, after Football NSW withdrew its funding. In spite of relatively low expectations in the lead-up to the season, the 2010–11 season was more successful for the club than 2009–10; the A-League and youth league teams both finished second in their respective leagues in the regular season. The senior team was then defeated by the premiers, Brisbane Roar, 4–2 on aggregate over two legs in the major semi-final, before defeating Gold Coast United 1–0 in the Preliminary Final to qualify for the 2011 A-League Grand Final against Brisbane. By reaching the Grand Final, the club also qualified for the 2012 AFC Champions League. In a championship match that the A-League's website called "classic", Central Coast was defeated 4–2 in a penalty shootout after leading 2–0 with three minutes remaining in extra time to finish runners-up for the third time. The 2011–12 season was similarly successful, as the club won the premiership for the second time in its history with 51 points, two more than second-place Brisbane. The club failed to qualify for a second successive Grand Final, though, losing 5–2 on aggregate to Brisbane in the major semi-final and 5–3 on penalties after a 1–1 draw with Perth Glory in the Grand Final Qualifier. 
On 21 April 2013, after three losses in Grand Finals, Central Coast won its first A-League title, defeating first-year side Western Sydney Wanderers 2–0 in the Grand Final at Allianz Stadium. Arnold re-signed with the club for a further two seasons on 30 August 2013, but on 14 November it was confirmed that he had signed a two-year contract to become manager of J. League Division 1 side Vegalta Sendai, starting in January 2014. Former assistant manager Phil Moss was named the new head coach. Mariners general manager Peter Turnbull left the club as well, and New Zealand international Michael McGlinchey moved to the J. League to play for Arnold's new side. Central Coast finished the 2013–14 A-League regular season in third place, behind runner-up Western Sydney on goal difference. In the semi-final, the Mariners' championship hopes ended with a 2–0 loss to Western Sydney; the game came three days after the team was eliminated from the 2014 AFC Champions League after losing to Japanese club Sanfrecce Hiroshima 1–0 to finish last in their group. In what was Moss's first pre-season as coach, he did little to change what Arnold had built at the club. The only major changes in the side were with the addition of Senegalese international Malick Mané and Hungarian Richárd Vernes, and Marcos Flores leaving the club, with Mile Sterjovski retiring. Mariners began the season on a high, progressing to the semi-finals of the 2014 FFA Cup and defeating local rivals Newcastle Jets 1–0 at home in the opening round of the A-League. However the season soon turned with the team failing to secure a win for the remainder of the year. After their elimination from the 2015 AFC Champions League qualifying play-off by Chinese side Guangzhou R&F and a continued poor league record after a short mid-season break, the club stood down Moss as head coach. The decision was made on 6 March 2015, with Mariners appointing technical director Tony Walmsley in an interim capacity and captain John Hutchinson in a dual player-coach role, until the end of the season. Portuguese player Fábio Ferreira also joined the team at the tail end of the season. On 15 April Walmsley was announced as Mariners' permanent technical director and head coach for the 2015–16 season. The announcement came despite an end to the season in which the club finished the league in eighth position. ### Post-Arnold Era (2015–2020) The Mariners had their equal-worst A-League performance to date in the 2015–16 season. Their 13 points, the fewest in club history, resulted in a last-place finish, and they set a league record by losing 20 games while winning only 3, a record low for the Mariners. Central Coast allowed 70 goals, the most in league history, and had a goal difference of −37, the worst by an A-League team. The Mariners' totals of goals conceded at home and away (32 and 38 respectively) were also A-League records, and they went the entire season without a clean sheet. In the 2016 FFA Cup, the Mariners suffered a 2–1 loss to Green Gully SC at Green Gully Reserve, becoming just the second A-League team to be eliminated by a state league team in the FFA Cup. Following this loss the club sacked Walmsley on 8 August 2016, with coaching duties in the leadup to the 2016–17 season taken up by assistant coach John Hutchinson in a caretaker role. On 29 August 2016, Paul Okon was hired as Central Coast's full-time coach, succeeding the sacked Tony Walmsley. 
In Okon's debut as Central Coast manager, the Mariners drew 3–3 with Perth Glory at Nib Stadium, after coming back from 3–0 down at half time. Okon achieved his first win as Central Coast manager in his fifth game in charge: a 2–1 win over defending champions Adelaide United at Hindmarsh Stadium on 6 November 2016. However, the Mariners ended the season in eighth. On 2 August 2017, for the second consecutive year, the Mariners were knocked out of the FFA Cup by a state league team in the first round, after losing 3–2 to Blacktown City. During the 2017–18 A-League season, the Mariners were in the top four at one stage, but after a run of 11 games without a win the club dropped down the table. Okon resigned as manager with Central Coast in ninth entering the last four rounds of the regular season; Wayne O'Sullivan served as an interim manager following Okon's departure. With a six-game losing streak at the end of the season, the team finished last for the second time in three years. Former Brisbane manager Mike Mulvey was hired by Central Coast in 2018. In the first 21 matches of the 2018–19 A-League season, the Mariners won only once. Mulvey was replaced as manager by Alen Stajcic, the former head coach of the Australia women's national team. Despite two wins in his six games as caretaker manager, the Mariners were unable to avoid finishing at the bottom of the table again. Stajcic was given a three-year contract after the season. On 4 August 2020, after playing their last game of the 2019–20 season, the Mariners were put up for sale by owner Michael Charlesworth, putting the club at risk of leaving the Central Coast. If no buyer was found, the Mariners' A-League licence would be handed back to the FFA.

### Resurgence (2020–present)

In his second full season at the club, Stajcic made some significant signings, re-acquiring the services of former player Oliver Bozanic on 21 October 2020 after he had left Scottish club Hearts, and signing Costa Rican international Marco Urena on 22 December 2020 after he had left South Korean club Gwangju FC. The season began well, with the Mariners beating rivals Sydney FC in Sydney for the first time in seven years. In a remarkable turnaround, the Mariners sat in first place after 16 rounds but dropped points in the later rounds to finish third, qualifying the club for its first finals appearance in seven years. They then lost to Macarthur FC 2–0 in the elimination finals on 12 June 2021. On 17 June 2021, Stajcic resigned from the club. His replacement, Nick Montgomery, was announced on 7 July 2021. Montgomery's first season built on the success of the previous campaign. He took the club to its first ever FFA Cup Final, where they lost 2–1 to Melbourne Victory on 5 February 2022. The Mariners also finished fifth in the A-League, which qualified the club for a second consecutive finals series. They were again knocked out in the elimination finals, this time by Adelaide United, losing 3–1 on 15 May 2022. On 10 June 2022, the club announced that it had retained the services of Montgomery and assistant coach Sergio Raimundo until at least 2025. On 3 June 2023 the Mariners played Melbourne City FC in the 2023 A-League Men Grand Final. The Mariners defeated Melbourne City 6–1 to win their second A-League Championship, and their first in a decade, with Jason Cummings winning the Joe Marston Medal for best on ground. 
## Colours and badge

The home jersey worn by the Mariners is mostly yellow with navy blue sleeves. The away uniform is a mostly plain navy blue jersey with yellow as a secondary colour. In the 2011–12 season, the club had its kits manufactured by Hummel, as the A-League's Reebok deal had expired at the conclusion of the 2010–11 season. In September 2012, it was announced that the Mariners had signed a two-year deal with Kappa to become the club's official apparel supplier. The team logo is a yellow football at the centre of a blue curling wave, which symbolises the beaches of the Central Coast. Since 2012, the Mariners have worn special pink kits for one match in October to raise money and awareness for Pink Ribbon Day, part of National Breast Cancer Awareness Month. The club collected donations at the ground and auctioned the match-worn kits on online auction site eBay, with proceeds going to the charity.

### Kit Evolution

- Home

## Sponsorship

## Stadium

Central Coast Mariners play home games at Central Coast Stadium, Gosford. It is located in Grahame Park, between the Gosford Central Business District and the Brisbane Water foreshore. It is constructed to make the most of its location, being open at the southern end to give filtered views of Brisbane Water through a row of large palm trees. It is within walking distance of Gosford railway station and is adjacent to the Central Coast Leagues Club. The stadium has a capacity of 20,059, and the highest attendance for a Mariners game was a sell-out crowd of 20,059 against Adelaide United in the second leg of the 2022–23 semi-final. Difficulties in drawing spectators led the Mariners to schedule matches in the 2013–14 and 2014–15 seasons away from Central Coast Stadium, at North Sydney Oval and Brookvale Oval. The club's goal was to play closer to its fan base in north Sydney, which majority owner Michael Charlesworth estimated to be about 20% of its total supporters. After attendances at North Sydney Oval proved similar to those at Central Coast Stadium, Football Federation Australia CEO David Gallop suggested in December 2014 that the club was unlikely to be permitted to continue playing in north Sydney.

## Supporters and rivalries

The active supporters' group for the Mariners is called the Yellow Army, who sit in bay 16 of Central Coast Stadium during home games. In addition to the Yellow Army, there is a Central Coast Mariners Official Supporters Club, which was established in 2013. The Central Coast region has about 300,000 residents, which gives the Mariners the A-League's smallest local fan base. Accordingly, the Mariners acquired a small-market image among commentators. From their first season, the Mariners developed a strong rivalry with Newcastle Jets, often referred to as the F3 Derby. The name refers to the F3, the designation previously used for the Sydney–Newcastle Freeway, the major motorway which joins the two clubs' cities. The rivalry's origins date back to before the teams played against each other in the A-League. A May 2005 Oceania Club Championship qualification match, which went to a penalty shootout that the Mariners won, helped create hostility between the sides. In the game, a tackle by Central Coast's Mrdja broke the leg of Newcastle player Andrew Durante, causing him to miss the following A-League season; Mrdja offered no apology for the tackle, upsetting Jets players.
Fans of the clubs battled verbally before and after one 2011 derby match, leading the *Newcastle Herald*'s Josh Leeson to call their actions "immature and laughable." In more recent seasons, the F3 Derby has gained less attention in the press than the derbies in Melbourne and Sydney, but Central Coast player Nicholas Fitzgerald maintains that "the players and fans still take it very seriously." Central Coast also have a rivalry with Sydney FC. Like Newcastle, Sydney FC is in close proximity to Central Coast. In 2006, the *Central Coast Express Advocate*'s Richard Noone called the Central Coast–Sydney rivalry "arguably A-League's fiercest".

## Affiliated clubs

Through an investment in the Mariners by Sheffield United, the club has the following international affiliations:

- Sheffield United
- São Paulo
- Ferencváros

In addition, the club has a player development partnership with the following international clubs:

- Everton
- Southern

The club previously had formal relationships with the following organisations in Australia:

- Northbridge (as North Shore Mariners Academy, 2014–2020)

## Players

### First-team squad

### Youth

### Retired numbers

- 19 – Matt Simon (forward, 2006–12, 2013–15, 2018–22)

## Club officials

### Management

### Technical staff

### Managers

## Club captains

## Records

John Hutchinson holds the club record for most appearances, with 271 matches in all competitions. Former captain Matt Simon has the second-most appearances for the club with 238 matches, and Alex Wilkinson is the third-most capped player with 206 appearances. As of 2020, Central Coast's all-time highest goalscorer in all competitions is Matt Simon with 66 goals, twenty-three more than Adam Kwasnik. Daniel McBreen has scored the third-most goals for the club with 30. Central Coast's highest attendance at its home stadium, Central Coast Stadium, for a regular-season match is 19,238 against Newcastle Jets in their round 19 match of the 2007–08 season. This was the second-highest crowd at the ground for any sport since the first match at Central Coast Stadium in February 2000.

## Continental record

## Honours

### A-League

- A-League Men Championship
  - Winners (2): 2013, 2023
  - Runners-up (3): 2006, 2008, 2011
- A-League Men Premiership
  - Winners (2): 2007–08, 2011–12
  - Runners-up (3): 2010–11, 2012–13, 2022–23

### Cups

- FFA Cup
  - Runners-up (1): 2021
- A-League Pre-Season Challenge Cup
  - Winners (1): 2005
  - Runners-up (1): 2006

### The Mariners Medal (Player of the Year)

## Team of the decade

## See also

- Central Coast Mariners FC (W-League)
- Central Coast Mariners Academy
- List of Central Coast Mariners FC seasons
33,466,711
United States Assay Commission
1,168,263,771
Agency of the US government (1792–1980)
[ "1792 establishments in the United States", "1980 disestablishments in the United States", "Organizations disestablished in 1980", "Organizations established in 1792", "Product-testing organizations", "United States Mint" ]
The United States Assay Commission was an agency of the U.S. federal government from 1792 to 1980. Its function was to supervise the annual testing of the gold, silver, and (in its final years) base metal coins produced by the United States Mint to ensure that they met specifications. Although some members were designated by statute, for the most part the commission, which was freshly appointed each year, consisted of prominent Americans, including numismatists. Appointment to the Assay Commission was eagerly sought after, in part because commissioners received a commemorative medal. These medals, different each year, are extremely rare, with the exception of the 1977 issue, which was sold to the general public.

The Mint Act of 1792 authorized the Assay Commission. Beginning in 1797, it met in most years at the Philadelphia Mint. Each year, the president appointed unpaid members, who would gather in Philadelphia to ensure the weight and fineness of silver and gold coins issued the previous year were to specifications. In 1971, the commission met, but for the first time had no gold or silver to test, with the end of silver coinage. Beginning in 1977, President Jimmy Carter appointed no members of the public to the commission, and in 1980, he signed legislation abolishing it.

## History

### Founding and early days (1792–1873)

In January 1791, Treasury Secretary Alexander Hamilton submitted a report to Congress proposing the establishment of a mint. Hamilton concluded his report:

> The remedy for errors in the weight and alloy of the coins, must necessarily form a part, in the system of a mint; and the manner of applying it will require to be regulated. The following account is given of the practice in England, in this particular: A certain number of pieces are taken promiscuously out of every fifteen pounds of gold, coined at the Mint, which are deposited, for safe keeping, in a strong box, called the pix [sic, more commonly "pyx"]. This box, from time to time, is opened in the presence of the Lord Chancellor, the officers of the Treasury, and others, and portions are selected from the pieces of each coinage, which are melted together, and the mass assayed by a jury of the Company of Goldsmiths ... The expediency of some similar regulation seems to be manifest.

In response to Hamilton's report, Congress passed the Mint Act of 1792. In addition to setting the standards for the new nation's coinage, Congress provided for an American version of the British Trial of the Pyx:

> That from every separate mass of standard gold or silver, which shall be made into coins at the said Mint, there shall be taken, set apart by the Treasurer and reserved in his custody a certain number of pieces, not less than three, and that once in every year the pieces so set apart and reserved, shall be assayed under the inspection of the Chief Justice of the United States, the Secretary and Comptroller of the Treasury, the Secretary for the Department of State, and the Attorney General of the United States, (who are hereby required to attend for that purpose at the said Mint, on the last Monday in July in each year) ...
> and if it shall be found that the gold and silver so assayed, shall not be inferior to their respective standards herein before declared more than one part in one hundred and forty-four parts, the officer or officers of the said Mint whom it may concern shall be held excusable; but if any greater inferiority shall appear, it shall be certified to the President of the United States, and the said officer or officers shall be deemed disqualified to hold their respective offices.

The following January, Congress passed legislation changing the date on which the designated officials met to the second Monday in February. Meetings did not take place immediately; the Mint was not yet striking gold or silver. Minting of silver began in 1794 and gold in 1795, and some coins were saved for assay: the first Mint document mentioning assay pieces is from January 1796 and indicates that exactly \$80 in silver had been put aside. The first assay commissioners did not meet until Monday, March 20, 1797, a month later than the prescribed date. Once they did, annual meetings took place each year until 1980, except in 1817 as there had been no gold or silver struck since the last meeting (until 1837, the commission examined the coins since the last testing, rather than for a particular calendar year).

In 1801, the usual meeting was delayed, causing Mint Director Elias Boudinot to complain to President John Adams that depositors were anxious for an audit so the Mint could release coins struck from their bullion. Numismatist Fred Reed suggested that the delay was probably due to poor weather, making it difficult for officials to travel from the new capital of Washington, D.C., to Philadelphia for the assay. In response, on March 3, 1801, Congress changed the designation of officials required to attend to "the district judge of Pennsylvania, the attorney for the United States in the district of Pennsylvania, and the commissioner of loans for the State of Pennsylvania". The meeting finally took place on April 27, 1801. The 1806 and 1815 sessions were delayed because of outbreaks of disease in Philadelphia; the one in 1812 was held a month late because of a heavy snowstorm which prevented the commissioners from reaching the Mint. No meeting took place in 1817; a fire had damaged the Philadelphia Mint in January 1816, and no gold or silver awaited the commission. In 1818, Congress substituted the Collector of the Port of Philadelphia for the Pennsylvania loans commissioner as a member of the Assay Commission. With the Coinage Act of 1834, Congress removed the automatic disqualification of Mint officers in the event of an unfavorable assay, leaving the decision to the president. The Mint Act of 1837 established the Assay Commission in the form it would have for most of the remainder of its existence. It provided that "an annual trial shall be made of the pieces reserved for this purpose [i.e., set aside for the assay] at the Mint and its branches, before the judge of the district court of the United States, for the eastern district of Pennsylvania, the attorney of the United States, for the eastern district of Pennsylvania, and the collector of the port of Philadelphia, and such other persons as the President shall, from time to time, designate for that purpose, who shall meet as commissioners, for the performance of this duty, on the second Monday in February, annually".
The usual procedure for members of the public to be named to the commission after public appointments began was for the Mint Director to send the president a list of candidates for his approval. According to Jesse P. Watson in his monograph on the Bureau of the Mint, the admission of members of the public to the Assay Commission meant "that permanency and high official dignity were no longer characteristic of the commission". In 1861, as the American Civil War broke out, North Carolina joined the Confederate States. The Charlotte Mint, taken over by the Confederacy, eventually closed as the dies that had been shipped from the Philadelphia Mint wore out, and it could obtain no more. Nevertheless, 12 half eagles (\$5 gold coins) were sent from Charlotte to Philadelphia, through enemy lines, in October 1861. They were duly tested by the 1862 Assay Commission, and were found to be correct. In 1864, with the metal nickel, then used in the cent, in short supply, Mint Director James Pollock asked that year's commission to opine on a substitute for the copper-nickel used in the cent. The members endorsed French bronze (95% copper and 5% tin or zinc) as a metal to be used in the cent and a proposed two-cent piece. Pollock sent the conclusions to Treasury Secretary Salmon P. Chase, who forwarded them (and draft legislation) to Maine Senator William P. Fessenden, chairman of the Senate Finance Committee. The Coinage Act of 1864 was signed by President Abraham Lincoln on April 22, 1864.

### Later years (1873–1949)

The Coinage Act of 1873 revised the laws relating to coinage and the Mint and retired several denominations including the two-cent piece. The act also changed the officers required to serve on the Assay Commission:

> That to secure a due conformity in the gold and silver coins to their respective standards of fineness and weight, the judge of the district court of the United States for the eastern district of Pennsylvania, the Comptroller of the Currency, the assayer of the assay-office at New York, and such other persons as the President shall, from time to time, designate, shall meet as assay-commissioners, at the mint in Philadelphia, to examine and test, in the presence of the Director of the Mint, the fineness and weight of the coins reserved by the several mints for this purpose, on the second Wednesday in February, annually.

The act also required the Mint to put aside one of every thousand gold coins struck, and one of every two thousand silver coins for the assay. It provided the procedure for putting the coins aside, sealing them in envelopes, and placing them in a pyx to be opened by the assay commissioners. The 1881 Assay Commission found that approximately 3,000 silver dollars struck at the Carson City Mint (1881-CC) had been struck in .892 silver rather than the legally mandated .900. It is unclear if the Treasury took any steps to attempt to recover the issued pieces. The 1885 commission detected a single silver dollar which was 1.51 grains (0.098 g) below specifications, the permitted tolerance being 1.50 grains (0.097 g). In 1921, the Assay Commission found that some coins struck at the Denver Mint were struck in .905 or .906 silver, above the legal .900 by more than the permitted tolerance. Investigation found that ingots which had been rejected and were intended for melting had instead been used for coin. In the early 20th century, the San Francisco Mint struck silver coins for the Philippines, then a US possession; those pieces were included in the assay.
Proof coins struck by the Mint for collectors were included in the assay; pieces struck under contract with foreign governments were not. The pyx was a rosewood box, 3 feet (0.91 m) square, of European work, and sealed by heavy padlocks. It was not filled by the coins put aside for the 1934 Assay Commission, of which there were 759 with a total face value of \$12,050. This had increased by 1940 to 79,847 coins, all silver, as gold coins were no longer being struck, and by 1941, many reserved coins could not be kept in the pyx, instead being placed in packing boxes, overflowing with sealed envelopes. By the late 1940s, more than ten million coins were being struck each day at Philadelphia alone; in 1947, Congress reduced the number of silver coins required to be put aside for assay from one in 2,000 to one in 10,000. This was done at the urging of the Department of the Treasury, as having to store so many assay coins was a burden to the Mint, and it felt that the number of coins available to the commission would still be sufficient.

### Final years and abolition (1950–1980)

By the 1950s, there was considerable competition among numismatists to be appointed an assay commissioner. Appointees received no compensation, but the appointment was prestigious and carried with it a prized assay medal. The procedure was changed so that the Mint Director submitted the names of more individuals than would actually be appointed to the White House, where the final choices were made. It remained possible for the director to ask for special consideration for certain individuals. Later nominations were also screened by the Federal Bureau of Investigation and by the IRS. The Mint Director received nominations for assay commissioner from legislators, political organizations, government officials, and from members of the public.

In 1971, for the first time, the Assay Commission had no silver coins to test; none were struck by the Mint for circulation in 1970. Although part-silver Kennedy half dollars were struck in 1970, they were only for collectors and were not put aside for assay. Commissioners could instead test 21,975 dimes and 11,098 quarters, all made from copper-nickel clad, though as the Associated Press, reporting on the 1973 Assay Commission, put it, "a discovery of a bum coin hasn't occurred in years." Only one in every 100,000 clad or silver-clad pieces was put aside for the Assay Commission, and only one in every 200,000 dimes. At the 1974 meeting, one copper-nickel Eisenhower dollar was discovered which weighed 15 grains (0.97 g) below specification; after reference to the rules, the coin was deemed barely within guidelines. Numismatist Charles Logan, in his 1979 article about the impending end of the Assay Commission, stated that this incident pointed out "the basic problem with the annual trial. First, the members were not exactly sure how their job was done, or what the requirements were. Second, they really did not want to report a fault in the coinage. Finally, even if the one dollar coin had been found faulty, [it would have had] little consequence, except to prompt greater vigilance at the Mint." In early 1977, outgoing Mint Director Mary Brooks sent a list of 117 nominees to the new president, Jimmy Carter, from which it was expected that about two or three dozen names would be selected.
Carter refused to make any public appointments, feeling the Assay Commission was unneeded given that the Mint performed the same work through routine internal checks and that the \$2,500 appropriated each year was a poor use of taxpayer money. Only government members served on the Assay Commission in 1977–1980. Even so, hundreds of numismatists applied to be on the 1978 commission. Carter made no appointments that year; the only members were those designated by statute. The 1979 meeting, attended by the government-employed commission members and Mint Director Stella Hackel Sims, was held eight days late on February 22 due to schedule conflicts. In June 1979, Carter's Presidential Reorganization Project recommended the abolition of the Assay Commission and two other small agencies. The report estimated that having an Assay Commission cost the federal government about \$20,000 and that the work was done better by vending machine manufacturers, who tested coins to keep their machines from jamming. In August, columnist Jack Anderson deemed the commission an example of wasteful spending in Washington, characterizing its activities: "more than a decade ago, the government stopped putting either gold or silver in its coins—but the commission continues to hold its annual luncheon meeting. Solemnly, the commissioners measure the amounts of nonprecious metals in U.S. coins, and strike a medal to commemorate their activities. This useless exercise costs the taxpayers about \$20,000 a year." As coin collector and columnist Gary Palmer put it in 1979, "who really cares if the weight of a cupro-nickel quarter is off by a grain or two?" On March 14, 1980, Carter approved legislation abolishing the Assay Commission, as well as the other two agencies, as recommended by his Reorganization Project. The President wrote in a signing statement that with the end of gold and silver coinage, the need for the commission had diminished. Numismatic leaders objected to the ending of the commission, considering the expense small and the tradition worth keeping, although they concurred the commission "had become an anachronism". At the time of its abolition, the Assay Commission was the oldest existing government commission. In 2000 and 2001, New Jersey Congressman Steven Rothman introduced legislation to revive the Assay Commission, stating that re-establishing the commission would assure public confidence in the gold, silver, and platinum bullion coins struck by the Mint. The bills died in committee.

## Functions and activities

The general function of the Assay Commission was to examine the gold and silver coins of the Mint and ensure they met the proper specifications. Assay commissioners were placed on one of three committees in most years: the Counting, Weighing, and Assaying Committees. The Counting Committee verified that the number of each type of coin in packets selected from the pyx matched what Mint records said should be there. The Weighing Committee measured the weight of coins from the pyx, checking them against the weight required by law. The Assaying Committee worked with the Philadelphia Mint's assayer as he measured the precious metal content of some of the coins. In some years there was a Committee on Resolutions—in 1912, it urged that a leaflet be published for visitors to the Mint's coin collection, and that a medal be struck to commemorate the collection. The full Assay Commission adopted that committee's report.
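The Weighing Committee's task amounted to a simple tolerance comparison. The following is a minimal illustrative sketch, not any official Mint procedure: it assumes the 412.5-grain statutory weight of the silver dollar of the period and reuses the 1.5-grain tolerance cited in the 1885 case above; the function and constant names are hypothetical.

```python
# Illustrative sketch only: a tolerance check of the kind the Weighing Committee performed.
# The 412.5-grain statutory silver dollar weight is assumed; the 1.5-grain tolerance is
# the figure cited for the 1885 case above.
STANDARD_DOLLAR_GRAINS = 412.5
TOLERANCE_GRAINS = 1.5

def within_tolerance(measured_grains: float,
                     standard: float = STANDARD_DOLLAR_GRAINS,
                     tolerance: float = TOLERANCE_GRAINS) -> bool:
    """Return True if the coin's weight deviates from the standard by no more than the tolerance."""
    return abs(measured_grains - standard) <= tolerance

# The 1885 commission's dollar, 1.51 grains underweight, fails the check;
# a coin 1.49 grains underweight passes.
print(within_tolerance(STANDARD_DOLLAR_GRAINS - 1.51))  # False
print(within_tolerance(STANDARD_DOLLAR_GRAINS - 1.49))  # True
```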
Congress in 1828 had required that the weights kept by the Mint Director be tested for accuracy in the presence of the assay commissioners each year. By statute passed in 1911, the commission was required to inspect the weights and balances used in assaying at the Philadelphia Mint, and to report on their accuracy. This included the government's official standard pound weight that had been brought from the United Kingdom. According to a description of the 1948 meeting, silver coins selected for assay were first placed between steel rollers until the thickness was reduced to .0001 inches (0.0025 mm), and then were chopped into fine pieces and dissolved in nitric acid. The fineness of the silver in the coin could be determined by the amount of salt solution needed to precipitate all the silver in the liquid. Numismatist Francis Pessolano-Filos described the work of the Assay Commission:

> Using balances and weights, the commission weighed several examples of each type of coin, then used calipers to examine them for proper thickness, and finally, using various acids and solvents, determined the amount of alloy used in manufacture of the planchets. Ledgers and journal books on the mint were also examined. If there were any imperfections or deviations from the legal standards in the coins examined, the information was immediately sent to the president of the United States.

The commission operated under rules first adopted by the 1856 commission, and then passed down, year to year, and amendable by any Assay Commission, although in practice little change was made. Under the rules, the Director of the Mint called the assay commissioners to order, then introduced the federal judge who was an ex officio member, who presided over the meetings; if the judge was absent, the members elected a chairman. The chairman divided the members into the committees. If there had been a change of officers at a mint, commissioners examined coins from before and after. After the committees completed their work, the members re-assembled to report their findings and to vote on their report. Every Assay Commission passed the coinage that it was called upon to examine. If pieces varying from the standard were found, that was also noted; the 1885 Assay Commission reported the one substandard silver coin, which came from the Carson City Mint, but urged the president to take no action, noting that the coin was underweight by an amount too small to be measured by the scales at Carson City. Remains of coins used in the assay were melted by the Mint; those put aside for the Assay Commission that were not used were placed in circulation from Philadelphia, and were not marked or distinguished in any way. There were thousands of coins for the commission, of which only a few were assayed. Commissioners often purchased some of the remaining pieces as souvenirs, although commemorative coins could not be purchased if Congress had given the exclusive right to sell them to a sponsoring organization—they were instead destroyed.

## Commissioners

Appointments of members of the public to the Assay Commission by the president are known to have been made as early as 1841; the final ones were made in 1976. Many early commissioners were chosen for their scientific or intellectual attainments. Such qualifications were not required of later public appointees, who included such prominent figures as Ellin Berlin, wife of songwriter Irving Berlin. The first women to be appointed to the Assay Commission were Mrs.
Kellogg Fairbanks of Chicago and Mrs. B.B. Munford of Richmond, Virginia, both in 1920. The recordholder for service as a commissioner is Herbert Gray Torrey, 36 times an assay commissioner between 1874 and 1910 (missing only 1879) by virtue of his office as assayer of the New York Assay Office. The recordholder as a presidential appointee is Dr. James Lewis Howe, head of the Department of Chemistry at Washington and Lee University, 18 times an assay commissioner, serving in 1907 and then each year from 1910 to 1926. An employee of the National Bureau of Standards was included in the presidential appointments each year; he brought with him the weights used in the assay, which were checked by the agency in advance. Although no future president served as an assay commissioner, Comptroller of the Currency Charles G. Dawes was a commissioner in 1899 and 1900; he was Vice President of the United States from 1925 to 1929. Among those appointed was coin collector and Congressman William A. Ashbrook, 14 times an assay commissioner between 1908 and 1934. Ashbrook's presence on the 1934 Assay Commission has led to speculation that he might have used his position as an assay commissioner (he left Congress in 1921) to secure one or more 1933 Saint-Gaudens double eagles, almost all of which were melted due to the end of gold coinage for circulation. Assay commissioners were traditionally allowed to purchase coins from the pyx that were not assayed, and numismatic historian Roger Burdette speculates that Ashbrook, generally well-treated by the Treasury Department due to his onetime congressional position, might have exchanged other gold pieces for the 1933 coins. The three known specimens of the 1873-CC quarter, without arrows by the date, and the only known dime of that description, may have been salvaged from assay pieces, as the remainder of those coins had been ordered melted as underweight. A similar mystery attends the 1894 Barber dime struck at San Francisco (1894-S) of which the published mintage is 24, although it is not certain whether this total includes the one sent to Philadelphia to await the 1895 Assay Commission. The fact that one of the 1895 assay commissioners was Robert Barnett, chief clerk of the San Francisco Mint, has led numismatic writers Nancy Oliver and Richard Kelly to speculate that he may have been made an assay commissioner in order to retrieve the dime. The 1895 Assay Commission report confirms that the dime was there, as it was counted by the Counting Committee. The dime is not mentioned as having been either weighed or assayed; Oliver and Kelly, in a May 2011 article in The Numismatist, suggest that Barnett used that privilege of assay commissioners to obtain the rarity. He is not known, however, to have written or spoken of the matter before his murder in 1904. In 1964, former assay commissioners formed the Old Time Assay Commissioners Society (OTACS). When President Carter stopped appointing public members to the commission in 1977, the OTACS fundraised in an unsuccessful attempt to induce the government to continue that tradition. The society met annually through 2012, usually at the site of the yearly convention of the American Numismatic Association (ANA). With the number of surviving OTACS members at less than three dozen, the society plans no further meetings; its 2012 session in conjunction with the ANA convention in Philadelphia included an event at the mint. 
## Medals

Assay Commission medals were struck from a variety of metals, including copper, silver, bronze, and pewter. The first Assay Commission medals were struck in 1860 at the direction of Mint Director James Ross Snowden. The initial purpose in having medals struck was not principally to provide keepsakes to the assay commissioners, but to advertise the Mint's medal-striking capabilities. The nascent custom lapsed when Snowden left office in 1861. Numismatists R.W. Julian and Ernest E. Keusch, in their work on Assay Commission medals, theorize that the resumption of Assay Commission medal striking in 1867 was at the request of Mint Engraver James B. Longacre to new Mint Director William Millward. Medals to be given to assay commissioners were struck each year after that until public members ceased to be appointed to the Assay Commission in 1977.

Early assay medals featured on the obverse a bust of Liberty or figure of Columbia, and on the reverse a wreath surrounding the words "annual assay" and the year. The 1870 obverse, by Longacre's successor William Barber, features Moneta surrounded by implements of the assay, such as scales and the pyx. The distinctive designs for each year would sometimes be topical—the 1876 medal bears a design for the centennial of American Independence, and 1879's depicted the recently deceased Mint Director Henry Linderman. Beginning in 1880, they most often featured the president or Treasury Secretary. The medals in 1901 and from 1903 to 1909 were rectangular, a style popular at the time. The 1920 reverse, by Engraver George T. Morgan, had a design which symbolized the ending of World War I; in 1921, an extra medal was struck in gold, given by the assay commissioners to outgoing President Woodrow Wilson as a mark of respect. The 1936 issuance was a mule of the Mint's medals for the president at the time, Franklin Roosevelt, and the first president, George Washington. Bearing the words "annual assay 1936" on the edge, the medal was prepared in this manner by order of Mint Director Nellie Tayloe Ross after Mint officials realized that they had forgotten to prepare a special design for an assay medal. The 1950 medal illustrates a meeting of three 1792 officeholders (Secretary of the Treasury Hamilton, Secretary of State Thomas Jefferson and Chief Justice John Jay). Although they were officeholders designated by the Mint Act of 1792, no assay took place until 1797, by which time all three had left those offices. There was no specially designed medal in 1954; instead, the assay commissioners, who met in Philadelphia on Lincoln's Birthday, February 12, 1954, chose to receive the Mint's standard presidential medal depicting Abraham Lincoln, with the commissioner's name on the edge. The final medals, 1976 and 1977, were oval and of pewter. The 1977 medal, depicting Martha Washington, was not needed for presentation, as no public assay commissioners were appointed. They were presented to various Mint and other Treasury officials, and when there was public objection, more were struck and were placed on sale for \$20 at the mints and other Treasury outlets in 1978. Material was available for about 1,500 medals, and they were initially not available by mail. They were still available in person, and by mail order, in 1980. All Assay Commission medals are extremely rare. Except for the 1977 medal, none is believed to have been issued in a quantity greater than 200, and in most years fewer than 50 were struck.
Additional copies of several 19th-century issues are known to have been illicitly struck; the Mint ended such practices in the early 20th century. The obverse of the 1909 issue, depicting Treasury Secretary George Cortelyou, was reused as Cortelyou's entry in the Mint's series of medals honoring Secretaries of the Treasury. The later pieces were struck with a blank reverse, but in the early 1960s, the reverse design from the Assay Commission issue was used with the Cortelyou obverse, and an unknown number sold to the public. The restrikes are said to be less distinctly struck than the originals.
318,677
George W. Romney
1,171,340,183
American business executive and politician (1907–1995)
[ "1907 births", "1995 deaths", "20th-century American businesspeople", "20th-century American politicians", "20th-century Mormon missionaries", "Activists for African-American civil rights", "American Mormon missionaries in the United Kingdom", "American Motors people", "American automotive pioneers", "American chief executives in the automobile industry", "American leaders of the Church of Jesus Christ of Latter-day Saints", "American lobbyists", "American nonprofit executives", "Automotive businesspeople", "Burials in Michigan", "Businesspeople from Salt Lake City", "Candidates in the 1964 United States presidential election", "Candidates in the 1968 United States presidential election", "Centrism in the United States", "Delegates to the 1961–1962 Michigan Constitutional Convention", "George W. Romney", "Latter Day Saints from Idaho", "Latter Day Saints from Michigan", "Latter Day Saints from Utah", "Mexican Latter Day Saints", "Mexican emigrants to the United States", "Mexican people of American descent", "Mitt Romney", "Nixon administration cabinet members", "Patriarchs (LDS Church)", "People from Bloomfield Hills, Michigan", "People from Colonia Dublán", "People from Oakley, Idaho", "Politicians from Salt Lake City", "Regional representatives of the Twelve", "Republican Party governors of Michigan", "Romney family", "United States Secretaries of Housing and Urban Development" ]
George Wilcken Romney (July 8, 1907 – July 26, 1995) was an American businessman and politician. A member of the Republican Party, he served as chairman and president of American Motors Corporation from 1954 to 1962, the 43rd governor of Michigan from 1963 to 1969, and 3rd secretary of Housing and Urban Development from 1969 to 1973. He was the father of Mitt Romney, former governor of Massachusetts and 2012 Republican presidential nominee who currently serves as United States senator from Utah; the husband of 1970 U.S. Senate candidate Lenore Romney; and the paternal grandfather of current Republican National Committee chair Ronna McDaniel. Romney was born to American parents living in the Mormon colonies in Mexico; events during the Mexican Revolution forced his family to flee back to the United States when he was a child. The family lived in several states and ended up in Salt Lake City, Utah, where they struggled during the Great Depression. Romney worked in a number of jobs, served as a Mormon missionary in the United Kingdom, and attended several colleges in the U.S. but did not graduate from any of them. In 1939, he moved to Detroit and joined the American Automobile Manufacturers Association, where he served as the chief spokesman for the automobile industry during World War II and headed a cooperative arrangement in which companies could share production improvements. He joined Nash-Kelvinator Corporation in 1948, and became the chief executive of its successor, American Motors, in 1954. There he turned around the struggling firm by focusing all efforts on the compact Rambler car. Romney mocked the products of the "Big Three" automakers as "gas-guzzling dinosaurs" and became one of the first high-profile, media-savvy business executives. Devoutly religious, he presided over the Detroit stake of the Church of Jesus Christ of Latter-day Saints. Having entered politics in 1961 by participating in a state constitutional convention to rewrite the Michigan Constitution, Romney was elected Governor of Michigan in 1962. Re-elected by increasingly large margins in 1964 and 1966, he worked to overhaul the state's financial and revenue structure, greatly expanding the size of state government and introducing Michigan's first state income tax. Romney was a strong supporter of the American Civil Rights Movement. He briefly represented moderate Republicans against conservative Republican Barry Goldwater during the 1964 U.S. presidential election. He requested the intervention of federal troops during the 1967 Detroit riot. Initially a front runner for the Republican nomination for president of the United States in the 1968 election cycle, he proved an ineffective campaigner and fell behind Richard Nixon in polls. After a mid-1967 remark that his earlier support for the Vietnam War had been due to a "brainwashing" by U.S. military and diplomatic officials in Vietnam, his campaign faltered even more and he withdrew from the contest in early 1968. After being elected president, Nixon appointed Romney as Secretary of Housing and Urban Development. Romney's ambitious plans, which included housing production increases for the poor and open housing to desegregate suburbs, were modestly successful but often thwarted by Nixon. Romney left the administration at the start of Nixon's second term in 1973. Returning to private life, he advocated volunteerism and public service and headed the National Center for Voluntary Action and its successor organizations from 1973 through 1991. 
He also served as a regional representative of the Twelve within his church.

## Early life and background

Romney's grandparents were polygamous Mormons who fled the United States with their children owing to the federal government's prosecution of polygamy. His maternal grandfather was Helaman Pratt (1846–1909), who presided over the Mormon mission in Mexico City before moving to the Mexican state of Chihuahua and who was the son of original Mormon apostle Parley P. Pratt (1807–1857). In the 1920s, Romney's uncle Rey L. Pratt (1878–1931) played a major role in the preservation and expansion of the Mormon presence in Mexico and in its introduction to South America. A more distant kinsman was George Romney (1734–1802), a noted portrait painter in Britain during the last quarter of the 18th century. Romney's parents, Gaskell Romney (1871–1955) and Anna Amelia Pratt (1876–1926), were United States citizens and natives of the Territory of Utah. They married in 1895 in Mexico and lived in Colonia Dublán in Nuevo Casas Grandes in the state of Chihuahua (one of the Mormon colonies in Mexico), where George was born on July 8, 1907. They practiced monogamy (polygamy having been abolished by the 1890 Manifesto, although it persisted in places, especially Mexico). George had three older brothers, two younger brothers, and a younger sister. Gaskell Romney was a successful carpenter, house builder, and farmer who headed the most prosperous family in the colony, which was situated in an agricultural valley below the Sierra Madre Occidental. The family chose U.S. citizenship for their children, including George.

The Mexican Revolution broke out in 1910 and the Mormon colonies were endangered in 1911–1912 by raids from marauders, including "Red Flaggers" Pascual Orozco and José Inés Salazar. Young George heard the sound of distant gunfire and saw rebels walking through the village streets. The Romney family fled and returned to the United States in July 1912, leaving their home and almost all of their property behind. Romney later said, "We were the first displaced persons of the 20th century." In the United States, Romney grew up in humble circumstances. The family subsisted with other Mormon refugees on government relief in El Paso, Texas, benefiting from a \$100,000 fund for refugees that the U.S. Congress had set up. After a few months they moved to Los Angeles, California, where Gaskell Romney worked as a carpenter. In kindergarten, other children mocked Romney's national origin by calling him "Mex". In 1913, the family moved to Oakley, Idaho, and bought a farm, where they grew and subsisted largely on Idaho potatoes. The farm was not on good land and failed when potato prices fell. The family moved to Salt Lake City, Utah, in 1916, where Gaskell Romney resumed construction work, but the family remained generally poor. In 1917, they moved to Rexburg, Idaho, where Gaskell became a successful home and commercial builder in a growing area due to high World War I commodities prices. George started working in wheat and sugar beet fields at the age of eleven and was the valedictorian at his grammar school graduation in 1921 (by the sixth grade he had attended six schools). The Depression of 1920–1921 brought a collapse in prices, and local building was abandoned. His family returned to Salt Lake City in 1921, and while his father resumed construction work, George became skilled at lath-and-plaster work. The family was again prospering when the Great Depression hit in 1929 and ruined them.
George watched his parents fail financially in Idaho and Utah and then take a dozen years to pay off their debts. Seeing their struggles influenced his life and business career. In Salt Lake City, Romney worked while attending Roosevelt Junior High School and, beginning in 1922, Latter-day Saints High School. There he played halfback on the football team, guard on the basketball team, and right field on the baseball team, all with more persistence than talent, but in an effort to uphold the family tradition of athleticism, he earned varsity letters in all three sports. In his senior year, he and junior Lenore LaFount became high school sweethearts; she was from a more well-assimilated Mormon family. Academically, Romney was steady but undistinguished. He graduated from high school in 1925; his yearbook picture caption was "Serious, high minded, of noble nature – a real fellow". Partly to stay near Lenore, Romney spent the next year as a junior college student at the co-located Latter-day Saints University, where he was elected student body president. He was also president of the booster club and played on the basketball team that won the Utah–Idaho Junior College Tournament.

## Missionary work

After becoming an elder, Romney earned enough money working to fund himself as a Mormon missionary. In October 1926, he sailed to Great Britain and was first assigned to preach in a slum in Glasgow, Scotland. The abject poverty and hopelessness he saw there affected him greatly, but he was ineffective in gaining converts and temporarily suffered a crisis of faith. In February 1927, he was shifted to Edinburgh and in February 1928 to London, where he kept track of mission finances. He worked under renowned Quorum of the Twelve Apostles intellectuals James E. Talmage and John A. Widtsoe; the latter's admonitions to "Live mightily today, the greatest day of all time is today" made a lasting impression on him. Romney experienced British sights and culture and was introduced to members of the peerage and the Oxford Group. In August 1928, Romney became president of the Scottish missionary district. Operating in a whisky-centric region was difficult, and he developed a new "task force" approach of sending more missionaries to a single location at a time; this successfully drew local press attention and several hundred new recruits. Romney's frequent public proselytizing – from Edinburgh's Mound and in London from soap boxes at Speakers' Corner in Hyde Park and from a platform at Trafalgar Square – developed his gifts for debate and sales, which he would use the rest of his career. Three decades later, Romney said that his missionary time had meant more to him in developing his career than any other experience.

## Early career, marriage and children

Romney returned to the U.S. in late 1928 and studied briefly at the University of Utah and LDS Business College. He followed LaFount to Washington, D.C., in fall 1929, after her father, Harold A. Lafount, had accepted an appointment by President Calvin Coolidge to serve on the Federal Radio Commission. He worked for Massachusetts Democratic U.S. Senator David I. Walsh during 1929 and 1930, first as a stenographer using speedwriting, then, when his abilities at that proved limited, as a staff aide working on tariffs and other legislative matters. Romney researched aspects of the proposed Smoot-Hawley tariff legislation and sat in on committee meetings; the job was a turning point in his career and gave him lifelong confidence in dealing with Congress.
With one of his brothers, Romney opened a dairy bar in nearby Rosslyn, Virginia, during this time. The business soon failed, in the midst of the Great Depression. He also attended George Washington University at night. Based upon a connection he made working for Walsh, Romney was hired as an apprentice for Alcoa in Pittsburgh in June 1930. When LaFount, an aspiring actress, began earning bit roles in Hollywood movies, Romney arranged to be transferred to Alcoa's Los Angeles office for training as a salesman. There he took night classes at the University of Southern California. (Romney did not attend for long, or graduate from, any of the colleges in which he was enrolled, accumulating only 2½ years of credits; instead he has been described as an autodidact.) LaFount had the opportunity to sign a \$50,000, three-year contract with Metro-Goldwyn-Mayer studios, but Romney convinced her to return to Washington with him as he was assigned a position there with Alcoa as a lobbyist. She later said she had never had a choice of both marriage and an acting career, because the latter would have upstaged him, but expressed no regrets about having chosen the former. Romney would later consider wooing her his greatest sales achievement. The couple married on July 2, 1931, at Salt Lake City Temple. They would have four children: Margo Lynn (born 1935), Jane LaFount (born 1938), George Scott (born 1941), and Willard Mitt (born 1947).

The couple's marriage reflected aspects of their personalities and courtship. George was devoted to Lenore, and tried to bring her a flower every day, often a single rose with a love note. George was also a strong, blunt personality used to winning arguments by force of will, but the more self-controlled Lenore was unintimidated and willing to push back against him. The couple quarreled so much as a result that their grandchildren would later nickname them "the Bickersons", but in the end, their closeness would allow them to settle arguments amicably.

As a lobbyist, Romney frequently competed on behalf of the aluminum industry against the copper industry, and defended Alcoa against charges of being a monopoly. He also represented the Aluminum Wares Association. In the early 1930s, he helped get aluminum windows installed in the U.S. Department of Commerce Building, at the time the largest office building in the world. Romney joined the National Press Club and the Burning Tree and Congressional Country Clubs; one reporter watching Romney hurriedly play golf at the last said, "There is a young man who knows where he is going." Lenore's cultural refinement and hosting skills, along with her father's social and political connections, helped George in business, and the couple met the Hoovers, the Roosevelts, and other prominent Washington figures. He was chosen by Pyke Johnson, a Denver newspaperman and automotive industry trade representative he met at the Press Club, to join the newly formed Trade Association Advisory Committee to the National Recovery Administration. The committee's work continued even after the agency was declared unconstitutional in 1935. During 1937 and 1938, Romney was also president of the Washington Trade Association Executives.

## Automotive industry representative

After nine years with Alcoa, Romney's career had stagnated; there were many layers of executives to climb through and a key promotion he had wanted was given to someone with more seniority.
Pyke Johnson was vice president of the Automobile Manufacturers Association, which needed a manager for its new Detroit office. Romney got the job and moved there with his wife and two daughters in 1939. An association study found Americans using their cars more for short trips and convinced Romney that the trend was towards more functional, basic transportation. In 1942, he was promoted to general manager of the association, a position he held until 1948. Romney also served as president of the Detroit Trade Association in 1941. In 1940, as World War II raged overseas, Romney helped start the Automotive Committee for Air Defense, which coordinated planning between the automobile and aircraft industries. Immediately following the December 1941 attack on Pearl Harbor that drew the U.S. into the war, Romney helped turn that committee into, and became managing director of, the Automotive Council for War Production. This organization established a cooperative arrangement in which companies could share machine tools and production improvements, thus maximizing the industry's contribution to the war production effort. It embodied Romney's notion of "competitive cooperative capitalism". With labor leader Victor Reuther, Romney led the Detroit Victory Council, which sought to improve conditions for Detroit workers under wartime stress and deal with the causes of the Detroit race riot of 1943. Romney successfully appealed to the Federal Housing Administration to make housing available to black workers near the Ford Willow Run plant. He also served on the labor-management committee of the Detroit section of the War Manpower Commission.

Romney's influence grew while he positioned himself as chief spokesman of the automobile industry, often testifying before Congressional hearings about production, labor, and management issues; he was mentioned or quoted in over 80 stories in The New York Times during this time. By war's end, 654 manufacturing companies had joined the Automotive Council for War Production, and produced nearly \$29 billion in output for the Allied military forces. This included over 3 million motorized vehicles, 80 percent of all tanks and tank parts, 75 percent of all aircraft engines, half of all diesel engines, and a third of all machine guns. Between a fifth and a quarter of all U.S. wartime production was accounted for by the automotive industry. As peacetime production began, Romney persuaded government officials to forgo complex contract-termination procedures, thus freeing auto plants to quickly produce cars for domestic consumption and avoid large layoffs. Romney was director of the American Trade Association Executives in 1944 and 1947, and managing director of the National Automobile Golden Jubilee Committee in 1946. From 1946 to 1949, he represented U.S. employers as a delegate to the Metal Trades Industry conference of the International Labor Office. By 1950, Romney was a member of the Citizens Housing and Planning Council, and criticized racial segregation in Detroit's housing program when speaking before the Detroit City Council. Romney's personality was blunt and intense, giving the impression of a "man in a hurry", and he was considered a rising star in the industry.

## American Motors Corporation chief executive

As managing director of the Automobile Manufacturers Association, Romney became good friends with then-president George W. Mason.
When Mason became chairman of the manufacturing firm Nash-Kelvinator in 1948, he invited Romney along "to learn the business from the ground up" as his roving assistant, and the new executive spent a year working in different parts of the company. At a Detroit refrigerator plant of the Kelvinator appliance division, Romney battled the Mechanics Educational Society of America union to institute a new industrial–labor relations program that forestalled the whole facility being shut down. He appealed to the workers by saying, "I am no college man. I've laid floors, I've done lathing. I've thinned beets and shocked wheat." As Mason's protégé, Romney assumed executive assignment for the development of the Rambler. Mason had long sought a merger of Nash-Kelvinator with one or more other companies, and on May 1, 1954, it merged with Hudson Motor Car to become the American Motors Corporation (AMC). It was the largest merger in the history of the industry, and Romney became an executive vice president of the new firm. In October 1954, Mason suddenly died of acute pancreatitis and pneumonia. Romney was named AMC's president and chairman of the board the same month. When Romney took over, he canceled Mason's plan to merge AMC with Studebaker-Packard Corporation (or any other automaker). He reorganized upper management, brought in younger executives, and pruned and rebuilt AMC's dealer network. Romney believed that the only way to compete with the "Big Three" (General Motors, Ford, and Chrysler) was to stake the future of AMC on a new smaller-sized car line. Together with chief engineer Meade Moore, by the end of 1957 Romney had completely phased out the Nash and Hudson brands, whose sales had been lagging. The Rambler brand was selected for development and promotion, as AMC pursued an innovative strategy: manufacturing only compact cars. The company struggled badly at first, losing money in 1956, more in 1957, and experiencing defections from its dealer network. Romney instituted company-wide savings and efficiency measures, and he and other executives reduced their salaries by up to 35 percent. Though AMC was on the verge of being taken over by corporate raider Louis Wolfson in 1957, Romney was able to fend him off. Then sales of the Rambler finally took off, leading to unexpected financial success for AMC. It posted its first quarterly profit in three years in 1958, was the only car company to show increased sales during the recession of 1958, and moved from thirteenth to seventh place among worldwide auto manufacturers. In contrast with the Hudson's NASCAR racing success in the early 1950s, the Ramblers were frequent winners in the coast-to-coast Mobil Economy Run, an annual event on U.S. highways. Sales remained strong during 1960 and 1961; the Rambler was America's third most popular car both years. A believer in "competitive cooperative consumerism", Romney was effective in his frequent appearances before Congress. He discussed what he saw as the twin evils of "big labor" and "big business", and called on Congress to break up the Big Three. As the Big Three automakers introduced ever-larger models, AMC undertook a "gas-guzzling dinosaur fighter" strategy, and Romney became the company spokesperson in print advertisements, public appearances, and commercials on the Disneyland television program. Known for his fast-paced, short-sleeved management style that ignored organizational charts and levels of responsibility, he often wrote the ad copy himself. 
Romney became what automotive writer Joe Sherman termed "a folk hero of the American auto industry" and one of the first high-profile media-savvy business executives. His focus on small cars as a challenge to AMC's domestic competitors, as well as the foreign-car invasion, was documented in the April 6, 1959, cover story of Time magazine, which concluded that "Romney has brought off singlehanded one of the most remarkable selling jobs in U.S. industry." A full biography of him was published in 1960; the company's resurgence made Romney a household name. The Associated Press named Romney its Man of the Year in Industry for four consecutive years, 1958 through 1961. The company's stock rose from \$7 per share to \$90 per share, making Romney a millionaire from stock options. However, whenever he felt his salary and bonus were excessively high for a year, he gave the excess back to the company. After initial wariness, he developed a good relationship with United Automobile Workers leader Walter Reuther, and AMC workers also benefited from a then-novel profit-sharing plan. Romney was one of only a few Michigan corporate chiefs to support passage and implementation of the state Fair Employment Practices Act.

## Local church and civic leadership

Religion was a paramount force in Romney's life. In a 1959 essay for the Detroit Free Press he said, "My religion is my most precious possession. ... Except for my religion, I easily could have become excessively occupied with industry, social and recreational activities. Sharing personal responsibility for church work with my fellow members has been a vital counterbalance in my life." Following LDS Church practices, he did not drink alcohol or caffeinated beverages, smoke, or swear. Romney and his wife tithed, and from 1955 to 1965, gave 19 percent of their income to the church and another 4 percent to charity. Romney was a high priest in the Melchizedek priesthood of the LDS Church, and beginning in 1944 he headed the Detroit church branch (which initially was small enough to meet in a member's house). By the time he was AMC chief, he presided over the Detroit stake, which included not only all of Metro Detroit, Ann Arbor, and the Toledo area of Ohio but also the western edge of Ontario along the Michigan border. In this role, Romney oversaw the religious work of some 2,700 church members, occasionally preached sermons, and supervised the construction of the first stake tabernacle east of the Mississippi River in 100 years. Because the stake covered part of Canada, he often interacted with Canadian Mission President Thomas S. Monson. Romney's rise to a leadership role in the church reflected the church's journey from a fringe pioneer religion to one that was closely associated with mainstream American business and values. Due in part to his prominence, the larger Romney family tree would become viewed as "LDS royalty".

Romney and his family lived in affluent Bloomfield Hills, having moved there from Detroit around 1953. He became deeply active in Michigan civic affairs. He was on the board of directors of the Children's Hospital of Michigan and the United Foundation of Detroit, and was chairman of the executive committee of the Detroit Round Table of Catholics, Jews, and Protestants. In 1959, he received the Anti-Defamation League of B'nai B'rith's Americanism award. Starting in 1956, Romney headed a citizen-based committee for improved educational programs in Detroit's public schools.
The 1958 final report of the Citizens Advisory Committee on School Needs was largely Romney's work and received considerable public attention; it made nearly 200 recommendations for economy and efficiency, better teacher pay, and new infrastructure funding. Romney helped a \$90-million education-related bond issue and tax increase win an upset victory in an April 1959 statewide referendum. He organized Citizens for Michigan in 1959, a nonpartisan group that sought to study the state's problems and build an informed electorate. Citizens for Michigan built on Romney's belief that assorted interest groups held too much influence in government, and that only the cooperation of informed citizens acting for the benefit of all could counter them. Based on his fame and accomplishments in a state where automobile making was a central topic of conversation, Romney was seen as a natural to enter politics. He first became directly involved in politics in 1959, when he was a key force in the petition drive calling for a constitutional convention to rewrite the Michigan Constitution. Romney's sales skills made Citizens for Michigan one of the most effective organizations among those calling for the convention. Previously unaffiliated politically, Romney declared himself a member of the Republican Party and gained election to the convention. By early 1960, many in Michigan's somewhat moribund Republican Party were touting Romney as a possible candidate for governor, U.S. senator, or even U.S. vice president. Also in early 1960, Romney served on the Fair Campaign Practices Committee, a group also having Jewish, Catholic, mainline and evangelical Protestant, and Orthodox Christian members. It issued a report whose guiding principles were that no candidate for elected office should be supported or opposed due to their religion and that no campaign for office should be seen as an opportunity to vote for one religion against another. This statement helped pave the way for John F. Kennedy's famous speech on religion and public office later that year. Romney briefly considered a run in the 1960 Senate election, but instead became a vice president of the constitutional convention that revised the Michigan constitution during 1961 and 1962.

## Governor of Michigan

After a period of pained indecision and a 24-hour prayer fast, Romney stepped down from AMC in February 1962 to enter electoral politics (given an indefinite leave of absence, he was succeeded as president of AMC by Roy Abernethy). Romney's position as the leader of the moderate Republicans at the constitutional convention helped gain him the Republican nomination for governor of Michigan. He ran against the incumbent Democratic Governor John B. Swainson in the general election. Romney campaigned on revising the state's tax structure, increasing its appeal to businesses and the general public, and getting it "rolling again". Romney decried both the large influence of labor unions within the Democratic Party and the similarly large influence of big business within the Republican Party. His campaign was among the first to exploit the capabilities of electronic data processing. Romney won by some 80,000 votes and ended a fourteen-year stretch of Democratic rule in the state executive spot. His win was attributed to his appeal to independent voters and to those from the increasingly influential suburbs of Detroit, who by 1962 were more likely to vote Republican than the heavily Democratic residents of the city itself. 
Additionally, Romney had found a level of support among labor union members that was unusual for a Republican. Democrats won all the other statewide executive offices in the election, including Democratic incumbent T. John Lesinski in the separate election for lieutenant governor of Michigan. Romney's success caused immediate mention of him as a presidential possibility for 1964, and President John F. Kennedy said privately in 1963 that, "The one fellow I don't want to run against is Romney." Romney was sworn in as governor on January 1, 1963. His initial concern was the implementation of the overhaul of the state's financial and revenue structure that had been authorized by the constitutional convention. In 1963, he proposed a comprehensive tax revision package that included a flat-rate state income tax, but general economic prosperity alleviated pressure on the state budget and the Michigan Legislature rejected the measure. Romney's early difficulties with the legislature helped undermine an attempted push that year of Romney as a national political figure by former Richard Nixon associates. One Michigan Democrat said of Romney, "He has not yet learned that things in government are not necessarily done the moment the man at the top gives an order. He is eager and sometimes impatient." But over his first two years in office, Romney was able to work with Democrats – who often had at least partial control of the legislature – and an informal bipartisan coalition formed which allowed Romney to accomplish many of his goals and initiatives. Romney held a series of Governor's Conferences, which sought to find new ideas from public services professionals and community activists who attended. He opened his office in the Michigan State Capitol to visitors, spending five minutes with every citizen who wanted to speak with him on Thursday mornings, and was always sure to shake the hands of schoolchildren visiting the capitol. He almost always eschewed political activities on Sunday, the Mormon Sabbath. His blunt and unequivocal manner sometimes caused friction, and family members and associates used the idiom "bull in a china shop" to describe him. He took a theatrical approach to governance, staging sudden appearances in settings where he might be politically unwelcome. One former aide later said that willful was too weak a word to describe him, and chose messianic instead. Romney saw a moral dimension in every issue and his political views were held with as much fervor as religious ones; writer Theodore H. White said "the first quality that surfaced, as one met and talked with George Romney over a number of years, was a sincerity so profound that, in conversation, one was almost embarrassed." Romney supported the American Civil Rights Movement while governor. Although he belonged to a church that did not allow black people in its lay clergy, Romney's hardscrabble background and subsequent life experiences led him to support the movement. He reflected, "It was only after I got to Detroit that I got to know Negroes and began to be able to evaluate them and I began to recognize that some Negroes are better and more capable than lots of whites." During his first State of the State address in January 1963, Romney declared that "Michigan's most urgent human rights problem is racial discrimination—in housing, public accommodations, education, administration of justice, and employment." Romney helped create the state's first civil rights commission. When Martin Luther King Jr. 
came to Detroit in June 1963 and led the 120,000-strong Great March on Detroit, Romney designated the occasion Freedom March Day in Michigan, and sent state senator Stanley Thayer to march with King as his emissary, but did not attend himself because it was on Sunday. Romney did participate in a much smaller march protesting housing discrimination the following Saturday in Grosse Pointe, after King had left. Romney supported the Civil Rights Act of 1964 then under consideration by Congress, and his support for it and advocacy of civil rights, in general, brought him criticism from some in his own church. In January 1964, Quorum of the Twelve Apostles member Delbert L. Stapley wrote him that a proposed civil rights bill was "vicious legislation" and told him that "the Lord had placed the curse upon the Negro" and men should not seek its removal. Romney refused to change his position and increased his efforts towards civil rights. Regarding the church policy itself, Romney was among those liberal Mormons who hoped the church leadership would revise the theological interpretation that underlay it, but Romney did not believe in publicly criticizing the church, subsequently saying that fellow Mormon Stewart Udall's 1967 published denunciation of the policy "cannot serve any useful religious purpose". In the 1964 U.S. presidential election, Senator Barry Goldwater quickly became the likely Republican Party nominee. Goldwater represented a new wave of American conservatism, of which the moderate Romney was not a part. Romney also felt that Goldwater would be a drag on Republicans running in all the other races that year, including Romney's own (at the time, Michigan had two-year terms for its governor). Finally, Romney disagreed strongly with Goldwater's views on civil rights; he would later say, "Whites and Negroes, in my opinion, have got to learn to know each other. Barry Goldwater didn't have any background to understand this, to fathom them, and I couldn't get through to him." Romney declared at a dinner held in his honor at Salt Lake City that by appealing to the Southerners who supported racial segregation in order to win the presidency, the Republican Party would forever lose its association as the party of Abraham Lincoln. During the June 1964 National Governors' Conference, 13 of 16 Republican governors present were opposed to Goldwater; their leaders were Jim Rhodes of Ohio, Nelson Rockefeller of New York (whose own campaign had just stalled out with a loss to Goldwater in the California primary), William Scranton of Pennsylvania, and Romney. In an unusual appearance at a Sunday press conference, Romney declared that the nomination of Goldwater could lead to the "suicidal destruction of the Republican Party", and that "If [Goldwater's] views deviate as indicated from the heritage of our party, I will do everything within my power to keep him from becoming the party's presidential nominee." Romney had, however, previously vowed to Michigan voters that he would not run for president in 1964. Detroit newspapers indicated they would not support him in any such bid, and Romney quickly decided to honor his pledge to stay out of the contest. Scranton entered instead, but Goldwater prevailed decisively at the 1964 Republican National Convention. Romney's name was entered into nomination as a favorite son by U.S. 
Representative Gerald Ford of Michigan (who had not wanted to choose between candidates during the primary campaign) and he received the votes of 41 delegates in the roll call (40 of Michigan's 48 and one from Kansas). At the convention, Romney fought for a strengthened civil rights plank in the party platform that would pledge action to eliminate discrimination at the state, local, and private levels, but it was defeated on a voice vote. He also failed to win support for a statement that condemned both left- and right-wing extremism without naming any organizations, which lost a standing vote by a two-to-one margin. Both of Romney's positions were endorsed by former President Dwight Eisenhower, who had an approach to civic responsibilities similar to Romney's. As the convention concluded, Romney neither endorsed nor repudiated Goldwater and vice presidential nominee William E. Miller, saying he had reservations about Goldwater's lack of support for civil rights and the political extremism that Goldwater embodied. For the fall 1964 elections, Romney cut himself off from the national ticket, refusing to even appear on the same stage with them and continuing to feud with Goldwater privately. He campaigned for governor in mostly Democratic areas, and when pressed at campaign appearances about whether he supported Goldwater, he replied, "You know darn well I'm not!" Romney was re-elected in 1964 by a margin of over 380,000 votes over Democratic Congressman Neil Staebler, despite Goldwater's landslide defeat to President Lyndon B. Johnson that swept away many other Republican candidates. Romney won 15 percent of Michigan's black vote, compared to Goldwater's two percent. In 1965, Romney visited South Vietnam for 31 days and said that he was continuing his strong support for U.S. military involvement there. During 1966, while son Mitt was away in France on missionary work, George Romney guided Mitt's fiancée Ann Davies in her conversion to Mormonism. Governor Romney continued his support of civil rights; after violence broke out during the Selma to Montgomery marches in 1965, he marched at the front of a Detroit parade in solidarity with the marchers. In 1966, Romney had his biggest electoral success, winning re-election again by some 527,000 votes over Democratic lawyer Zolton Ferency (this time to a four-year term, after a change in Michigan law). His share of the black vote rose to over 30 percent, a virtually unprecedented accomplishment for a Republican. By 1967, a looming deficit prompted the legislature to overhaul Michigan's tax structure. Personal and corporate state income taxes were created while business receipts and corporation franchise taxes were eliminated. Passage of an income levy had eluded past Michigan governors, no matter which party controlled the legislature. Romney's success in convincing Democratic and Republican factions to compromise on the details of the measure was considered a key test of his political ability. The massive 12th Street riot in Detroit began during the predawn hours of July 23, 1967, precipitated by a police raid of a speakeasy in a predominantly black neighborhood. As the day wore on and looting and fires got worse, Romney called in the Michigan State Police and the Michigan National Guard. At 3 a.m. on July 24, Romney and Detroit Mayor Jerome Cavanagh called U.S. Attorney General Ramsey Clark and requested that federal troops be sent. 
Clark indicated that to do so, Romney would have to declare a state of civil insurrection, which the governor was loath to do from fear that insurance companies would seize upon it as a reason to not cover losses owing to the riot. Elements of the 82nd and 101st U.S. Army Airborne Divisions were mobilized outside of the city. As the situation in Detroit worsened, Romney told Deputy Secretary of Defense Cyrus Vance, "We gotta move, man, we gotta move." Near midnight on July 24, President Johnson authorized thousands of paratroopers to enter Detroit. Johnson went on national television to announce his actions and made seven references to Romney's inability to control the riot using state and local forces. Thousands of arrests took place and the rioting continued until July 27. The final toll was the largest of any American civil disturbance in fifty years: 43 dead, over a thousand injured, 2,500 stores looted, hundreds of homes burned, and some \$50 million overall in property damage. There were strong political implications in the handling of the riot, as Romney was seen as a leading Republican contender to challenge Johnson's presidential re-election the following year; Romney believed the White House had intentionally slowed its response and he charged Johnson with having "played politics" in his actions. The riot notwithstanding, by the end of Romney's governorship the state had made strong gains in civil rights related to public employment, government contracting, and access to public accommodations. Lesser improvements were made in combating discrimination in private employment, housing, education, and law enforcement. Considerable state and federal efforts were made during this time to improve the situation of Michigan's migrant farm workers and Native Americans, without much progress for either. Of the assassination of Martin Luther King Jr. on April 4, 1968, Romney said it was "a great national tragedy at a time when we need aggressive nonviolent leadership to peacefully achieve equal rights, equal opportunities and equal responsibilities for all. This is indeed a cause for general mourning and rededicated effort by everyone to eliminate racial prejudice in all of its ugly and repressive forms." The King assassination riots affected many cities across the United States over the next few days. There was some rioting in Detroit and Romney ordered the National Guard deployed and imposed a curfew; but the situation was calmer there than in the worst-affected cities and much less violent than the 1967 riots had been. Romney and his wife Lenore attended the funeral of King on April 9. Romney greatly expanded the size of state government while governor. His first state budget, for fiscal year 1963, was \$550 million, a \$20 million increase over that of his predecessor Swainson. Romney had also inherited an \$85 million budget deficit, but left office with a surplus. In the following fiscal years, the state budget increased to \$684 million for 1964, \$820 million for 1965, \$1 billion for 1966, \$1.1 billion for 1967, and was proposed as \$1.3 billion for 1968. Romney led the way for a large increase in state spending on education, and Michigan began to develop one of the nation's most comprehensive systems of higher education. There was a significant increase in funding support for local governments and there were generous benefits for the poor and unemployed. 
Romney's spending was enabled by the post–World War II economic expansion that generated continued government surpluses and by a consensus of both parties in Michigan to maintain extensive state bureaucracies and expand public sector services. During his time as governor, Romney also signed the Public Employment Relations Act, which granted collective bargaining rights for public sector employees, reduced strike-related penalties to public employees, and prevented agencies from engaging in unfair practices against unions. It was one of the first state laws in the country that obligated governmental entities to negotiate with public employee unions. The bipartisan coalitions that Romney worked with in the state legislature enabled him to reach most of his legislative goals. His record as governor continued his reputation for having, as White said, "a knack for getting things done". Noted University of Michigan historian Sidney Fine assessed him as "a highly successful governor".

## 1968 presidential campaign

Romney's wide margin of re-election as governor in November 1966 thrust him to the forefront of national Republicans. In addition to his political record, the tall, square-jawed, handsome, graying Romney matched what the public thought a president should look like. Republican governors were determined not to let a Goldwater-sized loss recur, and neither Rockefeller nor Scranton wanted to run again; the governors quickly settled on Romney as their favorite for the Republican presidential nomination in the 1968 U.S. presidential election. Former Congressman and Republican National Committee chair Leonard W. Hall became Romney's informal campaign manager. A Gallup Poll after the November elections showed Romney as favored among Republicans over former Vice President Richard Nixon for the Republican nomination, 39 percent to 31 percent; a Harris Poll showed Romney besting President Johnson among all voters by 54 percent to 46 percent. Nixon considered Romney his chief opponent. Romney announced an exploratory phase for a possible campaign in February 1967, beginning with a visit to Alaska and the Rocky Mountain states. Romney's greatest weakness was a lack of foreign policy expertise and a need for a clear position on the Vietnam War. The press coverage of the trip focused on Vietnam, and reporters were frustrated by Romney's initial reluctance to speak about it. The qualities that helped Romney as an industry executive worked against him as a presidential candidate; he had difficulty being articulate, often speaking at length and too forthrightly on a topic and then later correcting himself while maintaining he was not changing what he had said. Reporter Jack Germond joked that he was going to add a single key on his typewriter that would print, "Romney later explained ..." Life magazine wrote that Romney "manages to turn self-expression into a positive ordeal" and that he was no different in private: "nobody can sound more like the public George Romney than the real George Romney let loose to ramble, inevitably away from the point and toward some distant moral precept." Romney had the image of a do-gooder and reporters began to refer to him as "Saint George". The perception grew that Romney was gaffe-prone. The campaign, beset by internal rivalries, soon went through the first of several reorganizations. By then, Nixon had already overtaken Romney in Gallup's Republican preference poll, a lead he would hold throughout the rest of the campaign. 
The techniques that had brought Romney victories in Michigan, such as operating outside established partisan formulas and keeping a distance from Republican Party organizational elements, proved ineffective in a party nominating contest. Romney's national poll ratings continued to erode, and by May he had lost his edge over Johnson. The Detroit riots of July 1967 did not change his standing among Republicans, but did give him a bounce in national polls against the increasingly unpopular president. Questions were occasionally asked about Romney's eligibility to run for U.S. president owing to his birth in Mexico, given the ambiguity in the United States Constitution over the phrase "natural-born citizen". Romney would depart the race before the matter could be more definitively resolved, although the preponderance of opinion then and since has been that he was eligible. Romney was also the first Mormon to stage a credible run for the presidency. By this time, he was well known as a Mormon, especially through profiles in national magazines dating back to his years in business. Indeed, he was perhaps the most nationally visible Mormon since Brigham Young. However, his membership in the LDS Church was not heavily mentioned during the campaign. What indirect discussion there was helped bring to national attention the church's policy regarding blacks, but the contrast of Romney's pro-civil rights stance deflected any criticism of him and indirectly benefited the image of the church. Some historians and Mormons suspected then and later that had Romney's campaign lasted longer and been more successful, his religion might have become a more prominent issue. Romney's campaign did often focus on his core beliefs; a Romney billboard in New Hampshire read "The Way To Stop Crime Is To Stop Moral Decay". Dartmouth College students gave a bemused reaction to his morals message, displaying signs such as "God Is Alive and Thinks He's George Romney". A spate of books were published about Romney, more than for any other candidate, and included a friendly campaign biography, an attack from a former staffer, and a collection of Romney's speeches. On August 31, 1967, in a taped interview with locally influential (and nationally syndicated) talk show host Lou Gordon of WKBD-TV in Detroit, Romney stated: "When I came back from Viet Nam [in November 1965], I'd just had the greatest brainwashing that anybody can get." He then shifted to opposing the war: "I no longer believe that it was necessary for us to get involved in South Vietnam to stop Communist aggression in Southeast Asia." Decrying the "tragic" conflict, he urged "a sound peace in South Vietnam at an early time". Thus Romney disavowed the war and reversed himself from his earlier stated belief that the war was "morally right and necessary". The "brainwashing" reference had been an offhand, unplanned remark that came at the end of a long, behind-schedule day of campaigning. By September 7, it found its way into prominence in The New York Times. Eight other governors who had been on the same 1965 trip as Romney said no such activity had taken place, and one of them, Philip H. Hoff of Vermont, said Romney's remarks were "outrageous, kind of stinking ... Either he's a most naïve man or he lacks judgment." 
The overtones of brainwashing, following the experiences of American prisoners of war (highlighted by the 1962 film The Manchurian Candidate), made Romney's comment devastating, especially as it reinforced the negative image of Romney's abilities that had already developed. The topic of brainwashing quickly became the subject of critical newspaper editorials, as well as television talk show fodder, and Romney bore the brunt of the topical humor. Senator Eugene McCarthy, running against Johnson for the Democratic nomination, said that in Romney's case, "a light rinse would have been sufficient." Republican Congressman Robert T. Stafford of Vermont sounded a common concern: "If you're running for the presidency, you are supposed to have too much on the ball to be brainwashed." After the remark was aired, Romney's poll ratings nosedived, going from 11 percent behind Nixon to 26 percent behind. He nonetheless persevered, staging a three-week, 17-city tour of the nation's ghettos and disadvantaged areas that none of his advisors thought politically worthwhile. He sought to engage militants in dialogue, found himself exposed to the harsh realities and language of ghetto areas, and had an unusual encounter with hippies and the Diggers in San Francisco's Haight-Ashbury. Romney formally announced on November 18, 1967, at Detroit's Veterans Memorial Building, that he had "decided to fight for and win the Republican nomination and election to the Presidency of the United States". His subsequent release of his federal tax returns – twelve years' worth going back to his time as AMC head – was groundbreaking and established a precedent that many future presidential candidates would have to contend with. He spent the following months campaigning tirelessly, focusing on the New Hampshire primary, the first of the season, and doing all the on-the-ground activities known to that state: greeting workers at factory gates before dawn, having neighborhood meetings in private homes, and stopping at bowling alleys. He returned to Vietnam in December 1967 and made speeches and proposals on the subject, one of which presaged Nixon's eventual policy of Vietnamization. For a while, he got an improved response from voters. Two weeks before the March 12 primary, an internal poll showed Romney losing to Nixon by a six-to-one margin in New Hampshire. Rockefeller, seeing the poll result as well, publicly maintained his support for Romney but said he would be available for a draft; the statement made national headlines and embittered Romney (who would later claim it was Rockefeller's entry, and not the "brainwashing" remark, that doomed him). Seeing his cause was hopeless, Romney announced his withdrawal as a presidential candidate on February 28, 1968. Romney wrote his son Mitt, still away on missionary work: "Your mother and I are not personally distressed. As a matter of fact, we are relieved. ... I aspired, and though I achieved not, I am satisfied." Nixon went on to gain the nomination. At the 1968 Republican National Convention in Miami Beach, Romney refused to release his delegates to Nixon, something Nixon did not forget. Romney finished a weak fifth, with only 50 votes on the roll call (44 of Michigan's 48, plus six from Utah). When party liberals and moderates and others expressed dismay at Nixon's choice of Spiro Agnew as his running mate, Romney's name was placed into nomination for vice president by Mayor of New York John Lindsay and pushed by several delegations. 
Romney said he did not initiate the move, but he made no effort to oppose it. Nixon saw the rebellion as a threat to his leadership and actively fought against it; Romney lost to Agnew 1,119–186. Romney, however, worked for Nixon's eventually successful campaign in the fall, which did earn him Nixon's gratitude. Presidential historian Theodore H. White wrote that during his campaign Romney gave "the impression of an honest and decent man simply not cut out to be President of the United States". Governor Rhodes more memorably said, "Watching George Romney run for the presidency was like watching a duck try to make love to a football."

## Secretary of Housing and Urban Development

After the election, Nixon named Romney to be secretary of Housing and Urban Development (HUD). The president-elect made the announcement as part of a nationally televised presentation of his new cabinet on December 11, 1968. Nixon praised Romney for his "missionary zeal" and said that he would also be tasked with mobilizing volunteer organizations to fight poverty and disease within the United States. In actuality, Nixon distrusted Romney politically, and appointed him to a liberally oriented, low-profile federal agency partly to appease Republican moderates and partly to reduce Romney's potential to challenge for the 1972 Republican presidential nomination. Romney was confirmed by the Senate without opposition on January 20, 1969, the day of Nixon's inauguration, and was sworn into office on January 22, with Nixon at his side. Romney resigned as governor that same day, and was succeeded by his lieutenant governor, William G. Milliken. Milliken continued Romney's model of downplaying party label and ideology, and Republicans held onto the governorship for three more terms until 1983, though Michigan was one of the nation's most blue-collar states. As secretary, Romney conducted the first reorganization of the department since its 1966 creation. The changes were intended to make the department more business-like with fewer independent bureaucracies. His November 1969 plan brought programs with similar functions together under unified, policy-based administration at the Washington level, and created two new assistant secretary positions. At the same time, he increased the number of regional and area offices and decentralized program operations and locality-based decisions to them, moves that were in keeping with Nixon's "New Federalism". In particular, the Federal Housing Administration underwent wholesale changes to make it less autonomous. During his tenure, Romney believed his reorganization made the department more efficient and able to withstand some, but not all, of the budget cuts that Nixon imposed on it. The Fair Housing Act of 1968 mandated a federal commitment towards housing desegregation, and required HUD to orient its programs in this direction. Romney, filled with moral passion, wanted to address the widening economic and geographic gulf between whites and blacks by moving blacks out of inner-city ghettos into suburbs. Romney proposed an open housing scheme to facilitate desegregation, dubbed "Open Communities"; HUD planned it for many months without keeping Nixon informed. When the open housing proposal became public, local reaction was often hostile. Such was the reaction of many residents in Warren, Michigan, a predominately white blue-collar suburb of Detroit. 
While it had no formal discriminatory laws, most blacks were excluded by zoning practices, refusals to sell to them, and intimidatory actions of white property owners, many of whom were ethnic Polish and Catholic and had moved to the suburb as part of white flight. By this time, Detroit was 40–50 percent black. HUD made Warren a prime target for Open Communities enforcement and threatened to halt all federal assistance to the town unless it took a series of actions to end racial discrimination there; town officials said progress was being made and that their citizens resented forced integration. Romney rejected this response, partly because when he was governor, Warren residents had thrown rocks and garbage and yelled obscenities for days at a biracial couple who moved into town. Now the secretary said, "The youth of this nation, the minorities of this nation, the discriminated of this nation are not going to wait for 'nature to take its course.' What is really at issue here is responsibility – moral responsibility." Romney visited Warren in July 1970, where he addressed leaders from it and around 40 nearby suburbs. He emphasized that the government was encouraging affirmative action rather than forced integration, but the local populace saw little difference and Romney was jostled and jeered as a police escort took him away from the meeting place. Nixon saw what happened in Warren and had no interest in the Open Communities policy in general, remarking to domestic adviser John Ehrlichman that, "This country is not ready at this time for either forcibly integrated housing or forcibly integrated education." The Open Communities policy conflicted with Nixon's purported use of the Southern strategy of gaining political support among traditionally white southern Democrats, and his own views on race. Romney was forced to back down on Warren and release federal monies to them unconditionally. When Black Jack, Missouri, subsequently resisted a HUD-sponsored plan for desegregated lower- and middle-income housing, Romney appealed to U.S. Attorney General John Mitchell for Justice Department intervention. In September 1970, Mitchell refused and Romney's plan collapsed. Under Romney, HUD did establish stricter affirmative racial guidelines in relation to new public housing projects, but overall administration implementation of the Fair Housing Act was lacking. Some of the responsibility lay with Romney's inattentiveness to gaining political backing for the policy, including the failure to rally natural allies such as the NAACP. Salisbury University historian Dean J. Kotlowski writes that, "No civil rights initiative developed on Nixon's watch was as sincerely devised or poorly executed as open communities." Another of Romney's initiatives was "Operation Breakthrough", launched in June 1969. It was intended to increase the amount of housing available to the poor and it initially had Nixon's support. Based on his automotive industry experience, Romney thought that the cost of housing could be significantly reduced if in-factory modular construction techniques were used, despite the lack of national building standards. HUD officials believed that the introduction of this technique could help bring about desegregation; Romney said, "We've got to put an end to the idea of moving to suburban areas and living only among people of the same economic and social class". This aspect of the program brought about strong opposition at the local suburban level and lost support in the White House as well. 
Over half of HUD's research funds during this time were spent on Operation Breakthrough, and it was modestly successful in its building goals. It did not revolutionize home construction, and was phased out once Romney left HUD. But it resulted indirectly in more modern and consistent building codes and introduction of technological advances such as the smoke alarm. In any case, using conventional construction methods, HUD set records for the amount of construction of assisted housing for low- and moderate-income families. Toward the end of his term, Romney oversaw the demolition of the infamous Pruitt–Igoe housing project in St. Louis, which had become crime-ridden, drug-infested, and largely vacant. Romney was largely outside the president's inner circle and had minimal influence within the Nixon administration. His intense, sometimes bombastic style of making bold advances and awkward pullbacks lacked adequate guile to succeed in Washington. Desegregation efforts in employment and education had more success than in housing during the Nixon administration, but HUD's many missions and unwieldy structure, which sometimes worked at cross-purposes, made it institutionally vulnerable to political attack. Romney also failed to understand or circumvent Nixon's use of counsel Ehrlichman and White House Chief of Staff H. R. Haldeman as policy gatekeepers, resulting in de facto downgrading of the power of cabinet officers. Romney was used to being listened to and making his own decisions; he annoyed Nixon by casually interrupting him at meetings. At one point, Nixon told Haldeman, "Just keep [Romney] away from me." A statement by Romney that he would voluntarily reduce his salary to aid the federal budget was viewed by Nixon as an "ineffective grandstand play". By early 1970, Nixon had decided he wanted Romney removed from his position. Nixon, who hated to fire people and was, as Ehrlichman later described, "notoriously inadept" at it, instead hatched a plot to get Romney to run in the 1970 U.S. Senate race in Michigan. Instead, Romney proposed that his wife Lenore run, and she received the backing of some state Republicans. There was also resistance to her candidacy and an initial suspicion that it was just a stalking horse for keeping his options open. She barely survived a primary against a conservative opponent, then lost badly in the general election to incumbent Democrat Philip A. Hart. Romney blamed others for his wife having entered the race, when he had been the major force behind it. In late 1970, after opposition to Open Communities reached a peak, Nixon again decided that Romney should go. Still reluctant to dismiss him, Nixon tried to get Romney to resign by forcing him to capitulate on a series of policy issues. Romney surprised both Nixon and Haldeman by agreeing to back off his positions, and Nixon kept him as HUD secretary. Nixon remarked privately afterwards, "[Romney] talks big but folds under pressure." Puzzled by Nixon's lack of apparent ideological consistency across different areas of the government, Romney told a friend, "I don't know what the president believes in. Maybe he doesn't believe in anything", an assessment shared by others both inside and outside the administration. For his work as Secretary of the Housing and Urban Development, in March 1972 Romney was awarded the Republican of the Year Award by the centrist Republican organization Ripon Society. In spring 1972, the FHA was struck by scandal. 
Since the passage of the Housing and Urban Development Act of 1968 and the creation of the Government National Mortgage Association (Ginnie Mae), it had been responsible for helping the poor buy homes in inner-city areas via government-backed mortgages. These were financed by mortgage-backed securities, the first issues of which Romney had announced in 1970. A number of FHA employees, along with a number of real estate firms and lawyers, were indicted for a scheme in which the value of cheap inner-city homes was inflated and they were sold to black buyers who could not really afford them, based on using those government-backed mortgages. The government was stuck for the bad loans when owners defaulted, as the properties were overvalued and could not be resold at inflated prices. Assessments of the overall cost of the scandal were as high as \$2 billion. Romney conceded that HUD had been unprepared to deal with speculators and had not been alert to earlier signs of illegal activity at the FHA. The FHA scandal gave Nixon the ability to shut down HUD's remaining desegregation efforts with little political risk; by January 1973, all federal housing funds had been frozen. In August 1972, Nixon announced Romney would inspect Hurricane Agnes flood damage in Wilkes-Barre, Pennsylvania, but neglected to tell Romney first. Much of the area lacked shelter six weeks after the storm, residents were angry, and Romney got into a three-way shouting match with Governor Milton J. Shapp and a local citizens' representative. Romney denounced Shapp's proposal that the federal government pay off the mortgages of victims as "unrealistic and demagogic", and the representative angrily said to Romney, "You don't give a damn whether we live or die." The confrontation received wide media attention, damaging Romney's public reputation. Feeling very frustrated, Romney wanted to resign immediately, but Nixon, worried about the fallout to his 1972 re-election campaign among moderate Republican voters, insisted that Romney stay on. Romney agreed, although he indicated to the press that he would leave eventually. Romney finally turned in his resignation on November 9, 1972, following Nixon's re-election. His departure was announced on November 27, 1972, as part of the initial wave of departures from Nixon's first-term cabinet. Romney said he was unhappy with presidential candidates who declined to address "the real issues" facing the nation for fear they would lose votes, and said he would form a new national citizens' organization that would attempt to enlighten the public on the most vital topics. He added that he would stay on as secretary until his successor could be appointed and confirmed, and did stay until Nixon's second inauguration on January 20, 1973. Upon his departure, Romney said he looked forward "with great enthusiasm" to his return to private life. The Boston Globe later termed Romney's conflicts with Nixon a matter that "played out with Shakespearean drama". Despite all the setbacks and frustrations, University at Buffalo political scientist Charles M. Lamb concludes that Romney pressed harder to achieve suburban integration than any prominent federal official in the ensuing 1970s through the 1990s. 
Lehman College sociology professor Christopher Bonastia assesses the Romney-era HUD as having come "surprisingly close to implementing unpopular antidiscrimination policies" but finally being unable to produce meaningful alterations in American residential segregation patterns, with no equivalent effort having happened since then or likely to in the foreseeable future. In contrast, Illinois State University historian Roger Biles has termed Romney's tenure as secretary "disastrous" while allowing that none of the secretaries who followed him have done any better.

## Public service, volunteerism, and final years

Romney was known as an advocate of public service, and volunteerism was a passion of his. He initiated several volunteer programs while governor, and at the beginning of the Nixon administration chaired the Cabinet Committee on Voluntary Action. Out of this the National Center for Voluntary Action was created: an independent, private, non-profit organization intended to encourage volunteerism on the part of American citizens and organizations, to assist in program development for voluntary efforts, and to make voluntary action an important force in American society. Romney's long interest in volunteerism stemmed from the Mormon belief in the power of institutions to transform the individual, but also had a secular basis. At the National Center's first meeting on February 20, 1970, he said:

> Americans have four basic ways of solving problems that are too big for individuals to handle by themselves. One is through the federal government. A second is through state governments and the local governments that the states create. The third is through the private sector – the economic sector that includes business, agriculture, and labor. The fourth method is the independent sector – the voluntary, cooperative action of free individuals and independent association. Voluntary action is the most powerful of these, because it is uniquely capable of stirring the people themselves and involving their enthusiastic energies, because it is their own – voluntary action is the people's action. ... As Woodrow Wilson said, "The most powerful force on earth is the spontaneous cooperation of a free people." Individualism makes cooperation worthwhile – but cooperation makes freedom possible.

In 1973, after he left the cabinet, Romney became chair and CEO of the National Center for Voluntary Action. In 1979, this organization merged with the Colorado-based National Information Center on Volunteerism and became known as VOLUNTEER: The National Center for Citizen Involvement; Romney headed the new organization. The organization simplified its name to VOLUNTEER: The National Center in 1984 and to the National Volunteer Center in 1990. Romney remained as chair of these organizations throughout this time. Within the LDS Church, Romney remained active and prominent, serving as patriarch of the Bloomfield Hills stake and holding the office of regional representative of the Twelve, covering Michigan and northern Ohio. As part of a longtime habit of playing golf daily, he had long ago concocted a "compact 18" format in which he played three balls on each of six holes, or similar formulations depending upon the amount of daylight. During the early part of the Reagan administration, Romney served on the President's Task Force for Private Sector Initiatives along with LDS leader Monson. 
In 1987, he held a four-generation extended family reunion in Washington, where he showed the places and recounted the events of his life which had occurred there. Looking back on his and some other failed presidential bids, he once concluded, "You can't be right too soon and win elections." President George H. W. Bush's Points of Light Foundation was created in 1990, also to encourage volunteerism. Romney received the Points of Light Foundation's inaugural Lifetime Achievement Award from President Bush in April 1991. The Bush administration wanted to tap Romney to chair the new foundation, but he reportedly refused to head two organizations doing the same thing and suggested they merge. They did so in September 1991, and Romney became one of the founding directors of the Points of Light Foundation & Volunteer Center National Network. In the early 1990s, Romney was also involved in helping to set up the Commission on National and Community Service, one of the predecessors to the later Corporation for National and Community Service (CNCS). He gave speeches emphasizing the vital role of people helping people, and in 1993 inspired the first national meeting of volunteer centers. For much of his final two decades, Romney had been out of the political eye, but he re-emerged to the general public when he campaigned for his son, Mitt Romney, during the younger Romney's bid to unseat Senator Ted Kennedy in the 1994 U.S. Senate election in Massachusetts. Romney had urged Mitt to enter the race and moved into his son's house for its duration, serving as an unofficial advisor. Romney was a vigorous surrogate for his son in public appearances and at fundraising events. When Kennedy's campaign sought to bring up the LDS Church's past policy on blacks, Romney interrupted Mitt's press conference and said loudly, "I think it is absolutely wrong to keep hammering on the religious issues. And what Ted is trying to do is bring it into the picture." The father counseled the son to be relaxed in appearance and to pay less attention to his political consultants and more to his own instincts, a change that the younger Romney made late in the ultimately unsuccessful campaign. That same year, Ronna Romney, Romney's ex-daughter-in-law (formerly married to G. Scott Romney), decided to seek the Republican nomination for the U.S. Senate from Michigan. While Mitt and G. Scott endorsed Ronna Romney, George Romney had endorsed her opponent and the eventual winner, Spencer Abraham, during the previous year when Ronna was considering a run but had not yet announced. A family spokesperson said that George Romney had endorsed Abraham before knowing Ronna Romney would run and could not go back on his word, although he did refrain from personally campaigning on Abraham's behalf. By January 1995, amid press criticism of the Points of Light Foundation engaging in ineffective, wasteful spending, Romney expressed concern that the organization had too high a budget. Active to the end, in July 1995, four days before his death, Romney proposed a presidential summit to encourage greater volunteerism and community service, and the night before his death he drove to a meeting of another volunteer organization. On July 26, 1995, Romney died of a heart attack at the age of 88 while he was doing his morning exercising on a treadmill at his home in Bloomfield Hills, Michigan; he was discovered by his wife Lenore but it was too late to save him. He was buried at the Fairview Cemetery in Brighton, Michigan. 
In addition to his wife and children, Romney was survived by 23 grandchildren and 33 great-grandchildren.

## Legacy

The Presidents' Summit For America's Future took place in Philadelphia in 1997, manifesting Romney's last volunteerism proposal, with the organization America's Promise coming out of it. For many years, the Points of Light Foundation (and its predecessor organization) has given out an annual Lenore and George W. Romney Citizen Volunteer Award (later retitled the George and Lenore Romney Citizen Volunteer Award); the inaugural award in 1987 went to George Romney himself. The Points of Light Foundation and the CNCS also give out a George W. Romney Volunteer Center Excellence Award (later the George W. Romney Excellence Award) at the annual National Conference on Community Volunteering and National Service (later the National Conference on Volunteering and Service). The George W. Romney Volunteer Center itself is sponsored by the United Way for Southeastern Michigan, and began during Romney's lifetime. The Automotive Hall of Fame of Dearborn, Michigan honored Romney with its Distinguished Service Citation award in 1956. He was then inducted into the hall of fame itself in 1995. Founded in 1998 with a grant from Romney's immediate family, the George W. Romney Institute of Public Management in the Marriott School of Management at Brigham Young University (BYU) honors the legacy left by Romney. Its mission is to develop people of high character who are committed to service, management, and leadership in the public sector and in non-profit organizations throughout the world. The building housing the main offices of the governor of Michigan in Lansing is known as the George W. Romney Building following a 1997 renaming. The Governor George Romney Lifetime Achievement Award is given annually by the State of Michigan, to recognize citizens who have demonstrated a commitment to community involvement and volunteer service throughout their lifetimes. In 2010, Adrian College in Michigan announced the opening of its George Romney Institute for Law and Public Policy. Its purpose is to explore the interdisciplinary nature of law and public policy and encourage practitioners, academics, and students to work together on issues in this realm.

## Authored books

- (Galley proofs produced, but prepress process ceased before actual publication)

## See also

- List of U.S. state governors born outside the United States
146,595
Olivier Messiaen
1,173,141,929
French composer (1908–1992)
[ "1908 births", "1992 deaths", "20th-century French composers", "20th-century French male musicians", "20th-century classical composers", "Academic staff of the Conservatoire de Paris", "Academic staff of the Schola Cantorum de Paris", "Academic staff of the École Normale de Musique de Paris", "Commanders of the Order of the Crown (Belgium)", "Composers for piano", "Composers for pipe organ", "Conservatoire de Paris alumni", "Deutsche Grammophon artists", "EMI Classics and Virgin Classics artists", "Ernst von Siemens Music Prize winners", "French Roman Catholics", "French classical composers", "French classical organists", "French composers of sacred music", "French male classical composers", "French male organists", "French military personnel of World War II", "French ornithologists", "Grand Cross of the Legion of Honour", "Kyoto laureates in Arts and Philosophy", "Male classical organists", "Members of the Académie des beaux-arts", "Modernist composers", "Musicians from Avignon", "Occitan musicians", "Olivier Messiaen", "Organ improvisers", "Pupils of Maurice Emmanuel", "Recipients of the Léonie Sonning Music Prize", "Royal Philharmonic Society Gold Medallists", "Wolf Prize in Arts laureates", "World War II prisoners of war held by Germany" ]
Olivier Eugène Prosper Charles Messiaen (UK: /ˈmɛsiæ̃/, US: /mɛˈsjæ̃, meɪˈsjæ̃, mɛˈsjɒ̃/; 10 December 1908 – 27 April 1992) was a French composer, organist, and ornithologist who was one of the major composers of the 20th century. His music is rhythmically complex; harmonically and melodically he employs a system he called modes of limited transposition, which he abstracted from the systems of material his early compositions and improvisations generated. He wrote music for chamber ensembles and orchestra, voice, solo organ, and piano, and experimented with the use of novel electronic instruments developed in Europe during his lifetime. Messiaen entered the Paris Conservatoire at age 11 and studied with Paul Dukas, Maurice Emmanuel, Charles-Marie Widor and Marcel Dupré, among others. He was appointed organist at the Église de la Sainte-Trinité, Paris, in 1931, a post he held for 61 years, until his death. He taught at the Schola Cantorum de Paris during the 1930s. After the fall of France in 1940, Messiaen was interned for nine months in the German prisoner of war camp Stalag VIII-A, where he composed his Quatuor pour la fin du temps (Quartet for the End of Time) for the four instruments available in the prison—piano, violin, cello and clarinet. The piece was first performed by Messiaen and fellow prisoners for an audience of inmates and prison guards. He was appointed professor of harmony soon after his release in 1941 and professor of composition in 1966 at the Paris Conservatoire, positions he held until he retired in 1978. His many distinguished pupils included Iannis Xenakis, George Benjamin, Alexander Goehr, Pierre Boulez, Tristan Murail, Karlheinz Stockhausen, György Kurtág, and Yvonne Loriod, who became his second wife. Messiaen perceived colours when he heard certain musical chords (a phenomenon known as chromesthesia); according to him, combinations of these colours were important in his compositional process. He travelled widely and wrote works inspired by diverse influences, including Japanese music, the landscape of Bryce Canyon in Utah, and the life of St. Francis of Assisi. For a short period Messiaen experimented with the parametrisation associated with "total serialism", in which field he is often cited as an innovator. His style absorbed many global musical influences, such as Indonesian gamelan (tuned percussion often features prominently in his orchestral works). He found birdsong fascinating, notating bird songs worldwide and incorporating birdsong transcriptions into his music. His innovative use of colour, his conception of the relationship between time and music, and his use of birdsong are among the features that make Messiaen's music distinctive.

## Biography

### Youth and studies

Olivier Eugène Prosper Charles Messiaen was born on 10 December 1908 at 20 Boulevard Sixte-Isnard in Avignon, France, into a literary family. He was the elder of two sons of Cécile Anne Marie Antoinette Sauvage, a poet, and Pierre Léon Joseph Messiaen, a scholar and teacher of English from a farm near Wervicq-Sud who translated William Shakespeare's plays into French. Messiaen's mother published a sequence of poems, L'âme en bourgeon (The Budding Soul), the last chapter of Tandis que la terre tourne (As the Earth Turns), which address her unborn son. Messiaen later said this sequence of poems influenced him deeply and cited it as prophetic of his future artistic career. His brother Alain André Prosper Messiaen, four years his junior, was also a poet. 
At the outbreak of World War I, Pierre enlisted and Cécile took their two boys to live with her brother in Grenoble. There Messiaen became fascinated with drama, reciting Shakespeare to his brother with the help of a homemade toy theatre with translucent backdrops made from old cellophane wrappers. At this time he also adopted the Roman Catholic faith. Later, Messiaen felt most at home in the Alps of the Dauphiné, where he had a house built south of Grenoble where he composed most of his music. He took piano lessons, having already taught himself to play. His interests included the recent music of French composers Claude Debussy and Maurice Ravel, and he asked for opera vocal scores for Christmas presents. He also saved to buy scores, including Edvard Grieg's Peer Gynt, whose "beautiful Norwegian melodic lines with the taste of folk song ... gave me a love of melody." Around this time he began to compose. In 1918 his father returned from the war and the family moved to Nantes. He continued music lessons; one of his teachers, Jehan de Gibon, gave him a score of Debussy's opera Pelléas et Mélisande, which Messiaen described as "a thunderbolt" and "probably the most decisive influence on me". The next year, Pierre Messiaen gained a teaching post at Sorbonne University in Paris. Messiaen entered the Paris Conservatoire in 1919, aged 11. Messiaen made excellent academic progress at the Conservatoire. In 1924, aged 15, he was awarded second prize in harmony, having been taught in that subject by professor Jean Gallon. In 1925, he won first prize in piano accompaniment, and in 1926 he gained first prize in fugue. After studying with Maurice Emmanuel, he was awarded second prize for the history of music in 1928. Emmanuel's example engendered an interest in ancient Greek rhythms and exotic modes. After showing improvisational skills on the piano, Messiaen studied organ with Marcel Dupré. He won first prize in organ playing and improvisation in 1929. After a year studying composition with Charles-Marie Widor, in autumn 1927 he entered the class of the newly appointed Paul Dukas. Messiaen's mother died of tuberculosis shortly before the class began. Despite his grief, he resumed his studies, and in 1930 Messiaen won first prize in composition. While a student he composed his first published works—his eight Préludes for piano (the earlier Le banquet céleste was published subsequently). These exhibit Messiaen's use of his modes of limited transposition and palindromic rhythms (Messiaen called these non-retrogradable rhythms). His official début came in 1931 with his orchestral suite Les offrandes oubliées. That year he first heard a gamelan group, sparking his interest in the use of tuned percussion.

### La Trinité, La jeune France, and Messiaen's war

In the autumn of 1927, Messiaen joined Dupré's organ course. Dupré later wrote that Messiaen, having never seen an organ console, sat quietly for an hour while Dupré explained and demonstrated the instrument, and then came back a week later to play Johann Sebastian Bach's Fantasia in C minor to an impressive standard. From 1929, Messiaen regularly deputised at the Église de la Sainte-Trinité for the ailing Charles Quef. The post became vacant in 1931 when Quef died, and Dupré, Charles Tournemire and Widor among others supported Messiaen's candidacy. His formal application included a letter of recommendation from Widor. The appointment was confirmed in 1931, and he remained the organist at the church for more than 60 years. 
He also assumed a post at the Schola Cantorum de Paris in the early 1930s. In 1932, he composed the Apparition de l'église éternelle for organ. He also married the violinist and composer Claire Delbos (daughter of Victor Delbos) that year. Their marriage inspired him both to compose works for her to play (Thème et variations for violin and piano in the year they were married) and to write pieces to celebrate their domestic happiness, including the song cycle Poèmes pour Mi in 1936, which he orchestrated in 1937. Mi was Messiaen's affectionate nickname for his wife. On 14 July 1937, the Messiaens' son, Pascal Emmanuel, was born; Messiaen celebrated the occasion by writing Chants de Terre et de Ciel. The marriage turned tragic when Delbos lost her memory after an operation toward the end of World War II. She spent the rest of her life in mental institutions. In 1936, along with André Jolivet, Daniel Lesur and Yves Baudrier, Messiaen formed the group La jeune France ("Young France"). Their manifesto implicitly attacked the frivolity predominant in contemporary Parisian music and rejected Jean Cocteau's 1918 Le coq et l'arlequin in favour of a "living music, having the impetus of sincerity, generosity and artistic conscientiousness". Messiaen's career soon departed from this polemical phase. In response to a commission for a piece to accompany light-and-water shows on the Seine during the Paris Exposition, in 1937 Messiaen demonstrated his interest in using the ondes Martenot, an electronic instrument, by composing Fêtes des belles eaux for an ensemble of six. He included a part for the instrument in several of his subsequent compositions. During this period he composed several multi-movement organ works. He arranged his orchestral suite L'ascension for organ, replacing the orchestral version's third movement with an entirely new movement, Transports de joie d'une âme devant la gloire du Christ qui est la sienne ("Ecstasies of a soul before the glory of Christ which is the soul's own"). He also wrote the extensive cycles La Nativité du Seigneur ("The Nativity of the Lord") and Les corps glorieux ("The glorious bodies"). At the outbreak of World War II, Messiaen was drafted into the French army. Due to poor eyesight, he was enlisted as a medical auxiliary rather than an active combatant. He was captured at Verdun, where he befriended clarinettist Henri Akoka; they were taken to Görlitz in May 1940, and imprisoned at Stalag VIII-A. He met a cellist (Étienne Pasquier) and a violinist (Jean le Boulaire [fr]) among his fellow prisoners. He wrote a trio for them, which he gradually incorporated into a more expansive new work, Quatuor pour la fin du temps ("Quartet for the End of Time"). With the help of a friendly German guard, Carl-Albert Brüll [de], he acquired manuscript paper and pencils. The work was first performed in January 1941 to an audience of prisoners and prison guards, with the composer playing a poorly maintained upright piano in freezing conditions and the trio playing third-hand unkempt instruments. The enforced introspection and reflection of camp life bore fruit in one of 20th-century classical music's acknowledged masterpieces.
The title's "end of time" alludes to the Apocalypse, and also to the way that Messiaen, through rhythm and harmony, used time in a manner completely different from his predecessors and contemporaries. The idea of a European Centre of Education and Culture "Meeting Point Music Messiaen" on the site of Stalag VIII-A, for children and youth, artists, musicians and everyone in the region emerged in December 2004, was developed with the involvement of Messiaen's widow as a joint project between the council districts in Germany and Poland, and was completed in 2014. ### Tristan and serialism Shortly after his release from Görlitz in May 1941, Messiaen was appointed a professor of harmony at the Paris Conservatoire, where he taught until retiring in 1978. He compiled his Technique de mon langage musical ("Technique of my musical language"), published in 1944, in which he quotes many examples from his music, particularly the Quartet. Although he was still only in his thirties, his students described him as an outstanding teacher. Among his early students were the composers Pierre Boulez and Karel Goeyvaerts. Other pupils included Karlheinz Stockhausen in 1952, Alexander Goehr in 1956–57, Tristan Murail in 1967–72 and George Benjamin during the late 1970s. The Greek composer Iannis Xenakis was referred to him in 1951; Messiaen urged Xenakis to take advantage of his background in mathematics and architecture in his music. In 1943, Messiaen wrote Visions de l'Amen ("Visions of the Amen") for two pianos for Yvonne Loriod and himself to perform. Shortly thereafter he composed the enormous solo piano cycle Vingt regards sur l'enfant-Jésus ("Twenty gazes upon the child Jesus") for her. Again for Loriod, he wrote Trois petites liturgies de la présence divine ("Three small liturgies of the Divine Presence") for female chorus and orchestra, which includes a difficult solo piano part. Two years after Visions de l'Amen, Messiaen composed the song cycle Harawi, the first of three works inspired by the legend of Tristan and Isolde. The second of these works about human (as opposed to divine) love was the result of a commission from Serge Koussevitzky. Messiaen said the commission did not specify the length of the work or the size of the orchestra. This was the ten-movement Turangalîla-Symphonie. It is not a conventional symphony, but rather an extended meditation on the joy of human union and love. It does not contain the sexual guilt inherent in Richard Wagner's Tristan und Isolde because Messiaen believed sexual love to be a divine gift. The third piece inspired by the Tristan myth was Cinq rechants for 12 unaccompanied singers, described by Messiaen as influenced by the alba of the troubadours. Messiaen visited the United States in 1949, where his music was conducted by Koussevitzky and Leopold Stokowski. His Turangalîla-Symphonie was first performed in the US the same year, conducted by Leonard Bernstein. Messiaen taught an analysis class at the Paris Conservatoire. In 1947 he taught (and performed with Loriod) for two weeks in Budapest. In 1949 he taught at Tanglewood and presented his work at the Darmstadt new music summer school.
While he did not employ the twelve-tone technique, after three years teaching analysis of twelve-tone scores, including works by Arnold Schoenberg, he experimented with ways of making scales of other elements (including duration, articulation and dynamics) analogous to the chromatic pitch scale. The result of these innovations was the "Mode de valeurs et d'intensités" for piano (from the Quatre études de rythme) which has been misleadingly described as the first work of "total serialism". It had a large influence on the earliest European serial composers, including Boulez and Stockhausen. During this period he also experimented with musique concrète, music for recorded sounds. ### Birdsong and the 1960s When in 1952 Messiaen was asked to provide a test piece for flautists at the Paris Conservatoire, he composed the piece Le Merle noir for flute and piano. While he had long been fascinated by birdsong, and birds had made appearances in several of his earlier works (for example La Nativité, Quatuor and Vingt regards), the flute piece was based entirely on the song of the blackbird. He took this development to a new level with his 1953 orchestral work Réveil des oiseaux—its material consists almost entirely of the birdsong one might hear between midnight and noon in the Jura. From this period onward, Messiaen incorporated birdsong into his compositions and composed several works for which birds provide both the title and subject matter (for example the collection of 13 piano pieces Catalogue d'oiseaux completed in 1958, and La fauvette des jardins of 1971). Paul Griffiths observed that Messiaen was a more conscientious ornithologist than any previous composer, and a more musical observer of birdsong than any previous ornithologist. Messiaen's first wife died in 1959 after a long illness, and in 1961 he married Loriod. He began to travel widely, to attend musical events and to seek out and transcribe the songs of more exotic birds in the wild. Despite his travels, he spoke only French. Loriod frequently assisted her husband's detailed studies of birdsong while walking with him, by making tape recordings for later reference. In 1962 he visited Japan, where Gagaku music and Noh theatre inspired the orchestral "Japanese sketches", Sept haïkaï, which contain stylised imitations of traditional Japanese instruments. Messiaen's music was by this time championed by, among others, Boulez, who programmed first performances at his Domaine musical concerts and the Donaueschingen festival. Works performed included Réveil des oiseaux, Chronochromie (commissioned for the 1960 festival), and Couleurs de la cité céleste. The latter piece was the result of a commission for a composition for three trombones and three xylophones; Messiaen added to this more brass, wind, percussion and piano, and specified a xylophone, xylorimba and marimba rather than three xylophones. Another work of this period, Et exspecto resurrectionem mortuorum, was commissioned as a commemoration of the dead of the two World Wars and was performed first semi-privately in the Sainte-Chapelle, then publicly in Chartres Cathedral with Charles de Gaulle in the audience. His reputation as a composer continued to grow and in 1959, he was appointed an Officier of the Légion d'honneur. In 1966, he was officially appointed professor of composition at the Paris Conservatoire, although he had in effect been teaching composition for years.
Further honours included election to the Institut de France in 1967 and the Académie des beaux-arts in 1968, the Erasmus Prize in 1971, the award of the Royal Philharmonic Society Gold Medal and the Ernst von Siemens Music Prize in 1975, the Sonning Award (Denmark's highest musical honour) in 1977, the Wolf Prize in Arts in 1982, and the presentation of the Croix de Commandeur of the Belgian Order of the Crown in 1980. ### Transfiguration, Canyons, St. Francis, and the Beyond Messiaen's next work was the large-scale La Transfiguration de Notre Seigneur Jésus-Christ. The composition occupied him from 1965 to 1969 and the musicians employed include a 100-voice ten-part choir, seven solo instruments and large orchestra. Its fourteen movements are a meditation on the story of Christ's Transfiguration. Shortly after its completion, Messiaen received a commission from Alice Tully for a work to celebrate the U.S. bicentennial. He arranged a visit to the U.S. in spring 1972, and was inspired by Bryce Canyon in Utah, where he observed the canyon's distinctive colours and birdsong. The 12-movement orchestral piece Des canyons aux étoiles... was the result, first performed in 1974 in New York. In 1971, he was asked to compose a piece for the Paris Opéra. Reluctant to take on such a major project, he was persuaded by French president Georges Pompidou to accept the commission and began work on Saint-François d'Assise in 1975 after two years of preparation. The composition was intensive (he also wrote his own libretto) and occupied him from 1975 to 1979; the orchestration was carried out from 1979 until 1983. Messiaen preferred to describe the final work as a "spectacle" rather than an opera. It was first performed in 1983. Some commentators at the time thought that the opera would be his valediction (at times Messiaen himself believed so), but he continued to compose. In 1984, he published a major collection of organ pieces, Livre du Saint Sacrement; other works include birdsong pieces for solo piano, and works for piano with orchestra. In the summer of 1978, Messiaen retired from teaching at the Paris Conservatoire, as French law required. He was promoted to the highest rank of the Légion d'honneur, the Grand-Croix, in 1987. An operation prevented his participation in the celebration of his 70th birthday in 1978, but in 1988 tributes for Messiaen's 80th included a complete performance in London's Royal Festival Hall of St. François, which the composer attended, and Erato's publication of a 17-CD collection of his music, including a disc of Messiaen in conversation with Claude Samuel. Although in considerable pain near the end of his life (requiring repeated surgery on his back), he was able to fulfil a commission from the New York Philharmonic Orchestra, Éclairs sur l'au-delà..., which premièred six months after his death. He died in the Beaujon Hospital in Clichy on 27 April 1992, aged 83. On going through his papers, Loriod discovered that, in the last months of his life, he had been composing a concerto for four musicians he felt particularly grateful to: herself, the cellist Mstislav Rostropovich, the oboist Heinz Holliger and the flautist Catherine Cantin (hence the title Concert à quatre). Four of the five intended movements were substantially complete; Loriod undertook the orchestration of the second half of the first movement and of the whole of the fourth with advice from George Benjamin.
It was premiered by the dedicatees in September 1994. ## Music Messiaen's music has been described as outside the western musical tradition, although growing out of that tradition and being influenced by it. Much of his output denies the western conventions of forward motion, development and diatonic harmonic resolution. This is partly due to the symmetries of his technique—for instance the modes of limited transposition do not admit the conventional cadences found in western classical music. Messiaen's youthful fascination with Shakespeare's depiction of human passion and magic has also been cited as an influence on his later works. Messiaen was not interested in depicting aspects of theology such as sin; rather he concentrated on the theology of joy, divine love and redemption. Messiaen continually evolved new composition techniques, always integrating them into his existing musical style; his final works still retain the use of modes of limited transposition. For many commentators this continual development made every major work from the Quatuor onwards a conscious summation of all that Messiaen had composed up to that time. But very few of these works lack new technical ideas—simple examples being the introduction of communicable language in the Méditations sur le Mystère de la Sainte Trinité, the invention of a new percussion instrument (the geophone) for Des canyons aux étoiles..., and the freedom from any synchronisation with the main pulse of individual parts in certain birdsong episodes of St. François d'Assise. As well as discovering new techniques, Messiaen studied and absorbed foreign music, including Ancient Greek rhythms, Hindu rhythms (he encountered Śārṅgadeva's list of 120 rhythmic units, the deçî-tâlas), Balinese and Javanese gamelan, birdsong, and Japanese music (see Example 1 for an instance of his use of ancient Greek and Hindu rhythms). While he was instrumental in the academic exploration of his techniques (he compiled two treatises; the second, in seven volumes, was substantially complete when he died and was published posthumously), and was a master of music analysis, he considered the development and study of techniques a means to intellectual, aesthetic, and emotional ends. Thus Messiaen maintained that a musical composition must be measured against three separate criteria: it must be interesting, beautiful to listen to, and touch the listener. Messiaen wrote a large body of music for the piano. Although a considerable pianist himself, he was undoubtedly assisted by Loriod's formidable technique and ability to convey complex rhythms and rhythmic combinations; in his piano writing from Visions de l'Amen onward he had her in mind. Messiaen said, "I am able to allow myself the greatest eccentricities because to her anything is possible." ### Western influences Developments in modern French music were a major influence on Messiaen, particularly the music of Debussy and his use of the whole-tone scale (which Messiaen called Mode 1 in his modes of limited transposition). Messiaen rarely used the whole-tone scale in his compositions because, he said, after Debussy and Dukas there was "nothing to add", but the modes he did use are similarly symmetrical. Messiaen had a great admiration for the music of Igor Stravinsky, particularly the use of rhythm in earlier works such as The Rite of Spring, and his use of orchestral colour. He was further influenced by the orchestral brilliance of Heitor Villa-Lobos, who lived in Paris in the 1920s and gave acclaimed concerts there.
Among composers for the keyboard, Messiaen singled out Jean-Philippe Rameau, Domenico Scarlatti, Frédéric Chopin, Debussy, and Isaac Albéniz. He loved the music of Modest Mussorgsky and incorporated varied modifications of what he called the "M-shaped" melodic motif from Mussorgsky's Boris Godunov, although he modified the final interval from a perfect fourth to a tritone (Example 3). Messiaen was further influenced by Surrealism, as seen in the titles of some of the piano Préludes (Un reflet dans le vent..., "A reflection in the wind") and in some of the imagery of his poetry (he published poems as prefaces to certain works, for example Les offrandes oubliées). ### Colour Colour lies at the heart of Messiaen's music. He believed that terms such as "tonal", "modal" and "serial" are misleading analytical conveniences. For him there were no modal, tonal or serial compositions, only music with or without colour. He said that Claudio Monteverdi, Mozart, Chopin, Richard Wagner, Mussorgsky, and Stravinsky all wrote strongly coloured music. In some of Messiaen's scores, he notated the colours in the music (notably in Couleurs de la cité céleste and Des canyons aux étoiles...)—the purpose being to aid the conductor in interpretation rather than to specify which colours the listener should experience. The importance of colour is linked to Messiaen's synaesthesia, which caused him to experience colours when he heard or imagined music (his form of synaesthesia, the most common kind, involved experiencing the associated colours non-visually rather than perceiving them visually). In his multi-volume music theory treatise Traité de rythme, de couleur, et d'ornithologie ("Treatise of Rhythm, Colour and Birdsong"), Messiaen wrote descriptions of the colours of certain chords. His descriptions range from the simple ("gold and brown") to the highly detailed ("blue-violet rocks, speckled with little grey cubes, cobalt blue, deep Prussian blue, highlighted by a bit of violet-purple, gold, red, ruby, and stars of mauve, black and white. Blue-violet is dominant"). When asked what Messiaen's main influence had been on composers, George Benjamin said, "I think the sheer ... colour has been so influential, ... rather than being a decorative element, [Messiaen showed that colour] could be a structural, a fundamental element, ... the fundamental material of the music itself." ### Symmetry Many of Messiaen's composition techniques made use of symmetries of time and pitch. #### Time From his earliest works, Messiaen used non-retrogradable (palindromic) rhythms (Example 2). He sometimes combined rhythms with harmonic sequences in such a way that, if the process were repeated indefinitely, the music would eventually run through all possible permutations and return to its starting point. For Messiaen, this represented the "charm of impossibilities" of these processes. He only ever presented a portion of any such process, as if allowing the informed listener a glimpse of something eternal. In the first movement of Quatuor pour la fin du temps the piano and cello together provide an early example. #### Pitch Messiaen used modes he called modes of limited transposition. They are distinguished as groups of notes that can only be transposed by a semitone a limited number of times. For example, the whole-tone scale (Messiaen's Mode 1) exists in only two transpositions: C–D–E–F♯–G♯–A♯ and D♭–E♭–F–G–A–B. Messiaen abstracted these modes from the harmony of his improvisations and early works.
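The "limited transposition" property and the palindromic rhythms described above are simple enough to verify computationally. The following Python sketch is an illustration added here, not drawn from Messiaen's writings or any cited source, and its function names are invented for the example: it counts the distinct transpositions of a pitch-class set and checks whether a series of durations is non-retrogradable.

```python
def distinct_transpositions(pitch_classes):
    """Count the distinct pitch-class sets obtained by transposing the input
    through all twelve semitones."""
    original = frozenset(pc % 12 for pc in pitch_classes)
    transposed = {frozenset((pc + t) % 12 for pc in original) for t in range(12)}
    return len(transposed)


def is_non_retrogradable(durations):
    """A rhythm is non-retrogradable if its durations read the same in both directions."""
    return list(durations) == list(reversed(durations))


if __name__ == "__main__":
    mode_1 = [0, 2, 4, 6, 8, 10]        # whole-tone scale: C D E F# G# A#
    mode_2 = [0, 1, 3, 4, 6, 7, 9, 10]  # octatonic scale: alternating semitone/tone steps
    major = [0, 2, 4, 5, 7, 9, 11]      # an ordinary major scale, for comparison

    print(distinct_transpositions(mode_1))  # 2  (only two whole-tone collections exist)
    print(distinct_transpositions(mode_2))  # 3  (three octatonic collections)
    print(distinct_transpositions(major))   # 12 (no such symmetry)

    print(is_non_retrogradable([3, 5, 8, 5, 3]))  # True: a palindromic duration series
    print(is_non_retrogradable([3, 5, 8, 5, 2]))  # False
```

Run as written, the sketch reports two transpositions for Mode 1, three for Mode 2 and twelve for a major scale, which is precisely the symmetry that distinguishes the modes of limited transposition from conventional diatonic scales.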
Music written using the modes avoids conventional diatonic harmonic progressions, since for example Messiaen's Mode 2 (identical to the octatonic scale used by other composers) permits precisely the dominant seventh chords whose tonic the mode does not contain. ### Time and rhythm As well as making use of non-retrogradable rhythm and the Hindu decî-tâlas, Messiaen also composed with "additive" rhythms. This involves lengthening individual notes slightly or interpolating a short note into an otherwise regular rhythm (see Example 3), or shortening or lengthening every note of a rhythm by the same duration (adding a semiquaver to every note in a rhythm on its repeat, for example). This led Messiaen to use rhythmic cells that irregularly alternate between two and three units, a process that also occurs in Stravinsky's The Rite of Spring, which Messiaen admired. A factor that contributes to Messiaen's suspension of the conventional perception of time in his music is the extremely slow tempos he often specifies (the fifth movement Louange à l'Éternité de Jésus of Quatuor is actually given the tempo marking infiniment lent). Messiaen also used the concept of "chromatic durations", for example in his Soixante-quatre durées from Livre d'orgue, which is built from, in Messiaen's words, "64 chromatic durations from 1 to 64 demisemiquavers [thirty-second notes]—invested in groups of 4, from the ends to the centre, forwards and backwards alternately—treated as a retrograde canon. The whole peopled with birdsong." ### Harmony In addition to making harmonic use of the modes of limited transposition, he cited the harmonic series as a physical phenomenon that provides chords with a context he felt was missing in purely serial music. An example of Messiaen's use of this phenomenon, which he called "resonance", is the last two bars of his first piano Prélude, La colombe ("The dove"): the chord is built from harmonics of the fundamental base note E. Related to this use of resonance, Messiaen also composed music in which the lowest, or fundamental, note is combined with higher notes or chords played much more quietly. These higher notes, far from being perceived as conventional harmony, function as harmonics that alter the timbre of the fundamental note like mixture stops on a pipe organ. An example is the song of the golden oriole in Le loriot of the Catalogue d'oiseaux for solo piano (Example 4). In his use of conventional diatonic chords, Messiaen often transcended their historically mundane connotations (for example, his frequent use of the added sixth chord as a resolution). ### Birdsong Birdsong fascinated Messiaen from an early age, and in this he found encouragement from Dukas, who reportedly urged his pupils to "listen to the birds". Messiaen included stylised birdsong in some of his early compositions (including Abîme des oiseaux from the Quatuor pour la fin du temps), integrating it into his sound-world by techniques like the modes of limited transposition and chord colouration. His evocations of birdsong became increasingly sophisticated, and with Le réveil des oiseaux this process reached maturity, the whole piece being built from birdsong: in effect it is a dawn chorus for orchestra. The same can be said for "Epode", the five-minute sixth movement of Chronochromie, which is scored for 18 violins, each playing a different birdsong. Messiaen notated the bird species with the music in the score (examples 1 and 4).
The pieces are not simple transcriptions; even the works with purely bird-inspired titles, such as Catalogue d'oiseaux and Fauvette des jardins, are tone poems evoking the landscape, its colours and atmosphere. ### Serialism For a few compositions, Messiaen created scales for duration, attack and timbre analogous to the chromatic pitch scale. He expressed annoyance at the historical importance given to one of these works, Mode de valeurs et d'intensités, by musicologists intent on crediting him with the invention of "total serialism". Messiaen later introduced what he called a "communicable language", a "musical alphabet" to encode sentences. He first used this technique in his Méditations sur le Mystère de la Sainte Trinité for organ, in which the "alphabet" includes motifs for the concepts to have, to be and God, while the sentences encoded feature sections from the writings of St. Thomas Aquinas. ## See also - Olivier Messiaen Competition
7,586,083
Domenico Selvo
1,172,609,530
Doge of Venice from 1071 to 1084
[ "1087 deaths", "11th-century Doges of Venice", "Burials at St Mark's Basilica", "Year of birth unknown" ]
Domenico Selvo (died 1087) was the 31st Doge of Venice, serving from 1071 to 1084. During his reign as Doge, his domestic policies, the alliances that he forged, and the battles that the Venetian military won and lost laid the foundations for much of the subsequent foreign and domestic policy of the Republic of Venice. He avoided confrontations with the Byzantine Empire, the Holy Roman Empire, and the Roman Catholic Church at a time in European history when conflict threatened to upset the balance of power. At the same time, he forged new agreements with the major nations that would set up a long period of prosperity for the Republic of Venice. Through his military alliance with the Byzantine Empire, Emperor Alexios I Komnenos awarded Venice economic favors with the declaration of a golden bull that would allow for the development of the republic's international trade over the next few centuries. Within the city itself, he supervised a longer period of the construction of the modern St Mark's Basilica than any other Doge. The basilica's complex architecture and expensive decorations stand as a testament to the prosperity of Venetian traders during this period. The essentially democratic way in which he not only was elected but also removed from power was part of an important transition of Venetian political philosophy. The overthrow of his rule in 1084 was one of many forced abdications in the early history of the republic that further blurred the lines between the powers of the Doge, the common electorate, and the nobility. ## Background Beginning with the reign of Pietro II Candiano in 932, Venice saw a string of inept leaders such as Pietro III Candiano, Pietro IV Candiano, and Tribuno Memmo. The reputed arrogance and ambition of these Doges caused the deterioration of the relationship with the Holy Roman Empire in the west, the stagnancy of the relationship with the Byzantine Empire in the east, and discord at home in the Republic. However, in 991, Pietro II Orseolo became the Doge and spent his reign pushing the boundaries of the Republic further east down the western coast of the Balkan Peninsula with his conquests in Dalmatia in 1000. This strengthened the commercial bonds with the empires of the east, Sicily, Northern Africa, and the Holy Roman Empire, and put an end to the infighting among the citizens of Venice. Pietro II's negotiations with Byzantine Emperor Basil II to decrease tariffs on Venetian-produced goods helped foster a new age of prosperity in the Republic as Venetian merchants could undercut the competition in the international markets of the Byzantine Empire. Similarly, Pietro II had success developing a new relationship with Holy Roman Emperor Otto III, who displayed his friendship to him by restoring previously seized lands to Venice, opening up routes of free trade between the two states, and exempting all Venetians from taxes in the Holy Roman Empire. As the power and reputation of Pietro II grew, the Venetian people began to wonder if he was secretly planning to establish a hereditary monarchy. Their fears were confirmed when his son, Otto Orseolo (named after Otto III), assumed the title of Doge upon Pietro II's death in 1009, thereby becoming the youngest Doge in Venetian history at the age of 16. Scandal marked much of Otto's reign as he showed a clear inclination toward nepotism by elevating several relatives to positions of power. 
In 1026, he was deposed by his enemies and exiled to Constantinople, but his successor, Pietro Barbolano, had such difficulty in attempting to unite the city that it seemed infighting would once again seize Venice. In 1032, Barbolano himself was deposed by those who wished to restore power to Otto Orseolo, but the former Doge lay dying in Constantinople and was unable to return from exile. Domenico Orseolo, a younger brother of Otto and a rather unpopular figure in Venice, attempted to seize the throne without waiting for the formality of an election, but as soon as he tried this, his many enemies, including those who pushed for the reinstatement of Otto, grew outraged that an Orseolo would assume the throne simply because he was the son of Pietro II. The power of the Doge was severely checked, and Domenico Flabanico, a successful merchant, was called by the people to the position of Doge. During his 11-year reign Flabanico enacted several key reforms that would restrict the power of future Doges, including a law forbidding the election of a son of a Doge. Doge Domenico Contarini (1043–1071) had a relatively uneventful reign, healing the rift between the Doge and his subjects and regaining territory that had been lost in the east to the Kingdom of Croatia in the years following the deposition of Otto Orseolo. However, one fact remained: based on their actions in the first half of the 11th century, the majority of the people of Venice were clearly not in favor of having a royal hereditary class. This reality, coupled with the fresh memories of power-hungry Doges, set the stage for Domenico Selvo. ## Biography ### Life before Dogeship What little is known of Selvo's past is based mostly on accounts of his reputation when he entered his Dogeship. Details of his family origins and even the year of his birth are unknown, but it can be assumed that he was a Venetian noble because, with the rare exception of Domenico Flabanico, only members of this class were elected to the position of Doge at this point in the Republic's history. Selvo supposedly belonged to a family in the patrician class from the sestiere of Dorsoduro who were allegedly of ancient Roman origin, possibly from one of the tribunes. He had also apparently been an ambassador to Holy Roman Emperor Henry III and he was certainly ducal counselor to Domenico Contarini prior to his election as Doge. Being connected to the relatively popular Doge might have been one of the causes for his own apparent initial popularity. ### Election as Doge Selvo is notable for being the first Doge in the history of Venice whose election was recorded by an eyewitness, a parish priest of the church of San Michele Archangelo by the name of Domenico Tino. The account gives historians a valuable glimpse of the power of the popular will of the Venetian people. Over the previous two centuries, the rule of quasi-tyrannies had plagued the popular belief that Venetians held democratic control over their leaders. The events of Selvo's election occurred in the spring of 1071, when the nearly thirty-year reign of Doge Domenico Contarini came to an end upon his death. According to Tino's account, on the day of the election, Selvo was attending mass for the funeral of the late Doge at the new monastery church of San Nicolò built under Domenico Contarini on Lido, an island in the Venetian Lagoon. 
The location was ideal for the funeral of a Doge not only because St Mark's Basilica was under construction at the time, but the new church was also spacious enough to hold a fairly large number of people. The location also proved ideal for the election of a new Doge for the very same reasons. After the funeral, a large crowd assembled in their gondolas and armed galleys. Domenico Tino says "an innumerable multitude of people, virtually all Venice" was there to voice their opinion on the selection of a new Doge. After the bishop of Venice asked "who would be worthy of his nation," the crowds chanted, "Domenicum Silvium volumus et laudamus" (We want Domenico Selvo and we praise him). The people, according to the account, had clearly spoken, and with these cries, the election was over. A group of more distinguished citizens then lifted the Doge-elect above the roaring crowd, and he was transported as such back to the city. Barefoot, in accordance with tradition, Selvo was led into St Mark's Basilica where, amidst the construction materials and scaffolding, he prayed to God, received his staff of office, heard the oaths of fidelity from his subjects, and was legally sworn in as the 31st Doge of Venice. ### Peace and prosperity (1071–1080) During the first decade of his rule, Selvo's policies were largely a continuation of those of Domenico Contarini. There were few armed conflicts at home or abroad, and the Doge enjoyed a period of popularity due to the prosperous economic conditions. The relations with the Holy Roman Empire were gradually strengthened to a level unknown since the reign of the last Orseolo through relatively free trade and the good relationship that Selvo maintained with Emperor Henry IV. The importance of the economic alliance between the two nations became increasingly crucial when the historically shared power of the Holy Roman Emperor and the Pope was challenged by the Investiture Controversy between Henry IV and Pope Gregory VII. Selvo had to walk an extremely tight line of competing priorities. On the one hand, he wanted to maintain the trade agreement Venice had with the lands occupied by Henry IV, but on the other hand, Venetians were religiously loyal to Roman Catholicism as opposed to the Eastern Orthodoxy. At the height of the controversy, Pope Gregory VII privately threatened to excommunicate Selvo and put an interdict on the Venetian Republic, but Selvo was able to narrowly escape this by diplomatically asserting Venice's religious power as the reputed holders of the remains of St Mark. In the east, Selvo not only maintained good trade relations with the Byzantine Empire, but also married into their royal family to consolidate the alliance that had existed for many years between the two nations. In 1075, Selvo married Theodora Doukas, daughter of Constantine X and sister of the reigning emperor, Michael VII. Though Venetians, especially the nobles, were wary of the pageantry that accompanied the marriage and the royal bride, the strengthened alliance meant even greater mobility for Venetian merchants in the east. Though the popularity of the new dogaressa was not great, Selvo was the hero of the merchant class that had had even greater political sway since the depositions of the Orseoli. ### Victory (1081–1083) Despite the relative peace of the early years of Selvo's reign, the forces that would eventually lead to his deposition had already swung into action. 
In southern Italy, the Duke of Apulia and Calabria, Robert Guiscard, had spent the majority of his reign consolidating Norman power along the heel and toe of lo Stivale by expelling the Byzantine armies. Guiscard was pushing north toward the Papal States (to which the Duchy of Apulia and Calabria was allied), and was threatening Byzantine control of cities along the Ionian and Adriatic seas. In May 1081, Guiscard led his army and navy across the sea to lay siege to the port city of Durazzo, as it was one end of the famous Via Egnatia, a direct route to the Byzantine capital of Constantinople. Alexios I Komnenos, the newly crowned Byzantine Emperor, dispatched an urgent message to Selvo asking for the mobilization of the Venetian fleet in defense of Durazzo in return for great rewards. The Doge wasted no time in setting sail for the besieged city at the head of his fleet of 14 warships and 45 other vessels. Selvo was motivated not only by his familial ties and the promise of reward, but also by the realization that Norman control over the Strait of Otranto would be just as great a threat to Venetian power in the region as it would be to their ally in the east. When Selvo approached the city, Guiscard's ships had already anchored in the harbor at Durazzo. Though the battle was fierce, superior tactics by the skilled Venetian fleet overpowered the inexperienced Normans who were mostly used to land battles. The battered fleet led by Guiscard retreated into the harbor after losing many ships. Victorious at sea, Selvo left the fleet under the command of his son and returned to Venice a hero. Because of the help given to the Byzantine Empire, in 1082 the Republic of Venice was awarded a Golden Bull: a decree by Emperor Alexios I Komnenos granting Venice many privileges, including a tax exemption for Venetian merchants, that would be crucial for the future economic and political expansion of Venice in the eastern Mediterranean. The defeat off the coast of Durazzo, though devastating to Guiscard's fleet, had inflicted little damage on his army as the majority of it had disembarked before the battle in preparation for the siege of Durazzo. In the coming months, Guiscard would regroup his forces and defeat a large Byzantine army led by Alexios I himself. In 1082, Guiscard took the city of Durazzo, and as the Venetian sailors were forced out of the city and their ships vacated the harbor of Durazzo, the first victory by Venice against the Norman fleet appeared to be just a temporary setback for the Normans. Due to the new trade privileges and the fact that virtually no damage was inflicted on the Venetians during this siege, Selvo remained very popular in Venice. Meanwhile, Guiscard advanced rapidly across the Balkan Peninsula, but his march was halted by an urgent dispatch and a call for help from his greatest ally, Pope Gregory VII. Guiscard responded by returning to Italy and marching on Rome to temporarily expel Henry IV, but in the process, he lost almost all the territories he had gained in the Balkans. Knowing that Guiscard was gone, in 1083, Selvo sent the Venetian fleet to recapture both Durazzo and the island of Corfu to the south. ### Defeat and deposition (1084) In 1084, Guiscard returned to the Balkans and planned a new offensive against Corfu, where a combined Greek-Venetian fleet, commanded by Selvo, awaited his arrival. When the Normans approached the island, the combined fleets dealt Guiscard an even greater defeat than he had received in the naval battle at Durazzo.
Guiscard ordered another attack three days later, but the results were still more disastrous for the Normans. Selvo was completely convinced of his fleet's victory and sent all damaged ships north to Venice for repairs, to free them for other uses, and to report their victory. The Doge then retired with the remaining ships to the Albanian coast to await the departure of the Normans. Counting on the Doge's belief that a third attack was unlikely, and calculating that the depleted Venetian fleet gave him greater odds of victory, Guiscard summoned every floating vessel he could find and led the Normans in a surprise attack. His strategy, though perhaps risky, was ultimately well-calculated as it caused mass confusion among the Venetians, who were overwhelmed on all flanks, while the Greeks fled what they assumed to be a losing battle. Selvo barely managed to retreat with the remainder of his fleet, but not before 3,000 Venetians died and another 2,500 were taken prisoner. The Venetians also lost 9 great galleys, the largest and most heavily armed ships in their war fleet. When the battered fleet returned to Venice, news of the defeat spread throughout the city to mixed reactions. Though some were willing to forgive the defeat considering the circumstances, many others needed someone to blame for the loss that was considerable not only in human and material terms, but also symbolically. The people of Venice had been humiliated by an upstart nation with practically no naval experience. Though Guiscard would die the next year and the Norman threat would quickly disappear, a scapegoat was needed at that moment. A faction of influential Venetians, possibly led by Vitale Faliero based on later writings, led a popular revolt to depose Selvo, and in December 1084 they succeeded. Selvo apparently did not make a great effort to defend himself and was sent off to a monastery. He died three years later in 1087, and was buried in the loggiato of St. Mark's Basilica. ## Legacy After Selvo was deposed, it took several years for Venice to recover from the defeat at Corfu and for the Venetians to fully realize the immediate impact of his actions as Doge. In return for its military aid to the Byzantine Empire, Venice was awarded a Golden Bull by Emperor Alexios I that would provide the Venetians a great economic and strategic advantage throughout the eastern empire for centuries. According to the terms of the decree, annual grants were awarded to all the churches in Venice (including a special gift to the coffers of St Mark's), the Republic was granted whole sections of the Golden Horn in Constantinople, and Venetian merchants were given a full exemption from all taxes and duties throughout the territories of the Byzantine Empire. Not only did this aid the rapid economic growth of Venice in the next few centuries by giving Venetian goods a significant price advantage over other foreign goods, but it initiated a long period of artistic, cultural, and military relationships between Venice and Byzantium. This combination of eastern and western cultural influences made Venice a symbolic gateway between the east and the west in Southern Europe. At the beginning of his rule, Selvo took over responsibility for the third construction of St. Mark's Basilica. This final and most famous version of the church, whose construction was begun by Domenico Contarini and finished by Vitale Faliero in 1094, remains an important symbol of the long periods of medieval Venetian wealth and power.
The church is also a monument to the great Byzantine influence on Venetian art and culture throughout its history, but particularly in the 11th century. Though Selvo did not oversee the beginning or completion of St Mark's Basilica, his rule covered a longer period of its construction than the other two Doges who oversaw the project. The Doge decreed that all Venetian merchants returning from the east had to bring back marbles or fine carvings to decorate St Mark's. The first mosaics were started in the basilica under the supervision of Selvo. By gaining power through a vote of confidence from the people and then willingly surrendering power, Selvo, like many other Doges who underwent similar transitions, left a long-term impact on the succession process that would eventually become a model for peaceful, anti-nepotistic transitions of power in a classical republic. Although his deposition did not immediately change the system, it was one of many important changes of power in a society that was in the process of moving away from a monarchy and toward a government led by an elected official. Following the battles at Corfu, Selvo was seen by many as inept and incapable of handling the duties that a Doge must perform. His apparent squandering of nearly the entire fleet coupled with a decade-long distrust for his royal wife caused Selvo to become unpopular in Venice. By responding to the will of the people, Selvo helped shape a society that would eventually create a complicated system to check the power of its most influential members, create cooperative governmental branches that checked each other's power, and fuse the nation into a classical republic.
38,427,349
Kedok Ketawa
1,164,691,210
1940 action film
[ "1940 films", "1940 lost films", "1940s action films", "Indonesian action films", "Indonesian black-and-white films", "Lost Indonesian films", "Lost action films", "Union Films films" ]
Kedok Ketawa (Indonesian for The Laughing Mask, also known by the Dutch title Het Lachende Masker) is a 1940 action film from the Dutch East Indies (now Indonesia). Union Films' first production, it was directed by Jo An Djan. Starring Basoeki Resobowo, Fatimah, and Oedjang, the film follows a young couple who fight off criminals with the help of a masked man. Advertised as an "Indonesian cocktail of violent actions ... and sweet romance", Kedok Ketawa received positive reviews, particularly for its cinematography. Following the success of the film, Union produced another six works before being shut down in early 1942 during the Japanese occupation. The film, screened until at least August 1944, may be lost. ## Plot In Cibodas, Banten, a young woman named Minarsih (Fatimah) is rescued from four thugs by the painter Basuki (Basoeki Resobowo). They fall in love and begin planning their life together. However, a rich man interested in taking Minarsih to be his wife sends a gang to kidnap her. Basuki is unable to repel them, but is soon joined by a masked vigilante known only as "The Laughing Mask" (Oedjang), who has almost supernatural fighting abilities. After two battles with the gang, Basuki and The Laughing Mask are victorious. Basuki and Minarsih can live together in peace. ## Production Kedok Ketawa was the first film produced by Union Films, one of four new production houses established after the success of Albert Balink's Terang Boelan revived the ailing motion picture industry of the Dutch East Indies. Union was headquartered in Prinsenlaan, Batavia (now Mangga Besar, Jakarta) and funded by the ethnic Chinese businessman Ang Hock Liem, although Tjoa Ma Tjoen was in charge of day-to-day operations. The film was shot on location in Cibodas, and featured fighting, comedy, and singing. The movie was directed by Jo An Djan and starred Oedjang, Fatimah, and Basoeki Resobowo. Other members of the cast included S Poniman and Eddy Kock. Oedjang had been a stage actor before appearing in the film, while Fatimah and Basoeki were nobles with a formal education. The Indonesian film historian Misbach Yusa Biran writes that this is evidence the picture was targeted at intellectual audiences, a manifestation of Union's stated goal of "improv[ing] the quality of Indonesian art". Following the success of Terang Boelan (1937; based on The Jungle Princess), the domestic movie-making industry began to model its productions after Hollywood works, as this was expected to ensure financial success. The Indonesian film scholars Ekky Imanjaya and Said Salim write that Kedok Ketawa was influenced by Bram Stoker's 1897 novel Dracula through its Hollywood adaptations. Neither writer gives comparisons to illustrate this influence. Kedok Ketawa was not the first contemporary film featuring a masked hero. Tan's Film had released Gagak Item (The Black Crow), with Rd Mochtar as the masked Black Crow, in 1939, and later productions, including Java Industrial Film's Srigala Item (The Black Wolf; 1941), continued the trend. As was common for contemporary productions, the soundtrack for Kedok Ketawa – performed by Poniman – consisted of kroncong songs. ## Release and reception Kedok Ketawa was released in Batavia in July 1940, with a press screening on 20 July. By September it was being shown in Surabaya. In some newspaper advertisements, such as in Pemandangan, it was referred to as Pendekar dari Preanger (Warrior from Preanger), while in others it was advertised with the Dutch title Het Lachende Masker.
It was marketed as an "Indonesian cocktail of violent actions ... and sweet romance"[^1] and rated for all ages. The critic and screenwriter Saeroen, writing for Pemandangan, praised Kedok Ketawa, especially its cinematography and the beauty of its scenery; he compared the film to imported Hollywood films. An anonymous review in Bataviaasch Nieuwsblad found that the film was a mix of native and European sensibilities and lauded its cinematography. According to the review, the film surpassed expectations, but it was evident that this was a first production. Another review, in Soerabaijasch Handelsblad, considered the film among the best local productions, emphasising the quality of its cinematography and acting. ## Legacy Soon after the success of Kedok Ketawa, Saeroen joined Union Films and wrote four films for the company. These were not directed by Jo An Djan, who left Union for the competitor Populair's Film, but by the newly hired R Hu and Rd Ariffien. Union Film ultimately produced a total of seven films in 1940 and 1941 before being closed following the Japanese invasion in early 1942. Of the film's main cast, only Fatimah and Oedjang are recorded as continuing their acting career, both appearing in several further Union productions. However, in the 1950s Resobowo continued his career behind the screen, serving as art director of such films as Darah dan Doa (The Long March; 1950). Kedok Ketawa was screened as late as August 1944, but may be a lost film. Films were then shot on flammable nitrate film, and after a fire destroyed much of Produksi Film Negara's warehouse in 1952, old films shot on nitrate were deliberately destroyed. While the American visual anthropologist Karl G. Heider suggests that all Indonesian films from before 1950 are lost, J.B. Kristanto's Katalog Film Indonesia records several as having survived at Sinematek Indonesia's archives, and Biran writes that some Japanese propaganda films have survived at the Netherlands Government Information Service. ## Explanatory notes [^1]: Original: "... een indonesische cocktail van heftige acties ... zoete romantiek".
161,087
Timor Leste Defence Force
1,171,510,517
Combined military forces of East Timor
[ "2001 establishments in East Timor", "Military of East Timor", "Recipients of the Order of Timor-Leste" ]
The Timor Leste Defence Force (Tetum: Forcas Defesa Timor Lorosae, Portuguese: Forças de Defesa de Timor Leste or Falintil-FDTL, often F-FDTL) is the military of East Timor. The F-FDTL was established in February 2001 and comprises two infantry battalions, small naval and air components and several supporting units. The F-FDTL's primary role is to protect East Timor from external threats. It also has an internal security role, which overlaps with that of the Polícia Nacional de Timor-Leste (PNTL). This overlap has led to tensions between the services, which have been exacerbated by poor morale and lack of discipline within the F-FDTL. The F-FDTL's problems came to a head in 2006 when almost half the force was dismissed following protests over discrimination and poor conditions. The dismissal contributed to a general collapse of both the F-FDTL and PNTL in May and forced the government to request foreign peacekeepers to restore security. The F-FDTL is currently being rebuilt with foreign assistance and has drawn up a long-term force development plan. ## Role The constitution of East Timor assigns the F-FDTL responsibility for protecting East Timor against external attack. The constitution states that the F-FDTL "shall guarantee national independence, territorial integrity and the freedom and security of the populations against any aggression or external threat, in respect for the constitutional order." The constitution also states that the F-FDTL "shall be non-partisan and shall owe obedience to the competent organs of sovereignty in accordance with the Constitution and the laws, and shall not intervene in political matters." The National Police of East Timor (or PNTL) and other civilian security forces are assigned responsibility for internal security. In practice the responsibilities of the F-FDTL and PNTL were not clearly delineated, and this led to conflict between the two organisations. The East Timorese Government has broadened the F-FDTL's role over time. As what have been designated "new missions", the F-FDTL has been given responsibility for crisis management, supporting the suppression of civil disorder, responding to humanitarian crises and facilitating co-operation between different parts of the government. ## History ### Pre-independence The F-FDTL was formed from the national liberation movement guerrilla army known as FALINTIL (Portuguese acronym for Forças Armadas de Libertação de Timor-Leste or Armed Forces for the Liberation of East Timor). During the period before 1999 some East Timorese leaders, including the current President José Ramos-Horta, proposed that a future East Timorese state would not have a military. The widespread violence and destruction that followed the independence referendum in 1999 and the need to provide employment to FALINTIL veterans led to a change in policy. The inadequate number of police officers who were deployed to East Timor as part of the United Nations-led peacekeeping force contributed to high rates of crime. The presence of 1,300 armed and increasingly dissatisfied FALINTIL personnel in cantonments during late 1999 and most of 2000 also posed a threat to security. Following the end of Indonesian rule, FALINTIL proposed the establishment of a large military of about 5,000 personnel. In mid-2000 the United Nations Transitional Administration in East Timor (UNTAET) contracted a team from King's College London to conduct a study of East Timor's security force options and options to demobilise the former guerrilla forces. 
The team's report identified three options for an East Timorese military. Option 1 was based on FALINTIL's preference for a relatively large and heavily armed military of 3,000–5,000 personnel, option 2 was a force of 1,500 regulars and 1,500 conscripts and option 3 was for a force of 1,500 regulars and 1,500 volunteer reservists. The study team recommended option 3 as being best suited to East Timor's security needs and economic situation. This recommendation was accepted by UNTAET in September 2000 and formed the basis of East Timor's defence planning. The plan was also accepted by all the countries that had contributed peacekeeping forces to East Timor. The King's College report was criticised by Greg Sheridan, foreign editor of The Australian, on the grounds that it led East Timor to establish a large police force and a large Army when its security needs might have been better met by a single smaller paramilitary force. While East Timor's decision to form a military has been criticised by some commentators, the East Timorese government has consistently believed that the force is necessary for political and security reasons. Critics of the F-FDTL's establishment argue that as East Timor does not face any external threats the government's limited resources would be better spent on strengthening the PNTL. While East Timor's political leadership recognised that the country does not currently face an external threat, they believed that it is necessary to maintain a military capacity to deter future aggression. The establishment of the F-FDTL was also seen as an effective means of integrating FALINTIL into an independent East Timor. ### Formation of the F-FDTL An Office for Defence Force Development staffed mainly by foreign military officers was established to oversee the process of forming East Timor's armed forces and demobilising the former guerrillas. The Office delegated responsibility for recruiting personnel to FALINTIL's leaders. FALINTIL officially became F-FDTL on 1 February 2001. The first 650 members of the F-FDTL were selected from 1,736 former FALINTIL applicants and began training on 29 March. The FDTL's 1st Battalion was established on 29 June 2001 and reached full strength on 1 December. Most members of the battalion were from East Timor's eastern provinces. The 2nd Battalion was established in 2002 from a cadre of the 1st Battalion and was manned mainly by new personnel under the age of 21 who had not participated in the independence struggle. Due to the force's prestige and relatively high pay, there were 7,000 applications for the first 267 positions in the battalion. The F-FDTL's small naval component was established in December 2001. The Australian UNTAET contingent provided most of the F-FDTL's training, and the United States equipped the force. Some of the problems that have affected the F-FDTL throughout its existence were caused by the process used to establish the force. A key flaw in this process was that FALINTIL's high command was allowed to select candidates for the military from members of FALINTIL without external oversight. As a result, the selection was conducted, to a large degree, on the basis of applicants' political allegiance. This led to many FALINTIL veterans feeling that they had been unfairly excluded from the military and reduced the force's public standing. 
The decision to recruit young people who had not served in FALINTIL in the subsequent rounds of recruitment led to further tensions within the F-FDTL, due both to the often large age gap between the veterans and the new recruits and to the fact that, while the senior officers tended to be from the east of the country, most of the junior officers and infantry were from the west. Furthermore, UNTAET failed to establish adequate foundations for the East Timorese security sector, such as legislative and planning documents, administrative support arrangements and mechanisms for the democratic control of the military. These omissions remained uncorrected after East Timor achieved independence on 20 May 2002. The F-FDTL gradually assumed responsibility for East Timor's security from the UN peacekeeping force. The Lautém District was the first area to pass to the F-FDTL in July 2002. After further training the F-FDTL took over responsibility for the entire country's external security on 20 May 2004, although some foreign peacekeepers remained in East Timor until mid-2005. The F-FDTL conducted its first operation in January 2003 when an army unit was called in to quell criminal activity caused by West Timorese militia gangs in the Ermera district. While the F-FDTL operated in a "relatively disciplined and orderly fashion" during this operation, it illegally arrested nearly 100 people who were released 10 days later without being charged. The F-FDTL has suffered from serious morale and disciplinary problems since its establishment. These problems have been driven by uncertainty over the F-FDTL's role, poor conditions of service due to limited resources, tensions arising from FALINTIL's transition from a guerrilla organisation to a regular military, and political and regional rivalries. The F-FDTL's morale and disciplinary problems have resulted in large numbers of soldiers being disciplined or dismissed. The East Timorese Government was aware of these problems before the 2006 crisis but did not rectify the factors that were contributing to low morale. Tensions between the F-FDTL and PNTL have also reduced the effectiveness of East Timor's security services. In 2003, the East Timorese Government established three new paramilitary police forces equipped with modern military-grade weapons. The formation of these units led to dissatisfaction with the Government among some members of the F-FDTL. During 2003 and 2004, members of the police and F-FDTL clashed on a number of occasions, and groups of soldiers attacked police stations in September 2003 and December 2004. These tensions were caused by the overlapping roles of the two security services, differences of opinion between members of East Timor's leadership and the fact that many members of the PNTL had served with the Indonesian National Police prior to East Timor's independence while the F-FDTL was based around FALINTIL. ### 2006 crisis The tensions within the F-FDTL came to a head in 2006. In January, 159 soldiers from most units in the F-FDTL complained in a petition to then President Xanana Gusmão that soldiers from the east of the country received better treatment than westerners. The 'petitioners' received only a minimal response and left their barracks three weeks later, leaving their weapons behind. They were joined by hundreds of other soldiers, and on 16 March the F-FDTL's commander, Brigadier General Taur Matan Ruak, dismissed 594 soldiers, nearly half of the force. 
The dismissed soldiers were not limited to the petitioners; they also included about 200 officers and other ranks who had been chronically absent without leave in the months and years before March 2006. The crisis escalated into violence in late April. On 24 April, the petitioners and some of their supporters held a four-day demonstration outside the Government Palace in Dili calling for the establishment of an independent commission to address their grievances. Violence broke out on 28 April when some of the petitioners and gangs of youths who had joined the protest attacked the Government Palace. The PNTL failed to contain the protest and the Palace was badly damaged. After violence spread to other areas of Dili, Prime Minister Mari Alkatiri requested that the F-FDTL help restore order. Troops with no experience in crowd control were deployed to Dili on 29 April and three deaths resulted. On 3 May Major Alfredo Reinado, the commander of the F-FDTL's military police unit, and most of his soldiers, including Lieutenant Gastão Salsinha, abandoned their posts in protest at what they saw as the army's deliberate shooting of civilians. Fighting broke out between the remnants of the East Timorese security forces and the rebels and gangs in late May. On 23 May Reinado's rebel group opened fire on F-FDTL and PNTL personnel in the Fatu Ahi area. On 24 May F-FDTL personnel near the Force's headquarters were attacked by a group of rebel police officers, petitioners and armed civilians. The attack was defeated when one of the F-FDTL naval component's patrol boats fired on the attackers. During the crisis the relationship between the F-FDTL and PNTL deteriorated further, and on 25 May members of the F-FDTL attacked the PNTL's headquarters, killing nine unarmed police officers. As a result of the escalating violence the government was forced to appeal for international peacekeepers on 25 May. Peacekeepers began to arrive in Dili the next day and eventually restored order. A total of 37 people were killed in the fighting in April and May, and 155,000 fled their homes. A United Nations inquiry found that the interior and defence ministers and the commander of the F-FDTL had illegally transferred weapons to civilians during the crisis, and recommended that they be prosecuted. By September the F-FDTL had been much reduced, and comprised Headquarters (95 personnel), Force Communications Unit (21), Military Police Unit (18), First Battalion (317), Naval Component (83), Force Logistics Unit (63) and the Nicolau Lobato Training Centre at Metinaro (118). In addition, 43 former Second Battalion members were on courses. ### Force development plans The 2006 crisis left the F-FDTL "in ruins". The F-FDTL's strength fell from 1,435 in January 2006 to 715 in September, and the proportion of westerners in the military fell from 65 per cent to 28 per cent. The F-FDTL started a rebuilding process with support from several nations and the United Nations, but was still not ready to resume responsibility for East Timor's external security two years after the crisis. In 2004 the commander of the F-FDTL had formed a team, which included international contractors, to develop a long-term strategic vision document for the military. This study was supported by the Australian Government. The resulting Force 2020 document was completed in 2006 and made public in 2007. The document sets out an 'aspirational' vision for the development of the F-FDTL to 2020 and beyond and is of equivalent status to a defence white paper. 
It proposes expanding the military to a strength of 3,000 regular personnel in the medium term through the introduction of conscription. It also sets longer-term goals such as establishing an air component and purchasing modern weapons, such as anti-armour weapons, armoured personnel carriers and missile boats, by 2020. The Force 2020 plan is similar to option 1 in the King's College report. The King's College study team strongly recommended against such a force structure, labelling it "unaffordable" and raising concerns over the impact of conscription upon East Timorese society and military readiness. The team estimated that sustaining such a force structure would cost 2.6 to 3.3 per cent of East Timor's annual gross domestic product and would "represent a heavy burden on the East Timor economy". Moreover, the Force 2020 plan may not be realistic or suitable as it appears to emphasise military expansion to counter external threats over spending on other government services and internal security and outlines ideas such as the long-term (\~2075) development of space forces. While the Force 2020 plan has proven controversial, it appears to have been adopted by the East Timorese government. The plan was criticised by the United Nations and the governments of Australia and the United States as unaffordable and in excess of East Timor's needs. East Timorese President José Ramos-Horta defended the plan, however, arguing that its adoption will transform the F-FDTL into a professional force capable of defending East Timor's sovereignty and contributing to the nation's stability. East Timorese defence officials have also stressed that Force 2020 is a long-term plan and does not propose acquiring advanced weapons for some years. The repercussions of the 2006 crisis lasted for several years. On 11 February 2008, a group of rebels led by Alfredo Reinado attempted to kill or kidnap President Ramos-Horta and Prime Minister Gusmão. Although Ramos-Horta and one of his guards were badly wounded, these attacks were not successful and Reinado and another rebel were killed. A joint F-FDTL and PNTL command was established to pursue the surviving rebels and the military and police demonstrated a high degree of co-operation during this operation. The joint command was disbanded on 19 June 2008. While the joint command contributed to the surrender of many of Reinado's associates, it has been alleged that members of this unit committed human rights violations. More broadly, the shock caused by the attack on Ramos-Horta and Gusmão led to lasting improvements in cooperation between the F-FDTL and PNTL. In June 2008 the Government offered to provide financial compensation to the petitioners who wished to return to civilian life. This offer was accepted, and all the petitioners returned to their homes by August that year. In May 2009, the F-FDTL accepted its first intake of recruits since the 2006 crisis. While the regional diversity of the 579 new recruits was generally much greater than that of the pre-crisis intakes, 60.3 per cent of officer candidates were from the country's eastern districts. From 2009 the F-FDTL established platoon-sized outposts to support the PNTL border police in the Bobonaro and Cova Lima border districts, and it has increasingly been deployed to undertake internal security tasks. From February to August 2010, 200 members of the F-FDTL were deployed to support PNTL operations against "Ninja" gangs. 
These troops undertook community engagement tasks, and were unarmed and not closely integrated with the PNTL's efforts. In 2011 the F-FDTL was still under-strength and had yet to reform its training and discipline standards. Tensions within the F-FDTL also continued to threaten the stability of the force. However, the East Timorese government placed a high priority on re-establishing the F-FDTL and developing it into a force capable of defending the country. In 2012 the Government authorised an expansion of the F-FDTL to 3,600 personnel by 2020, of whom approximately one quarter would be members of the Naval Component. The 2016 edition of the International Institute for Strategic Studies' (IISS) publication The Military Balance stated that the F-FDTL was "only capable of internal and border-security roles". The East Timorese Government published a new Strategic Defence and Security Concept during 2016. This document defined the role of the F-FDTL as defending the country against external threats and countering violent crime within East Timor. The Strategic Defence and Security Concept also called for the F-FDTL's naval capabilities to be improved to adequately protect East Timor's exclusive economic zone. In 2020 the IISS judged that the F-FDTL "has been reconstituted but is still a long way from meeting the ambitious force-structure goals set out in the Force 2020 plan". Similarly, a 2019 Stockholm International Peace Research Institute report noted that there had been little progress in completing the acquisition program set out in the Force 2020 plan, likely due to a shortage of funds and "possibly also because there seems to be no rationale for acquiring some of the equipment". On 29 October 2020, the Council of Ministers approved a plan to introduce compulsory national service for Timorese citizens aged 18 and above. ## Command arrangements The constitution of East Timor states that the president is the supreme commander of the defence force and has the power to appoint the F-FDTL's commander and chief of staff. The Council of Ministers and National Parliament are responsible for funding the F-FDTL and setting policy relating to East Timor's security. A Superior Council for Defence and Security was established in 2005 to advise the president on defence and security policy and legislation, and on the appointment and dismissal of senior military personnel. The council is chaired by the president and includes the prime minister; the defence, justice, interior and foreign affairs ministers; the heads of the F-FDTL and PNTL; a national state security officer; and three representatives from the national parliament. The council's role is not clear, however, and neither it nor the parliament served as a check against the decision to sack large numbers of F-FDTL personnel in 2006. A parliamentary committee also provides oversight of East Timor's security sector. Major General Lere Anan Timor is the current commander of the F-FDTL, and was appointed to this position on 6 October 2011. A small ministry of defence (renamed the Ministry of Defence and Security in 2007) was established in 2002 to provide civilian oversight of the F-FDTL. A lack of suitable staff for the ministry and the close political relationship between senior F-FDTL officers and government figures rendered this oversight largely ineffectual and slowed the development of East Timor's defence policy up to at least 2004. 
The failure to institute effective civilian oversight of the F-FDTL also limited the extent to which foreign countries were willing to provide assistance to the F-FDTL, and contributed to the 2006 crisis. As at early 2010 the Ministry of Defence and Security was organised into elements responsible for defence (including the F-FDTL) and security (including the PNTL), each headed by its own secretary of state. At this time the East Timorese Government was working to expand the ministry's capacity with assistance from UNMIT, but continuing shortages of qualified staff limited the extent to which the ministry could provide civilian oversight of the security sector. Moreover, elements of the F-FDTL were continuing to resist civilian control over the security forces at this time, and the force had not opened itself to international scrutiny. ## Organisation The F-FDTL is organised into a headquarters, a land component, a naval component and an air component. Following its establishment the F-FDTL also had the "largest and most sophisticated" human intelligence network in East Timor, which was based on the clandestine resistance reporting networks built up during the Indonesian occupation. However, in May 2008 the national parliament legislated to place the F-FDTL's intelligence branch under the authority of the head of the National Information Service. In 2011 the F-FDTL had an authorised strength of 1,500 regular personnel and 1,500 reservists. It had not reached these totals, as funding shortfalls prevented the reserve component from being formed and the Army's two regular battalions were under-strength. While all of the F-FDTL's personnel were initially FALINTIL veterans, the force's composition has changed over time, and few soldiers from the insurgency remained as of 2005 due to the force's narrow age requirement. After the F-FDTL's 1st Battalion was established in 2001, recruitment was opened to all East Timorese above the age of 18, including women. Few women have joined the F-FDTL, however. As at February 2010, only seven per cent of new recruits were female. In 2020 women comprised 10.8 per cent of the F-FDTL's personnel, with none holding a rank higher than captain. ### Army When initially established, the F-FDTL land force comprised two light infantry battalions, each with an authorised strength of 600 personnel. As of 2004 each battalion had three rifle companies, a support company and a headquarters company. Although the Army is small, the guerrilla tactics that FALINTIL employed against overwhelming numbers before the Indonesian National Armed Forces departed in 1999 proved effective, and the force has the potential to form a credible deterrent against invasion. The Army's current doctrine is focused on low-intensity infantry combat tactics as well as counter-insurgency tasks. Most of the force's training and operations are conducted at the section level, and company or battalion-sized exercises are rare. As of 2020 the Army's main elements remained two light infantry battalions. These units are located in separate bases. As of 2004 the 1st Battalion was based at Baucau, with a contingent in the coastal village of Laga. In 2006 the 2nd Battalion was stationed at the Nicolau Lobato Training Centre near Metinaro. Almost all of the 2nd Battalion's soldiers were dismissed during the 2006 crisis. The other major Army units are a military police platoon and a logistic support company. As of 2019, the F-FDTL was planning to raise a special forces company. 
The 2020 edition of The Military Balance stated that the Army had 2,200 personnel. Logistics and service support is provided through Headquarters F-FDTL in Dili. The military police platoon polices the F-FDTL and performs traditional policing tasks, resulting in conflicting roles with the PNTL. The military police have also been responsible for presidential security since February 2007. In 2010 the United States Embassy in Dili reported that the F-FDTL also planned to raise two engineer squadrons during that year; these two units were to have a total strength of 125 personnel. The F-FDTL is armed only with small arms and does not have any crew-served weapons. The 2007 edition of Jane's Sentinel stated that the F-FDTL had the following equipment in service: 1,560 M16 rifles and 75 M203 grenade launchers, 75 FN Minimi squad automatic weapons, 8 sniper rifles and 50 .45 M1911A1 pistols. A further 75 Minimis were to be ordered at that time. The majority of the F-FDTL's weapons were donated by other countries. An assessment of East Timor's security forces published by the Centre for International Governance Innovation in 2010 stated that "F-FDTL weapons management and control systems, while superior to that of PNTL, are underdeveloped". The F-FDTL ordered eight lightly armed four wheel drive vehicles from China in 2007. Between 10 and 50 Malaysian Weststar GS trucks were delivered in 2014. ### Naval Component The Naval Component of the F-FDTL was established in December 2001 when Portugal transferred two small Albatroz-class patrol boats from the Portuguese Navy. Its establishment was not supported by the King's College study team, the UN, or East Timor's other donor countries on the grounds that East Timor could not afford to operate a naval force. The role of the naval component is to conduct fishery and border protection patrols and ensure that the maritime line of communication to the Oecussi enclave remains open. This is comparable to the role of the Portuguese Navy, which also undertakes military and coast guard functions. All of the force's warships are based at Hera Harbour, which is located a few kilometres east of Dili. A small base is located at Atabae near the Indonesian border. Under the Force 2020 plan the naval component may eventually be expanded to a light patrol force equipped with corvette-sized ships and landing craft. On 12 April 2008 East Timor signed a contract for two new Chinese-built 43-metre Type 062 patrol boats. These ships were to replace the Albatroz-class vessels and to be used to protect East Timor's fisheries. The contract for the ships also involved 30 to 40 East Timorese personnel being trained in China. The two new patrol boats arrived from China in June 2010, and were commissioned as the Jaco class on the eleventh of the month. This acquisition was controversial in East Timor due to a perceived lack of transparency regarding the purchase and concerns that the patrol boats were not suited to the rough sea conditions and tropical weather in which they would need to operate. The academic Ian Storey has written that "corruption may have played a part in the deal". The East Timorese government justified the purchase by arguing that the patrol boats were needed to safeguard the country's independence. The South Korean Government donated one ex-Republic of Korea Navy Chamsuri-class patrol boat and two smaller patrol boats in 2011, and these entered service with the naval component on 26 September 2011. 
The East Timorese government also ordered two fast patrol boats from the Indonesian company PT Pal in March 2011 for the price of \$US40 million. The 2020 edition of the IISS Military Balance listed the naval component's size as 80 personnel. The 2011 edition of Jane's Sentinel put the strength of the naval component at 250; this source also stated that recruitment for an approximately 60-person strong Marine unit began in 2011 from existing naval component personnel, members of the Army and civilians. The Marines were to serve as a Special Operations force. In 2017 Timor Leste accepted an offer of two new Guardian-class patrol boats and associated training and logistics assistance from the Australian Government. The vessels are scheduled to be delivered in 2023, and will be named Aitana and Laline. Australia is also funding a new wharf at Hera Harbour that will enable operations of the two Guardian-class patrol boats. ### Air Component As of 2020 the F-FDTL's Air Component operated a single Cessna 172P aircraft. In 2019 the East Timorese Government was considering purchasing three Chinese variants of the Mil Mi-17 helicopter, and a small number of F-FDTL personnel were trained to operate the type in the Philippines. The United States and East Timorese governments reached an agreement in June 2021 through which the United States will contribute funding to upgrade Baucau Airport to support F-FDTL and commercial operations and donate a Cessna 206 to the F-FDTL. Rehabilitation work on the airport began in January 2022, and the aircraft is scheduled to be delivered later in the year. The US military has stated that the purpose of this agreement is to support the creation of an Air Component to "help the Timorese government improve its maritime awareness, respond to natural disasters, and promote economic development". ## Ranks The military ranks of the F-FDTL are similar to the military ranks of the Portuguese Armed Forces. ## Defence expenditure and procurement Total defence expenditure for East Timor in 2018 was \$US29.1 million. This represented 2.7 per cent of gross domestic product (GDP). Timor Leste is one of the few South East Asian countries to have not increased its defence spending between 2009 and 2018, with defence expenditure decreasing by 63.4 per cent in real terms over this period. The modest size of the defence budget means that the East Timorese Government is only able to purchase small quantities of military equipment. Most of the F-FDTL's weapons and other equipment have been provided by foreign donors, and this is likely to remain the case in the future. No military production took place in East Timor as of 2011, and in 2020 the IISS noted that "maintenance capacity is unclear and the country has no traditional defence industry". Funding shortfalls have constrained the development of the F-FDTL. The government has been forced to postpone plans to form an independent company stationed in the Oecussi enclave and two reserve infantry battalions. These units formed an important part of the King's College report's option 3 force structure and their absence may have impacted on East Timor's defence policy. As of 2011 the government was yet to announce what, if any, reserve units would be formed, though provisions for such units had been included in legislation. ## Foreign defence relations While the UN was reluctant to engage with the F-FDTL, several bilateral donors have assisted the force's development. 
Australia has provided extensive training and logistical support to the F-FDTL since it was established, and currently provides advisors who are posted to the F-FDTL and the Ministry of Defence and Security. Portugal also provides advisors and trains two naval officers each year in Portugal. China provided US\$1.8 million in aid to the F-FDTL between 2002 and 2008 and agreed to build a new US\$7 million headquarters for the force in late 2007. East Timor is one of Brazil's main destinations for aid, and the Brazilian Army is responsible for training the F-FDTL's military police unit (Maubere Mission). The United States also provides a small amount of assistance to the F-FDTL through the State Department's International Military Education and Training Program. While Malaysia has provided training courses and financial and technical aid, this assistance was suspended after the 2006 crisis. As of 2010, Portugal provided the F-FDTL with basic and advanced training while Australia and other nations provided training in specialised skills. East Timor and Portugal signed a defence cooperation treaty in 2017 that was to remain in force until 2022. Australian and US support for the F-FDTL had been reduced to only occasional training by 2020. East Timor and Indonesia have sought to build friendly relations since 2002. While movements of people and drug smuggling across their international border have caused tensions, both countries have worked with the UN to improve the security situation in the region. The East Timorese and Indonesian governments signed a defence agreement in August 2011 which aims to improve co-operation between their national militaries. The Timor Leste-Indonesia Defense Joint Committee was also established at this time to monitor the agreement's implementation. East Timor ratified the Nuclear Non-Proliferation Treaty, the Biological and Toxin Weapons Convention and the Chemical Weapons Convention in 2003. The East Timorese Government has no plans to acquire nuclear, biological or chemical weapons. The country also became a party to the Ottawa Treaty, which bans anti-personnel mines, in 2003. The East Timorese Government and F-FDTL are interested in deploying elements of the force on international peacekeeping missions. This is motivated by a desire to "give back to the international community". A platoon of 12 engineers was deployed to Lebanon between February and May 2012 as an element of a Portuguese unit serving with the United Nations Interim Force in Lebanon. Small numbers of F-FDTL specialists were posted to the United Nations Mission in South Sudan (UNMISS) between 2011 and 2016, and have been again since early 2020. For instance, three F-FDTL members served as observers with UNMISS in 2016. As of 2020, the F-FDTL was preparing plans to make larger peacekeeping deployments, and Australia and Portugal were providing training for such missions.
874
Ancient Egypt
1,173,769,675
Northeastern African civilization
[ "Ancient Egypt", "Ancient peoples", "Bronze Age civilizations", "Cradle of civilization", "Former empires in Africa", "Former empires in Asia", "History of Egypt by period", "History of the Mediterranean" ]
Ancient Egypt was a civilization in North Africa situated in the Nile Valley. Ancient Egyptian civilization followed prehistoric Egypt and coalesced around 3100 BC (according to conventional Egyptian chronology) with the political unification of Upper and Lower Egypt under Menes (often identified with Narmer). The history of ancient Egypt occurred as a series of stable kingdoms, separated by periods of relative instability known as Intermediate Periods: the Old Kingdom of the Early Bronze Age, the Middle Kingdom of the Middle Bronze Age and the New Kingdom of the Late Bronze Age. Egypt reached the pinnacle of its power in the New Kingdom, ruling much of Nubia and a sizable portion of the Levant, after which it entered a period of slow decline. During the course of its history, Egypt was invaded or conquered by a number of foreign powers, including the Hyksos, the Nubians, the Assyrians, the Achaemenid Persians, and the Macedonians under Alexander the Great. The Greek Ptolemaic Kingdom, formed in the aftermath of Alexander's death, ruled Egypt until 30 BC, when, under Cleopatra, it fell to the Roman Empire and became a Roman province. The success of ancient Egyptian civilization came partly from its ability to adapt to the conditions of the Nile River valley for agriculture. The predictable flooding and controlled irrigation of the fertile valley produced surplus crops, which supported a more dense population, and social development and culture. With resources to spare, the administration sponsored mineral exploitation of the valley and surrounding desert regions, the early development of an independent writing system, the organization of collective construction and agricultural projects, trade with surrounding regions, and a military intended to assert Egyptian dominance. Motivating and organizing these activities was a bureaucracy of elite scribes, religious leaders, and administrators under the control of a pharaoh, who ensured the cooperation and unity of the Egyptian people in the context of an elaborate system of religious beliefs. The many achievements of the ancient Egyptians include the quarrying, surveying, and construction techniques that supported the building of monumental pyramids, temples, and obelisks; a system of mathematics, a practical and effective system of medicine, irrigation systems, and agricultural production techniques, the first known planked boats, Egyptian faience and glass technology, new forms of literature, and the earliest known peace treaty, made with the Hittites. Ancient Egypt has left a lasting legacy. Its art and architecture were widely copied, and its antiquities were carried off to far corners of the world. Its monumental ruins have inspired the imaginations of travelers and writers for millennia. A newfound respect for antiquities and excavations in the early modern period by Europeans and Egyptians has led to the scientific investigation of Egyptian civilization and a greater appreciation of its cultural legacy. ## History The Nile has been the lifeline of its region for much of human history. The fertile floodplain of the Nile gave humans the opportunity to develop a settled agricultural economy and a more sophisticated, centralized society that became a cornerstone in the history of human civilization. Nomadic modern human hunter-gatherers began living in the Nile valley through the end of the Middle Pleistocene some 120,000 years ago. 
By the late Paleolithic period, the arid climate of Northern Africa had become increasingly hot and dry, forcing the populations of the area to concentrate along the river region. ### Predynastic period In Predynastic and Early Dynastic times, the Egyptian climate was much less arid than it is today. Large regions of Egypt were covered in treed savanna and traversed by herds of grazing ungulates. Foliage and fauna were far more prolific in all environs, and the Nile region supported large populations of waterfowl. Hunting would have been common for Egyptians, and this is also the period when many animals were first domesticated. By about 5500 BC, small tribes living in the Nile valley had developed into a series of cultures demonstrating firm control of agriculture and animal husbandry, and identifiable by their pottery and personal items, such as combs, bracelets, and beads. The largest of these early cultures in upper (Southern) Egypt was the Badarian culture, which probably originated in the Western Desert; it was known for its high-quality ceramics, stone tools, and its use of copper. The Badari was followed by the Naqada culture: the Amratian (Naqada I), the Gerzeh (Naqada II), and Semainean (Naqada III). These brought a number of technological improvements. As early as the Naqada I Period, predynastic Egyptians imported obsidian from Ethiopia, used to shape blades and other objects from flakes. In Naqada II times, early evidence exists of contact with the Near East, particularly Canaan and the Byblos coast. Over a period of about 1,000 years, the Naqada culture developed from a few small farming communities into a powerful civilization whose leaders were in complete control of the people and resources of the Nile valley. Establishing a power center at Nekhen (in Greek, Hierakonpolis), and later at Abydos, Naqada III leaders expanded their control of Egypt northwards along the Nile. They also traded with Nubia to the south, the oases of the western desert to the west, and the cultures of the eastern Mediterranean and Near East to the east, initiating a period of Egypt-Mesopotamia relations. The Naqada culture manufactured a diverse selection of material goods, reflective of the increasing power and wealth of the elite, as well as societal personal-use items, which included combs, small statuary, painted pottery, high quality decorative stone vases, cosmetic palettes, and jewelry made of gold, lapis, and ivory. They also developed a ceramic glaze known as faience, which was used well into the Roman Period to decorate cups, amulets, and figurines. During the last predynastic phase, the Naqada culture began using written symbols that eventually were developed into a full system of hieroglyphs for writing the ancient Egyptian language. ### Early Dynastic Period (c. 3150–2686 BC) The Early Dynastic Period was approximately contemporary to the early Sumerian-Akkadian civilization of Mesopotamia and of ancient Elam. The third-century BC Egyptian priest Manetho grouped the long line of kings from Menes to his own time into 30 dynasties, a system still used today. He began his official history with the king named "Meni" (or Menes in Greek), who was believed to have united the two kingdoms of Upper and Lower Egypt. The transition to a unified state happened more gradually than ancient Egyptian writers represented, and there is no contemporary record of Menes. 
Some scholars now believe, however, that the mythical Menes may have been the king Narmer, who is depicted wearing royal regalia on the ceremonial Narmer Palette, in a symbolic act of unification. In the Early Dynastic Period, which began about 3000 BC, the first of the Dynastic kings solidified control over lower Egypt by establishing a capital at Memphis, from which he could control the labor force and agriculture of the fertile delta region, as well as the lucrative and critical trade routes to the Levant. The increasing power and wealth of the kings during the early dynastic period was reflected in their elaborate mastaba tombs and mortuary cult structures at Abydos, which were used to celebrate the deified king after his death. The strong institution of kingship developed by the kings served to legitimize state control over the land, labor, and resources that were essential to the survival and growth of ancient Egyptian civilization. ### Old Kingdom (2686–2181 BC) Major advances in architecture, art, and technology were made during the Old Kingdom, fueled by the increased agricultural productivity and resulting population, made possible by a well-developed central administration. Some of ancient Egypt's crowning achievements, the Giza pyramids and Great Sphinx, were constructed during the Old Kingdom. Under the direction of the vizier, state officials collected taxes, coordinated irrigation projects to improve crop yield, drafted peasants to work on construction projects, and established a justice system to maintain peace and order. With the rising importance of central administration in Egypt, a new class of educated scribes and officials arose who were granted estates by the king in payment for their services. Kings also made land grants to their mortuary cults and local temples, to ensure that these institutions had the resources to worship the king after his death. Scholars believe that five centuries of these practices slowly eroded the economic vitality of Egypt, and that the economy could no longer afford to support a large centralized administration. As the power of the kings diminished, regional governors called nomarchs began to challenge the supremacy of the office of king. This, coupled with severe droughts between 2200 and 2150 BC, is believed to have caused the country to enter the 140-year period of famine and strife known as the First Intermediate Period. ### First Intermediate Period (2181–2055 BC) After Egypt's central government collapsed at the end of the Old Kingdom, the administration could no longer support or stabilize the country's economy. Regional governors could not rely on the king for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. Yet despite difficult problems, local leaders, owing no tribute to the king, used their new-found independence to establish a thriving culture in the provinces. Once in control of their own resources, the provinces became economically richer—which was demonstrated by larger and better burials among all social classes. In bursts of creativity, provincial artisans adopted and adapted cultural motifs formerly restricted to the royalty of the Old Kingdom, and scribes developed literary styles that expressed the optimism and originality of the period. Free from their loyalties to the king, local rulers began competing with each other for territorial control and political power. 
By 2160 BC, rulers in Herakleopolis controlled Lower Egypt in the north, while a rival clan based in Thebes, the Intef family, took control of Upper Egypt in the south. As the Intefs grew in power and expanded their control northward, a clash between the two rival dynasties became inevitable. Around 2055 BC the Theban forces under Nebhepetre Mentuhotep II finally defeated the Herakleopolitan rulers of the north, reuniting the Two Lands. They inaugurated a period of economic and cultural renaissance known as the Middle Kingdom. ### Middle Kingdom (2134–1690 BC) The kings of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. Mentuhotep II and his Eleventh Dynasty successors ruled from Thebes, but the vizier Amenemhat I, upon assuming the kingship at the beginning of the Twelfth Dynasty around 1985 BC, shifted the kingdom's capital to the city of Itjtawy, located in Faiyum. From Itjtawy, the kings of the Twelfth Dynasty undertook a far-sighted land reclamation and irrigation scheme to increase agricultural output in the region. Moreover, the military reconquered territory in Nubia that was rich in quarries and gold mines, while laborers built a defensive structure in the Eastern Delta, called the "Walls of the Ruler", to defend against foreign attack. With the kings having secured the country militarily and politically and with vast agricultural and mineral wealth at their disposal, the nation's population, arts, and religion flourished. In contrast to elitist Old Kingdom attitudes towards the gods, the Middle Kingdom displayed an increase in expressions of personal piety. Middle Kingdom literature featured sophisticated themes and characters written in a confident, eloquent style. The relief and portrait sculpture of the period captured subtle, individual details that reached new heights of technical sophistication. The last great ruler of the Middle Kingdom, Amenemhat III, allowed Semitic-speaking Canaanite settlers from the Near East into the Delta region to provide a sufficient labor force for his especially active mining and building campaigns. These ambitious building and mining activities, however, combined with severe Nile floods later in his reign, strained the economy and precipitated the slow decline into the Second Intermediate Period during the later Thirteenth and Fourteenth dynasties. During this decline, the Canaanite settlers began to assume greater control of the Delta region, eventually coming to power in Egypt as the Hyksos. ### Second Intermediate Period (1674–1549 BC) and the Hyksos Around 1785 BC, as the power of the Middle Kingdom kings weakened, a Western Asian people called the Hyksos, who had already settled in the Delta, seized control of Egypt and established their capital at Avaris, forcing the former central government to retreat to Thebes. The king was treated as a vassal and expected to pay tribute. The Hyksos ("foreign rulers") retained Egyptian models of government and identified as kings, thereby integrating Egyptian elements into their culture. They and other invaders introduced new tools of warfare into Egypt, most notably the composite bow and the horse-drawn chariot. After retreating south, the native Theban kings found themselves trapped between the Canaanite Hyksos ruling the north and the Hyksos' Nubian allies, the Kushites, to the south. 
After years of vassalage, Thebes gathered enough strength to challenge the Hyksos in a conflict that lasted more than 30 years, until 1555 BC. The kings Seqenenre Tao II and Kamose were ultimately able to defeat the Nubians to the south of Egypt, but failed to defeat the Hyksos. That task fell to Kamose's successor, Ahmose I, who successfully waged a series of campaigns that permanently eradicated the Hyksos' presence in Egypt. He established a new dynasty and, in the New Kingdom that followed, the military became a central priority for the kings, who sought to expand Egypt's borders and attempted to gain mastery of the Near East. ### New Kingdom (1549–1069 BC) The New Kingdom pharaohs established a period of unprecedented prosperity by securing their borders and strengthening diplomatic ties with their neighbours, including the Mitanni Empire, Assyria, and Canaan. Military campaigns waged under Tuthmosis I and his grandson Tuthmosis III extended the influence of the pharaohs to the largest empire Egypt had ever seen. Beginning with Merneptah the rulers of Egypt adopted the title of pharaoh. Between their reigns, Hatshepsut, a queen who established herself as pharaoh, launched many building projects, including the restoration of temples damaged by the Hyksos, and sent trading expeditions to Punt and the Sinai. When Tuthmosis III died in 1425 BC, Egypt had an empire extending from Niya in north west Syria to the Fourth Cataract of the Nile in Nubia, cementing loyalties and opening access to critical imports such as bronze and wood. The New Kingdom pharaohs began a large-scale building campaign to promote the god Amun, whose growing cult was based in Karnak. They also constructed monuments to glorify their own achievements, both real and imagined. The Karnak temple is the largest Egyptian temple ever built. Around 1350 BC, the stability of the New Kingdom was threatened when Amenhotep IV ascended the throne and instituted a series of radical and chaotic reforms. Changing his name to Akhenaten, he touted the previously obscure sun deity Aten as the supreme deity, suppressed the worship of most other deities, and moved the capital to the new city of Akhetaten (modern-day Amarna). He was devoted to his new religion and artistic style. After his death, the cult of the Aten was quickly abandoned and the traditional religious order restored. The subsequent pharaohs, Tutankhamun, Ay, and Horemheb, worked to erase all mention of Akhenaten's heresy, now known as the Amarna Period. Around 1279 BC, Ramesses II, also known as Ramesses the Great, ascended the throne, and went on to build more temples, erect more statues and obelisks, and sire more children than any other pharaoh in history. A bold military leader, Ramesses II led his army against the Hittites in the Battle of Kadesh (in modern Syria) and, after fighting to a stalemate, finally agreed to the first recorded peace treaty, around 1258 BC. Egypt's wealth, however, made it a tempting target for invasion, particularly by the Libyan Berbers to the west, and the Sea Peoples, a conjectured confederation of seafarers from the Aegean Sea. Initially, the military was able to repel these invasions, but Egypt eventually lost control of its remaining territories in southern Canaan, much of it falling to the Assyrians. The effects of external threats were exacerbated by internal problems such as corruption, tomb robbery, and civil unrest. 
After regaining their power, the high priests at the temple of Amun in Thebes accumulated vast tracts of land and wealth, and their expanded power splintered the country during the Third Intermediate Period. ### Third Intermediate Period (1069–653 BC) Following the death of Ramesses XI in 1078 BC, Smendes assumed authority over the northern part of Egypt, ruling from the city of Tanis. The south was effectively controlled by the High Priests of Amun at Thebes, who recognized Smendes in name only. During this time, Libyans had been settling in the western delta, and chieftains of these settlers began increasing their autonomy. Libyan princes took control of the delta under Shoshenq I in 945 BC, founding the so-called Libyan or Bubastite dynasty that would rule for some 200 years. Shoshenq also gained control of southern Egypt by placing his family members in important priestly positions. Libyan control began to erode as a rival dynasty in the delta arose in Leontopolis, and Kushites threatened from the south. Around 727 BC the Kushite king Piye invaded northward, seizing control of Thebes and eventually the Delta, which established the 25th Dynasty. During the 25th Dynasty, Pharaoh Taharqa created an empire nearly as large as the New Kingdom's. Twenty-fifth Dynasty pharaohs built, or restored, temples and monuments throughout the Nile valley, including at Memphis, Karnak, Kawa, and Jebel Barkal. During this period, the Nile valley saw the first widespread construction of pyramids (many in modern Sudan) since the Middle Kingdom. Egypt's far-reaching prestige declined considerably toward the end of the Third Intermediate Period. Its foreign allies had fallen under the Assyrian sphere of influence, and by 700 BC war between the two states became inevitable. Between 671 and 667 BC the Assyrians began the Assyrian conquest of Egypt. The reigns of both Taharqa and his successor, Tanutamun, were filled with constant conflict with the Assyrians, against whom Egypt enjoyed several victories. Ultimately, the Assyrians pushed the Kushites back into Nubia, occupied Memphis, and sacked the temples of Thebes. ### Late Period (653–332 BC) The Assyrians left control of Egypt to a series of vassals who became known as the Saite kings of the Twenty-Sixth Dynasty. By 653 BC, the Saite king Psamtik I was able to oust the Assyrians with the help of Greek mercenaries, who were recruited to form Egypt's first navy. Greek influence expanded greatly as the city-state of Naucratis became the home of Greeks in the Nile Delta. The Saite kings based in the new capital of Sais witnessed a brief but spirited resurgence in the economy and culture, but in 525 BC, the powerful Persians, led by Cambyses II, began their conquest of Egypt, eventually capturing the pharaoh Psamtik III at the Battle of Pelusium. Cambyses II then assumed the formal title of pharaoh, but ruled Egypt from Iran, leaving Egypt under the control of a satrap. A few successful revolts against the Persians marked the 5th century BC, but Egypt was never able to permanently overthrow the Persians. Following its annexation by Persia, Egypt was joined with Cyprus and Phoenicia in the sixth satrapy of the Achaemenid Persian Empire. This first period of Persian rule over Egypt, also known as the Twenty-Seventh Dynasty, ended in 402 BC, when Egypt regained independence under a series of native dynasties. The last of these dynasties, the Thirtieth, proved to be the last native royal house of ancient Egypt, ending with the kingship of Nectanebo II. 
A brief restoration of Persian rule, sometimes known as the Thirty-First Dynasty, began in 343 BC, but shortly after, in 332 BC, the Persian ruler Mazaces handed Egypt over to Alexander the Great without a fight. ### Ptolemaic period (332–30 BC) In 332 BC, Alexander the Great conquered Egypt with little resistance from the Persians and was welcomed by the Egyptians as a deliverer. The administration established by Alexander's successors, the Macedonian Ptolemaic Kingdom, followed an Egyptian model and was based in the new capital city of Alexandria. The city showcased the power and prestige of Hellenistic rule and became a centre of learning and culture that included the famous Library of Alexandria as part of the Mouseion. The Lighthouse of Alexandria lit the way for the many ships that kept trade flowing through the city, as the Ptolemies made commerce and revenue-generating enterprises, such as papyrus manufacturing, their top priority. Hellenistic culture did not supplant native Egyptian culture, as the Ptolemies supported time-honored traditions in an effort to secure the loyalty of the populace. They built new temples in Egyptian style, supported traditional cults, and portrayed themselves as pharaohs. Some traditions merged, as Greek and Egyptian gods were syncretized into composite deities, such as Serapis, and classical Greek forms of sculpture influenced traditional Egyptian motifs. Despite their efforts to appease the Egyptians, the Ptolemies were challenged by native rebellion, bitter family rivalries, and the powerful mob of Alexandria that formed after the death of Ptolemy IV. In addition, as Rome relied more heavily on imports of grain from Egypt, the Romans took great interest in the political situation in the country. Continued Egyptian revolts, ambitious politicians, and powerful opponents from the Near East made this situation unstable, leading Rome to send forces to secure the country as a province of its empire. ### Roman period (30 BC – AD 641) Egypt became a province of the Roman Empire in 30 BC, following the defeat of Mark Antony and Ptolemaic Queen Cleopatra VII by Octavian (later Emperor Augustus) in the Battle of Actium. The Romans relied heavily on grain shipments from Egypt, and the Roman army, under the control of a prefect appointed by the emperor, quelled rebellions, strictly enforced the collection of heavy taxes, and prevented attacks by bandits, which had become a notorious problem during the period. Alexandria became an increasingly important center on the trade route with the Orient, as exotic luxuries were in high demand in Rome. Although the Romans had a more hostile attitude than the Greeks towards the Egyptians, some traditions such as mummification and worship of the traditional gods continued. The art of mummy portraiture flourished, and some Roman emperors had themselves depicted as pharaohs, though not to the extent that the Ptolemies had. These emperors lived outside Egypt and did not perform the ceremonial functions of Egyptian kingship. Local administration became Roman in style and closed to native Egyptians. From the mid-first century AD, Christianity took root in Egypt, and it was originally seen as another cult that could be accepted. However, it was an uncompromising religion that sought to win converts from the pagan Egyptian and Greco-Roman religions and threatened popular religious traditions. 
This led to the persecution of converts to Christianity, culminating in the great purges of Diocletian starting in 303, but eventually Christianity won out. In 391, the Christian emperor Theodosius introduced legislation that banned pagan rites and closed temples. Alexandria became the scene of great anti-pagan riots, with public and private religious imagery destroyed. As a consequence, Egypt's native religious culture was continually in decline. While the native population continued to speak its language, the ability to read hieroglyphic writing slowly disappeared as the role of the Egyptian temple priests and priestesses diminished. The temples themselves were sometimes converted to churches or abandoned to the desert. In the fourth century, as the Roman Empire divided, Egypt found itself in the Eastern Empire with its capital at Constantinople. In the waning years of the Empire, Egypt fell to the Sasanian Persian army in the Sasanian conquest of Egypt (618–628). It was then recaptured by the Byzantine emperor Heraclius (629–639), and was finally captured by the Muslim Rashidun army in 639–641, ending Byzantine rule. ## Government and economy ### Administration and commerce The pharaoh was the absolute monarch of the country and, at least in theory, wielded complete control of the land and its resources. The king was the supreme military commander and head of the government, who relied on a bureaucracy of officials to manage his affairs. In charge of the administration was his second in command, the vizier, who acted as the king's representative and coordinated land surveys, the treasury, building projects, the legal system, and the archives. At a regional level, the country was divided into as many as 42 administrative regions called nomes, each governed by a nomarch, who was accountable to the vizier for his jurisdiction. The temples formed the backbone of the economy. Not only were they places of worship, but they were also responsible for collecting and storing the kingdom's wealth in a system of granaries and treasuries administered by overseers, who redistributed grain and goods. Much of the economy was centrally organized and strictly controlled. Although the ancient Egyptians did not use coinage until the Late Period, they did use a type of money-barter system, with standard sacks of grain and the deben, a weight of roughly 91 grams (3 oz) of copper or silver, forming a common denominator. Workers were paid in grain; a simple laborer might earn 5½ sacks (200 kg or 400 lb) of grain per month, while a foreman might earn 7½ sacks (250 kg or 550 lb). Prices were fixed across the country and recorded in lists to facilitate trading; for example, a shirt cost five copper deben, while a cow cost 140 deben. Grain could be traded for other goods, according to the fixed price list. During the fifth century BC coined money was introduced into Egypt from abroad. At first the coins were used as standardized pieces of precious metal rather than true money, but in the following centuries international traders came to rely on coinage. ### Social status Egyptian society was highly stratified, and social status was expressly displayed. Farmers made up the bulk of the population, but agricultural produce was owned directly by the state, temple, or noble family that owned the land. Farmers were also subject to a labor tax and were required to work on irrigation or construction projects in a corvée system. 
Artists and craftsmen were of higher status than farmers, but they were also under state control, working in the shops attached to the temples and paid directly from the state treasury. Scribes and officials formed the upper class in ancient Egypt, known as the "white kilt class" in reference to the bleached linen garments that served as a mark of their rank. The upper class prominently displayed their social status in art and literature. Below the nobility were the priests, physicians, and engineers with specialized training in their field. It is unclear whether slavery as understood today existed in ancient Egypt; opinions among authors differ. The ancient Egyptians viewed men and women, including people from all social classes, as essentially equal under the law, and even the lowliest peasant was entitled to petition the vizier and his court for redress. Although slaves were mostly used as indentured servants, they were able to buy and sell their servitude, work their way to freedom or nobility, and were usually treated by doctors in the workplace. Both men and women had the right to own and sell property, make contracts, marry and divorce, receive inheritance, and pursue legal disputes in court. Married couples could own property jointly and protect themselves from divorce by agreeing to marriage contracts, which stipulated the financial obligations of the husband to his wife and children should the marriage end. Compared with their counterparts in ancient Greece, Rome, and even more modern places around the world, ancient Egyptian women had a greater range of personal choices, legal rights, and opportunities for achievement. Women such as Hatshepsut and Cleopatra VII even became pharaohs, while others wielded power as Divine Wives of Amun. Despite these freedoms, ancient Egyptian women rarely held official roles in the administration apart from that of royal high priestess, appear to have served only secondary roles in the temples (though evidence is sparse for many dynasties), and were less likely than men to be educated. ### Legal system The head of the legal system was officially the pharaoh, who was responsible for enacting laws, delivering justice, and maintaining law and order, a concept the ancient Egyptians referred to as Ma'at. Although no legal codes from ancient Egypt survive, court documents show that Egyptian law was based on a common-sense view of right and wrong that emphasized reaching agreements and resolving conflicts rather than strictly adhering to a complicated set of statutes. Local councils of elders, known as Kenbet in the New Kingdom, were responsible for ruling in court cases involving small claims and minor disputes. More serious cases involving murder, major land transactions, and tomb robbery were referred to the Great Kenbet, over which the vizier or pharaoh presided. Plaintiffs and defendants were expected to represent themselves and were required to swear an oath that they had told the truth. In some cases, the state took on the roles of both prosecutor and judge, and it could torture the accused with beatings to obtain a confession and the names of any co-conspirators. Whether the charges were trivial or serious, court scribes documented the complaint, testimony, and verdict of the case for future reference. Punishment for minor crimes involved the imposition of fines, beatings, facial mutilation, or exile, depending on the severity of the offense. 
Serious crimes such as murder and tomb robbery were punished by execution, carried out by decapitation, drowning, or impaling the criminal on a stake. Punishment could also be extended to the criminal's family. Beginning in the New Kingdom, oracles played a major role in the legal system, dispensing justice in both civil and criminal cases. The procedure was to ask the god a "yes" or "no" question concerning the right or wrong of an issue. The god, carried by a number of priests, rendered judgement by choosing one or the other, moving forward or backward, or pointing to one of the answers written on a piece of papyrus or an ostracon. ### Agriculture A combination of favorable geographical features contributed to the success of ancient Egyptian culture, the most important of which was the rich fertile soil resulting from annual inundations of the Nile River. The ancient Egyptians were thus able to produce an abundance of food, allowing the population to devote more time and resources to cultural, technological, and artistic pursuits. Land management was crucial in ancient Egypt because taxes were assessed based on the amount of land a person owned. Farming in Egypt was dependent on the cycle of the Nile River. The Egyptians recognized three seasons: Akhet (flooding), Peret (planting), and Shemu (harvesting). The flooding season lasted from June to September, depositing on the river's banks a layer of mineral-rich silt ideal for growing crops. After the floodwaters had receded, the growing season lasted from October to February. Farmers plowed and planted seeds in the fields, which were irrigated with ditches and canals. Egypt received little rainfall, so farmers relied on the Nile to water their crops. From March to May, farmers used sickles to harvest their crops, which were then threshed with a flail to separate the straw from the grain. Winnowing removed the chaff from the grain, and the grain was then ground into flour, brewed to make beer, or stored for later use. The ancient Egyptians cultivated emmer and barley, and several other cereal grains, all of which were used to make the two main food staples of bread and beer. Flax plants, uprooted before they started flowering, were grown for the fibers of their stems. These fibers were split along their length and spun into thread, which was used to weave sheets of linen and to make clothing. Papyrus growing on the banks of the Nile River was used to make paper. Vegetables and fruits were grown in garden plots, close to habitations and on higher ground, and had to be watered by hand. Vegetables included leeks, garlic, melons, squashes, pulses, lettuce, and other crops, in addition to grapes that were made into wine. #### Animals The Egyptians believed that a balanced relationship between people and animals was an essential element of the cosmic order; thus humans, animals and plants were believed to be members of a single whole. Animals, both domesticated and wild, were therefore a critical source of spirituality, companionship, and sustenance to the ancient Egyptians. Cattle were the most important livestock; the administration collected taxes on livestock in regular censuses, and the size of a herd reflected the prestige and importance of the estate or temple that owned them. In addition to cattle, the ancient Egyptians kept sheep, goats, and pigs. Poultry, such as ducks, geese, and pigeons, were captured in nets and bred on farms, where they were force-fed with dough to fatten them. The Nile provided a plentiful source of fish. 
Bees were also domesticated from at least the Old Kingdom, and provided both honey and wax. The ancient Egyptians used donkeys and oxen as beasts of burden, and they were responsible for plowing the fields and trampling seed into the soil. The slaughter of a fattened ox was also a central part of an offering ritual. Horses were introduced by the Hyksos in the Second Intermediate Period. Camels, although known from the New Kingdom, were not used as beasts of burden until the Late Period. There is also evidence to suggest that elephants were briefly used in the Late Period but largely abandoned due to lack of grazing land. Cats, dogs, and monkeys were common family pets, while more exotic pets imported from the heart of Africa, such as Sub-Saharan African lions, were reserved for royalty. Herodotus observed that the Egyptians were the only people to keep their animals with them in their houses. During the Late Period, the worship of the gods in their animal form was extremely popular, such as the cat goddess Bastet and the ibis god Thoth, and these animals were kept in large numbers for the purpose of ritual sacrifice. ### Natural resources Egypt is rich in building and decorative stone, copper and lead ores, gold, and semiprecious stones. These natural resources allowed the ancient Egyptians to build monuments, sculpt statues, make tools, and fashion jewelry. Embalmers used salts from the Wadi Natrun for mummification, which also provided the gypsum needed to make plaster. Ore-bearing rock formations were found in distant, inhospitable wadis in the Eastern Desert and the Sinai, requiring large, state-controlled expeditions to obtain natural resources found there. There were extensive gold mines in Nubia, and one of the first maps known is of a gold mine in this region. The Wadi Hammamat was a notable source of granite, greywacke, and gold. Flint was the first mineral collected and used to make tools, and flint handaxes are the earliest pieces of evidence of habitation in the Nile valley. Nodules of the mineral were carefully flaked to make blades and arrowheads of moderate hardness and durability even after copper was adopted for this purpose. Ancient Egyptians were among the first to use minerals such as sulfur as cosmetic substances. The Egyptians worked deposits of the lead ore galena at Gebel Rosas to make net sinkers, plumb bobs, and small figurines. Copper was the most important metal for toolmaking in ancient Egypt and was smelted in furnaces from malachite ore mined in the Sinai. Workers collected gold by washing the nuggets out of sediment in alluvial deposits, or by the more labor-intensive process of grinding and washing gold-bearing quartzite. Iron deposits found in upper Egypt were used in the Late Period. High-quality building stones were abundant in Egypt; the ancient Egyptians quarried limestone all along the Nile valley, granite from Aswan, and basalt and sandstone from the wadis of the Eastern Desert. Deposits of decorative stones such as porphyry, greywacke, alabaster, and carnelian dotted the Eastern Desert and were collected even before the First Dynasty. In the Ptolemaic and Roman Periods, miners worked deposits of emeralds in Wadi Sikait and amethyst in Wadi el-Hudi. ### Trade The ancient Egyptians engaged in trade with their foreign neighbors to obtain rare, exotic goods not found in Egypt. In the Predynastic Period, they established trade with Nubia to obtain gold and incense. 
They also established trade with Palestine, as evidenced by Palestinian-style oil jugs found in the burials of the First Dynasty pharaohs. An Egyptian colony stationed in southern Canaan dates to slightly before the First Dynasty. Narmer had Egyptian pottery produced in Canaan and exported back to Egypt. By the Second Dynasty at latest, ancient Egyptian trade with Byblos yielded a critical source of quality timber not found in Egypt. By the Fifth Dynasty, trade with Punt provided gold, aromatic resins, ebony, ivory, and wild animals such as monkeys and baboons. Egypt relied on trade with Anatolia for essential quantities of tin as well as supplementary supplies of copper, both metals being necessary for the manufacture of bronze. The ancient Egyptians prized the blue stone lapis lazuli, which had to be imported from far-away Afghanistan. Egypt's Mediterranean trade partners also included Greece and Crete, which provided, among other goods, supplies of olive oil. ## Language ### Historical development The Egyptian language is a northern Afro-Asiatic language closely related to the Berber and Semitic languages. It has the longest known history of any language having been written from c. 3200 BC to the Middle Ages and remaining as a spoken language for longer. The phases of ancient Egyptian are Old Egyptian, Middle Egyptian (Classical Egyptian), Late Egyptian, Demotic and Coptic. Egyptian writings do not show dialect differences before Coptic, but it was probably spoken in regional dialects around Memphis and later Thebes. Ancient Egyptian was a synthetic language, but it became more analytic later on. Late Egyptian developed prefixal definite and indefinite articles, which replaced the older inflectional suffixes. There was a change from the older verb–subject–object word order to subject–verb–object. The Egyptian hieroglyphic, hieratic, and demotic scripts were eventually replaced by the more phonetic Coptic alphabet. Coptic is still used in the liturgy of the Egyptian Orthodox Church, and traces of it are found in modern Egyptian Arabic. ### Sounds and grammar Ancient Egyptian has 25 consonants similar to those of other Afro-Asiatic languages. These include pharyngeal and emphatic consonants, voiced and voiceless stops, voiceless fricatives and voiced and voiceless affricates. It has three long and three short vowels, which expanded in Late Egyptian to about nine. The basic word in Egyptian, similar to Semitic and Berber, is a triliteral or biliteral root of consonants and semiconsonants. Suffixes are added to form words. The verb conjugation corresponds to the person. For example, the triconsonantal skeleton S-Ḏ-M is the semantic core of the word 'hear'; its basic conjugation is sḏm, 'he hears'. If the subject is a noun, suffixes are not added to the verb: sḏm ḥmt, 'the woman hears'. Adjectives are derived from nouns through a process that Egyptologists call nisbation because of its similarity with Arabic. The word order is predicate–subject in verbal and adjectival sentences, and subject–predicate in nominal and adverbial sentences. The subject can be moved to the beginning of sentences if it is long and is followed by a resumptive pronoun. Verbs and nouns are negated by the particle n, but nn is used for adverbial and adjectival sentences. Stress falls on the ultimate or penultimate syllable, which can be open (CV) or closed (CVC). ### Writing Hieroglyphic writing dates from c. 3000 BC, and is composed of hundreds of symbols. 
A hieroglyph can represent a word, a sound, or a silent determinative; and the same symbol can serve different purposes in different contexts. Hieroglyphs were a formal script, used on stone monuments and in tombs, that could be as detailed as individual works of art. In day-to-day writing, scribes used a cursive form of writing, called hieratic, which was quicker and easier. While formal hieroglyphs may be read in rows or columns in either direction (though typically written from right to left), hieratic was always written from right to left, usually in horizontal rows. A new form of writing, Demotic, became the prevalent writing style, and it is this form of writing—along with formal hieroglyphs—that accompany the Greek text on the Rosetta Stone. Around the first century AD, the Coptic alphabet started to be used alongside the Demotic script. Coptic is a modified Greek alphabet with the addition of some Demotic signs. Although formal hieroglyphs were used in a ceremonial role until the fourth century, towards the end only a small handful of priests could still read them. As the traditional religious establishments were disbanded, knowledge of hieroglyphic writing was mostly lost. Attempts to decipher them date to the Byzantine and Islamic periods in Egypt, but only in the 1820s, after the discovery of the Rosetta Stone and years of research by Thomas Young and Jean-François Champollion, were hieroglyphs substantially deciphered. ### Literature Writing first appeared in association with kingship on labels and tags for items found in royal tombs. It was primarily an occupation of the scribes, who worked out of the Per Ankh institution or the House of Life. The latter comprised offices, libraries (called House of Books), laboratories and observatories. Some of the best-known pieces of ancient Egyptian literature, such as the Pyramid and Coffin Texts, were written in Classical Egyptian, which continued to be the language of writing until about 1300 BC. Late Egyptian was spoken from the New Kingdom onward and is represented in Ramesside administrative documents, love poetry and tales, as well as in Demotic and Coptic texts. During this period, the tradition of writing had evolved into the tomb autobiography, such as those of Harkhuf and Weni. The genre known as Sebayt ("instructions") was developed to communicate teachings and guidance from famous nobles; the Ipuwer papyrus, a poem of lamentations describing natural disasters and social upheaval, is a famous example. The Story of Sinuhe, written in Middle Egyptian, might be the classic of Egyptian literature. Also written at this time was the Westcar Papyrus, a set of stories told to Khufu by his sons relating the marvels performed by priests. The Instruction of Amenemope is considered a masterpiece of Near Eastern literature. Towards the end of the New Kingdom, the vernacular language was more often employed to write popular pieces like the Story of Wenamun and the Instruction of Any. The former tells the story of a noble who is robbed on his way to buy cedar from Lebanon and of his struggle to return to Egypt. From about 700 BC, narrative stories and instructions, such as the popular Instructions of Onchsheshonqy, as well as personal and business documents were written in the demotic script and phase of Egyptian. Many stories written in demotic during the Greco-Roman period were set in previous historical eras, when Egypt was an independent nation ruled by great pharaohs such as Ramesses II. 
## Culture

### Daily life

Most ancient Egyptians were farmers tied to the land. Their dwellings were restricted to immediate family members, and were constructed of mudbrick designed to remain cool in the heat of the day. Each home had a kitchen with an open roof, which contained a grindstone for milling grain and a small oven for baking the bread. Ceramics served as household wares for the storage, preparation, transport, and consumption of food, drink, and raw materials. Walls were painted white and could be covered with dyed linen wall hangings. Floors were covered with reed mats, while wooden stools, beds raised from the floor, and individual tables comprised the furniture. The ancient Egyptians placed a great value on hygiene and appearance. Most bathed in the Nile and used a pasty soap made from animal fat and chalk. Men shaved their entire bodies for cleanliness; perfumes and aromatic ointments covered bad odors and soothed skin. Clothing was made from simple linen sheets that were bleached white, and both men and women of the upper classes wore wigs, jewelry, and cosmetics. Children went without clothing until maturity, at about age 12, and at this age males were circumcised and had their heads shaved. Mothers were responsible for taking care of the children, while the father provided the family's income. Music and dance were popular entertainments for those who could afford them. Early instruments included flutes and harps, while instruments similar to trumpets, oboes, and pipes developed later and became popular. In the New Kingdom, the Egyptians played on bells, cymbals, tambourines, and drums, as well as lutes and lyres imported from Asia. The sistrum was a rattle-like musical instrument that was especially important in religious ceremonies. The ancient Egyptians enjoyed a variety of leisure activities, including games and music. Senet, a board game where pieces moved according to random chance, was particularly popular from the earliest times; another similar game was mehen, which had a circular gaming board. "Hounds and Jackals", also known as 58 holes, is another example of a board game played in ancient Egypt. The first complete set of this game was discovered in a Theban tomb of the Egyptian pharaoh Amenemhat IV that dates to the 13th Dynasty. Juggling and ball games were popular with children, and wrestling is also documented in a tomb at Beni Hasan. The wealthy members of ancient Egyptian society enjoyed hunting, fishing, and boating as well. The excavation of the workers' village of Deir el-Medina has resulted in one of the most thoroughly documented accounts of community life in the ancient world, spanning almost four hundred years. There is no comparable site in which the organization, social interactions, and working and living conditions of a community have been studied in such detail.

### Cuisine

Egyptian cuisine remained remarkably stable over time; indeed, the cuisine of modern Egypt retains some striking similarities to the cuisine of the ancients. The staple diet consisted of bread and beer, supplemented with vegetables such as onions and garlic, and fruit such as dates and figs. Wine and meat were enjoyed by all on feast days, while the upper classes indulged on a more regular basis. Fish, meat, and fowl could be salted or dried, and could be cooked in stews or roasted on a grill.

### Architecture

The architecture of ancient Egypt includes some of the most famous structures in the world: the Great Pyramids of Giza and the temples at Thebes.
Building projects were organized and funded by the state for religious and commemorative purposes, but also to reinforce the wide-ranging power of the pharaoh. The ancient Egyptians were skilled builders; using only simple but effective tools and sighting instruments, architects could build large stone structures with great accuracy and precision that is still envied today. The domestic dwellings of elite and ordinary Egyptians alike were constructed from perishable materials such as mudbricks and wood, and have not survived. Peasants lived in simple homes, while the palaces of the elite and the pharaoh were more elaborate structures. A few surviving New Kingdom palaces, such as those in Malkata and Amarna, show richly decorated walls and floors with scenes of people, birds, water pools, deities and geometric designs. Important structures such as temples and tombs that were intended to last forever were constructed of stone instead of mudbricks. The architectural elements used in the world's first large-scale stone building, Djoser's mortuary complex, include post and lintel supports in the papyrus and lotus motif. The earliest preserved ancient Egyptian temples, such as those at Giza, consist of single, enclosed halls with roof slabs supported by columns. In the New Kingdom, architects added the pylon, the open courtyard, and the enclosed hypostyle hall to the front of the temple's sanctuary, a style that was standard until the Greco-Roman period. The earliest and most popular tomb architecture in the Old Kingdom was the mastaba, a flat-roofed rectangular structure of mudbrick or stone built over an underground burial chamber. The step pyramid of Djoser is a series of stone mastabas stacked on top of each other. Pyramids were built during the Old and Middle Kingdoms, but most later rulers abandoned them in favor of less conspicuous rock-cut tombs. The use of the pyramid form continued in private tomb chapels of the New Kingdom and in the royal pyramids of Nubia. ### Art The ancient Egyptians produced art to serve functional purposes. For over 3500 years, artists adhered to artistic forms and iconography that were developed during the Old Kingdom, following a strict set of principles that resisted foreign influence and internal change. These artistic standards—simple lines, shapes, and flat areas of color combined with the characteristic flat projection of figures with no indication of spatial depth—created a sense of order and balance within a composition. Images and text were intimately interwoven on tomb and temple walls, coffins, stelae, and even statues. The Narmer Palette, for example, displays figures that can also be read as hieroglyphs. Because of the rigid rules that governed its highly stylized and symbolic appearance, ancient Egyptian art served its political and religious purposes with precision and clarity. Ancient Egyptian artisans used stone as a medium for carving statues and fine reliefs, but used wood as a cheap and easily carved substitute. Paints were obtained from minerals such as iron ores (red and yellow ochres), copper ores (blue and green), soot or charcoal (black), and limestone (white). Paints could be mixed with gum arabic as a binder and pressed into cakes, which could be moistened with water when needed. Pharaohs used reliefs to record victories in battle, royal decrees, and religious scenes. Common citizens had access to pieces of funerary art, such as shabti statues and books of the dead, which they believed would protect them in the afterlife. 
During the Middle Kingdom, wooden or clay models depicting scenes from everyday life became popular additions to the tomb. In an attempt to duplicate the activities of the living in the afterlife, these models show laborers, houses, boats, and even military formations that are scale representations of the ideal ancient Egyptian afterlife. Despite the homogeneity of ancient Egyptian art, the styles of particular times and places sometimes reflected changing cultural or political attitudes. After the invasion of the Hyksos in the Second Intermediate Period, Minoan-style frescoes were found in Avaris. The most striking example of a politically driven change in artistic forms comes from the Amarna Period, where figures were radically altered to conform to Akhenaten's revolutionary religious ideas. This style, known as Amarna art, was quickly abandoned after Akhenaten's death and replaced by the traditional forms. ### Religious beliefs Beliefs in the divine and in the afterlife were ingrained in ancient Egyptian civilization from its inception; pharaonic rule was based on the divine right of kings. The Egyptian pantheon was populated by gods who had supernatural powers and were called on for help or protection. However, the gods were not always viewed as benevolent, and Egyptians believed they had to be appeased with offerings and prayers. The structure of this pantheon changed continually as new deities were promoted in the hierarchy, but priests made no effort to organize the diverse and sometimes conflicting myths and stories into a coherent system. These various conceptions of divinity were not considered contradictory but rather layers in the multiple facets of reality. Gods were worshiped in cult temples administered by priests acting on the king's behalf. At the center of the temple was the cult statue in a shrine. Temples were not places of public worship or congregation, and only on select feast days and celebrations was a shrine carrying the statue of the god brought out for public worship. Normally, the god's domain was sealed off from the outside world and was only accessible to temple officials. Common citizens could worship private statues in their homes, and amulets offered protection against the forces of chaos. After the New Kingdom, the pharaoh's role as a spiritual intermediary was de-emphasized as religious customs shifted to direct worship of the gods. As a result, priests developed a system of oracles to communicate the will of the gods directly to the people. The Egyptians believed that every human being was composed of physical and spiritual parts or aspects. In addition to the body, each person had a šwt (shadow), a ba (personality or soul), a ka (life-force), and a name. The heart, rather than the brain, was considered the seat of thoughts and emotions. After death, the spiritual aspects were released from the body and could move at will, but they required the physical remains (or a substitute, such as a statue) as a permanent home. The ultimate goal of the deceased was to rejoin his ka and ba and become one of the "blessed dead", living on as an akh, or "effective one". For this to happen, the deceased had to be judged worthy in a trial, in which the heart was weighed against a "feather of truth." If deemed worthy, the deceased could continue their existence on earth in spiritual form. If they were not deemed worthy, their heart was eaten by Ammit the Devourer and they were erased from the Universe. 
### Burial customs The ancient Egyptians maintained an elaborate set of burial customs that they believed were necessary to ensure immortality after death. These customs involved preserving the body by mummification, performing burial ceremonies, and interring with the body goods the deceased would use in the afterlife. Before the Old Kingdom, bodies buried in desert pits were naturally preserved by desiccation. The arid, desert conditions were a boon throughout the history of ancient Egypt for burials of the poor, who could not afford the elaborate burial preparations available to the elite. Wealthier Egyptians began to bury their dead in stone tombs and use artificial mummification, which involved removing the internal organs, wrapping the body in linen, and burying it in a rectangular stone sarcophagus or wooden coffin. Beginning in the Fourth Dynasty, some parts were preserved separately in canopic jars. By the New Kingdom, the ancient Egyptians had perfected the art of mummification; the best technique took 70 days and involved removing the internal organs, removing the brain through the nose, and desiccating the body in a mixture of salts called natron. The body was then wrapped in linen with protective amulets inserted between layers and placed in a decorated anthropoid coffin. Mummies of the Late Period were also placed in painted cartonnage mummy cases. Actual preservation practices declined during the Ptolemaic and Roman eras, while greater emphasis was placed on the outer appearance of the mummy, which was decorated. Wealthy Egyptians were buried with larger quantities of luxury items, but all burials, regardless of social status, included goods for the deceased. Funerary texts were often included in the grave, and, beginning in the New Kingdom, so were shabti statues that were believed to perform manual labor for them in the afterlife. Rituals in which the deceased was magically re-animated accompanied burials. After burial, living relatives were expected to occasionally bring food to the tomb and recite prayers on behalf of the deceased. ## Military The ancient Egyptian military was responsible for defending Egypt against foreign invasion, and for maintaining Egypt's domination in the ancient Near East. The military protected mining expeditions to the Sinai during the Old Kingdom and fought civil wars during the First and Second Intermediate Periods. The military was responsible for maintaining fortifications along important trade routes, such as those found at the city of Buhen on the way to Nubia. Forts also were constructed to serve as military bases, such as the fortress at Sile, which was a base of operations for expeditions to the Levant. In the New Kingdom, a series of pharaohs used the standing Egyptian army to attack and conquer Kush and parts of the Levant. Typical military equipment included bows and arrows, spears, and round-topped shields made by stretching animal skin over a wooden frame. In the New Kingdom, the military began using chariots that had earlier been introduced by the Hyksos invaders. Weapons and armor continued to improve after the adoption of bronze: shields were now made from solid wood with a bronze buckle, spears were tipped with a bronze point, and the khopesh was adopted from Asiatic soldiers. The pharaoh was usually depicted in art and literature riding at the head of the army; it has been suggested that at least a few pharaohs, such as Seqenenre Tao II and his sons, did do so. 
However, it has also been argued that "kings of this period did not personally act as frontline war leaders, fighting alongside their troops." Soldiers were recruited from the general population, but during, and especially after, the New Kingdom, mercenaries from Nubia, Kush, and Libya were hired to fight for Egypt. ## Technology, medicine and mathematics ### Technology In technology, medicine, and mathematics, ancient Egypt achieved a relatively high standard of productivity and sophistication. Traditional empiricism, as evidenced by the Edwin Smith and Ebers papyri (c. 1600 BC), is first credited to Egypt. The Egyptians created their own alphabet and decimal system. ### Faience and glass Even before the Old Kingdom, the ancient Egyptians had developed a glassy material known as faience, which they treated as a type of artificial semi-precious stone. Faience is a non-clay ceramic made of silica, small amounts of lime and soda, and a colorant, typically copper. The material was used to make beads, tiles, figurines, and small wares. Several methods can be used to create faience, but typically production involved application of the powdered materials in the form of a paste over a clay core, which was then fired. By a related technique, the ancient Egyptians produced a pigment known as Egyptian blue, also called blue frit, which is produced by fusing (or sintering) silica, copper, lime, and an alkali such as natron. The product can be ground up and used as a pigment. The ancient Egyptians could fabricate a wide variety of objects from glass with great skill, but it is not clear whether they developed the process independently. It is also unclear whether they made their own raw glass or merely imported pre-made ingots, which they melted and finished. However, they did have technical expertise in making objects, as well as adding trace elements to control the color of the finished glass. A range of colors could be produced, including yellow, red, green, blue, purple, and white, and the glass could be made either transparent or opaque. ### Medicine The medical problems of the ancient Egyptians stemmed directly from their environment. Living and working close to the Nile brought hazards from malaria and debilitating schistosomiasis parasites, which caused liver and intestinal damage. Dangerous wildlife such as crocodiles and hippos were also a common threat. The lifelong labors of farming and building put stress on the spine and joints, and traumatic injuries from construction and warfare all took a significant toll on the body. The grit and sand from stone-ground flour abraded teeth, leaving them susceptible to abscesses (though caries were rare). The diets of the wealthy were rich in sugars, which promoted periodontal disease. Despite the flattering physiques portrayed on tomb walls, the overweight mummies of many of the upper class show the effects of a life of overindulgence. Adult life expectancy was about 35 for men and 30 for women, but reaching adulthood was difficult as about one-third of the population died in infancy. Ancient Egyptian physicians were renowned in the ancient Near East for their healing skills, and some, such as Imhotep, remained famous long after their deaths. Herodotus remarked that there was a high degree of specialization among Egyptian physicians, with some treating only the head or the stomach, while others were eye-doctors and dentists. 
Training of physicians took place at the Per Ankh or "House of Life" institution, most notably those headquartered in Per-Bastet during the New Kingdom and at Abydos and Saïs in the Late Period. Medical papyri show empirical knowledge of anatomy, injuries, and practical treatments. Wounds were treated by bandaging with raw meat, white linen, sutures, nets, pads, and swabs soaked with honey to prevent infection, while opium, thyme, and belladonna were used to relieve pain. The earliest records of burn treatment describe burn dressings that use the milk from mothers of male babies. Prayers were made to the goddess Isis. Moldy bread, honey, and copper salts were also used to prevent infection from dirt in burns. Garlic and onions were used regularly to promote good health and were thought to relieve asthma symptoms. Ancient Egyptian surgeons stitched wounds, set broken bones, and amputated diseased limbs, but they recognized that some injuries were so serious that they could only make the patient comfortable until death occurred.

### Maritime technology

Early Egyptians knew how to assemble planks of wood into a ship hull and had mastered advanced forms of shipbuilding as early as 3000 BC. The Archaeological Institute of America reports that the oldest planked ships known are the Abydos boats. A group of 14 ships discovered at Abydos were constructed of wooden planks "sewn" together. Discovered by Egyptologist David O'Connor of New York University, woven straps were found to have been used to lash the planks together, and reeds or grass stuffed between the planks helped to seal the seams. Because the ships are all buried together and near a mortuary belonging to Pharaoh Khasekhemwy, originally they were all thought to have belonged to him, but one of the 14 ships dates to 3000 BC, and the associated pottery jars buried with the vessels also suggest earlier dating. The ship dating to 3000 BC was 75 feet (23 m) long and is now thought to have belonged to an earlier pharaoh, perhaps one as early as Hor-Aha. Early Egyptians also knew how to assemble planks of wood with treenails to fasten them together, using pitch for caulking the seams. The "Khufu ship", a 43.6-metre (143 ft) vessel sealed into a pit in the Giza pyramid complex at the foot of the Great Pyramid of Giza in the Fourth Dynasty around 2500 BC, is a full-size surviving example that may have filled the symbolic function of a solar barque. Early Egyptians also knew how to fasten the planks of this ship together with mortise and tenon joints. Large seagoing ships are known to have been heavily used by the Egyptians in their trade with the city states of the eastern Mediterranean, especially Byblos (on the coast of modern-day Lebanon), and in several expeditions down the Red Sea to the Land of Punt. In fact, one of the earliest Egyptian words for a seagoing ship is a "Byblos Ship", which originally defined a class of Egyptian seagoing ships used on the Byblos run; however, by the end of the Old Kingdom, the term had come to include large seagoing ships, whatever their destination. In 1977, an ancient north–south canal was discovered extending from Lake Timsah to the Ballah Lakes. It was dated to the Middle Kingdom of Egypt by extrapolating dates of ancient sites constructed along its course.
In 2011, archaeologists from Italy, the United States, and Egypt excavating a dried-up lagoon known as Mersa Gawasis unearthed traces of an ancient harbor that once launched early voyages like Hatshepsut's Punt expedition onto the open ocean. Some of the site's most evocative evidence for the ancient Egyptians' seafaring prowess includes large ship timbers and hundreds of feet of ropes, made from papyrus, coiled in huge bundles. In 2013, a team of Franco-Egyptian archaeologists discovered what is believed to be the world's oldest port, dating back about 4500 years to the time of King Cheops, on the Red Sea coast near Wadi el-Jarf (about 110 miles south of Suez).

### Mathematics

The earliest attested examples of mathematical calculations date to the predynastic Naqada period, and show a fully developed numeral system. The importance of mathematics to an educated Egyptian is suggested by a New Kingdom fictional letter in which the writer proposes a scholarly competition between himself and another scribe regarding everyday calculation tasks such as accounting of land, labor, and grain. Texts such as the Rhind Mathematical Papyrus and the Moscow Mathematical Papyrus show that the ancient Egyptians could perform the four basic mathematical operations—addition, subtraction, multiplication, and division—use fractions, calculate the areas of rectangles, triangles, and circles, and compute the volumes of boxes, columns, and pyramids. They understood basic concepts of algebra and geometry, and could solve simple sets of simultaneous equations. Mathematical notation was decimal, and based on hieroglyphic signs for each power of ten up to one million. Each of these could be written as many times as necessary to add up to the desired number; so to write the number eighty or eight hundred, the symbol for ten or one hundred was written eight times respectively. Because their methods of calculation could not handle most fractions with a numerator greater than one, they had to write fractions as the sum of several fractions. For example, they resolved the fraction two-fifths into the sum of one-third + one-fifteenth. Standard tables of values facilitated this. Some common fractions, however, were written with a special glyph, such as the equivalent of the modern two-thirds. Ancient Egyptian mathematicians knew the Pythagorean theorem as an empirical formula. They were aware, for example, that a triangle had a right angle opposite the hypotenuse when its sides were in a 3–4–5 ratio. They were able to estimate the area of a circle by subtracting one-ninth from its diameter and squaring the result: Area ≈ [(8⁄9)D]² = (256⁄81)r² ≈ 3.16r², a reasonable approximation of the formula πr². The golden ratio seems to be reflected in many Egyptian constructions, including the pyramids, but its use may have been an unintended consequence of the ancient Egyptian practice of combining the use of knotted ropes with an intuitive sense of proportion and harmony.
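The unit-fraction and circle-area rules above are easy to check numerically. The sketch below is a modern illustration only: the scribes worked from precomputed tables (such as the Rhind Papyrus 2/n table) rather than from an algorithm, and the greedy decomposition shown here simply happens to reproduce the 2/5 = 1/3 + 1/15 example mentioned above.

```python
from fractions import Fraction
import math

def unit_fractions(frac: Fraction, max_terms: int = 10) -> list[Fraction]:
    """Greedy decomposition into distinct unit fractions (a modern method,
    not the scribes' table lookup)."""
    terms = []
    while frac > 0 and len(terms) < max_terms:
        n = -(-frac.denominator // frac.numerator)  # ceil(1 / frac)
        terms.append(Fraction(1, n))
        frac -= Fraction(1, n)
    return terms

# Two-fifths resolves into one-third + one-fifteenth, as in the text.
print(unit_fractions(Fraction(2, 5)))  # [Fraction(1, 3), Fraction(1, 15)]

def egyptian_circle_area(diameter: float) -> float:
    """Rhind-style rule: subtract one-ninth of the diameter, then square."""
    return ((8 / 9) * diameter) ** 2   # equals (256/81) * r**2, about 3.16 * r**2

r = 1.0
print(egyptian_circle_area(2 * r), math.pi * r ** 2)  # ~3.1605 vs ~3.1416
```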
## Population

Estimates of the size of the population range from 1–1.5 million in the 3rd millennium BC to possibly 2–3 million by the 1st millennium BC, before growing significantly towards the end of that millennium.

### DNA

According to historian William Stiebling and archaeologist Susan N. Helft, conflicting DNA analysis on recent genetic samples such as the Amarna royal mummies has led to a lack of consensus on the genetic makeup of the ancient Egyptians and their geographic origins. In 2012, the mummies of two 20th Dynasty individuals, Ramesses III and "Unknown Man E" (believed to be Ramesses III's son Pentawer), were analyzed by Albert Zink, Yehia Z Gad and a team of researchers under Zahi Hawass. Genetic kinship analyses revealed identical haplotypes in both mummies; using Whit Athey's haplogroup predictor, the Y chromosomal haplogroup E1b1a was predicted. A 2017 study by Schuenemann et al. analysed the maternal DNA (mtDNA) of 90 mummies from Abusir el-Meleq. Additionally, three of the mummies were also analyzed for Y-DNA. Two were assigned to the West Asian haplogroup J and one to haplogroup E1b1b1; both are carried by modern Egyptians and are common in North Africa and the Middle East. The samples date from the late New Kingdom, Ptolemaic, and Roman periods, and the study used 135 modern Egyptian samples (100 from modern Egyptians and 35 from the el-Hayez Western Desert Oasis). The researchers cautioned that the affinities of the examined ancient Egyptian specimens may not be representative of those of all ancient Egyptians since they were from a single archaeological site. The authors of this study state that the Abusir el-Meleq mummies closely resembled Near Eastern populations. The genetics of the mummies remained remarkably consistent within this range even as different powers—including Nubians, Greeks, and Romans—conquered the empire. A wide range of mtDNA haplogroups were found, including clades of J, U, H, HV, M, R0, R2, K, T, L, I, N, X, and W. Modern Egyptians shared this mtDNA haplogroup profile. The authors of the study noted that the mummies at Abusir el-Meleq had a 6–15% maternal sub-Saharan component, while the 135 modern Egyptian samples had a somewhat larger maternal sub-Saharan component of 14–21%, suggesting some degree of influx after the end of the empire. The authors concluded that "Genetic continuity between ancient and modern Egyptians cannot be ruled out despite this more recent sub-Saharan African influx, while continuity with modern Ethiopians is not supported". Gourdine, Anselin and Keita criticised the methodology of the Schuenemann et al. study and argued that the sub-Saharan "genetic affinities" may be attributed to "early settlers" and that "the relevant Sub-Saharan genetic markers" do not correspond with the geography of known trade routes. In 2022, Danielle Candelora noted several limitations of the 2017 Schuenemann et al. study, such as its "untested sampling methods, small sample size and problematic comparative data", which she argued had been misused to legitimize racist conceptions of ancient Egypt with "scientific evidence". In 2023, Christopher Ehret criticised the conclusions of the 2017 study, which proposed that the ancient Egyptians had a Levantine background, as resting on insufficient sampling and a biased interpretation of the genetic data. Ehret argued that this was reminiscent of earlier scholarship and also conflicted with existing archaeological, linguistic and biological anthropological evidence, which indicates that the founding populations of ancient Egypt descended from longtime populations of northeastern Africa such as Nubia and the northern Horn of Africa. Ehret also criticised the study for asserting that there was "no sub-Saharan" component in the Egyptian population. Because the 2017 study only sampled from a single site at Abusir el-Meleq, Schuenemann et al. (2022) carried out a follow-up study by collecting samples from six different excavation sites along the entire length of the Nile Valley, spanning 4000 years of Egyptian history.
81 samples were collected from 17 mummies and 14 skeletal remains, and 18 high-quality mitochondrial genomes were reconstructed from 10 individuals. The authors argued that the analyzed mitochondrial genomes supported the results from the earlier study at Abusir el-Meleq. In 2018, the 4000-year-old mummified head of Djehutynakht, a governor in the Middle Kingdom of the 11th or 12th dynasty, was analyzed for mitochondrial DNA. The sequence of the mummy most closely resembles a U5a lineage from sample JK2903, a much more recent 2000-year-old skeleton from the Abusir el-Meleq site in Egypt, although no direct matches to the Djehutynakht sequence have been reported. Haplogroup U5 is found in modern Egyptians, including modern Egyptian Berbers from the Siwa Oasis; a 2009 study by Coudray et al. recorded haplogroup U5 at 16.7% in the Siwa Oasis, whereas haplogroup U6 is more common in other Berber populations to the west of Egypt. In 2018, the mummified remains of two high-status Egyptian relatives, Nakht-Ankh and Khnum-Nakht, were analyzed for DNA by a team of researchers from the University of Manchester. The Y-chromosome sequences were not complete, but the Y-chromosome SNPs indicated that they had different fathers, suggesting that they were half-brothers. The SNP identities were consistent with mtDNA haplogroup M1a1 with an 88.05–91.27% degree of confidence, thus "confirming the African origins of the two individuals" according to the study authors, based on their maternal lineage. A 2020 DNA study by Gad, Hawass et al. analysed mitochondrial and Y-chromosomal haplogroups from Tutankhamun's family members of the 18th Dynasty, using comprehensive control procedures to ensure quality results. They found that the Y-chromosome haplogroup of the family was R1b, which is believed to have originated in the Western Asia/Near Eastern region, and dispersed from there to Europe and parts of Africa during the Neolithic. Haplogroup R1b is carried by modern Egyptians. Modern Egypt is also the only African country that is known to harbor all three R1 subtypes, including R1b-M269. The mitochondrial haplogroup was K, which is most likely also part of a Near Eastern lineage. The profiles for Tutankhamun and Amenhotep III were incomplete and the analysis produced differing probability figures despite having concordant allele results. Because the relationships of these two mummies with the KV55 mummy had previously been confirmed in an earlier study, the haplogroup prediction of both mummies could be derived from the full profile of the KV55 data. Genetic analysis indicated the following haplogroups:

- Tutankhamun YDNA R1b / mtDNA K
- Akhenaten YDNA R1b / mtDNA K
- Amenhotep III YDNA R1b / mtDNA H2b
- Yuya YDNA G2a / mtDNA K
- Tiye mtDNA K
- Thuya mtDNA K

Both Y-DNA haplogroups R1b and G2a, as well as both mtDNA haplogroups H and K, are carried by modern Egyptians. In a comment on Hawass et al. (2010 and 2012), Keita pointed out, based on inserting the data into the PopAffiliator online calculator (which only calculates affinity to East Asia, Eurasia, and sub-Saharan Africa, not to North Africa or the Near East), that the majority of the samples "have an affinity with sub-Saharan Africans in one affinity analysis, which does not mean that they lacked other affiliations—an important point that typological thinking obscures. Also, different data and algorithms might give different results, which would illustrate the complexity of biological heritage and its interpretation."
## Legacy

The culture and monuments of ancient Egypt have left a lasting legacy on the world. Egyptian civilization significantly influenced the Kingdom of Kush and Meroë, both of which adopted Egyptian religious and architectural norms (hundreds of pyramids, 6–30 meters high, were built in Egypt and Sudan) and used Egyptian writing as the basis of the Meroitic script. Meroitic is the oldest written language in Africa, other than Egyptian, and was used from the 2nd century BC until the early 5th century AD. The cult of the goddess Isis, for example, became popular in the Roman Empire, as obelisks and other relics were transported back to Rome. The Romans also imported building materials from Egypt to erect Egyptian-style structures. Early historians such as Herodotus, Strabo, and Diodorus Siculus studied and wrote about the land, which Romans came to view as a place of mystery. During the Middle Ages and the Renaissance, Egyptian pagan culture was in decline after the rise of Christianity and later Islam, but interest in Egyptian antiquity continued in the writings of medieval scholars such as Dhul-Nun al-Misri and al-Maqrizi. In the seventeenth and eighteenth centuries, European travelers and tourists brought back antiquities and wrote stories of their journeys, leading to a wave of Egyptomania across Europe, as evident in symbolism like the Eye of Providence and the Great Seal of the United States. This renewed interest sent collectors to Egypt, who took, purchased, or were given many important antiquities. Napoleon arranged the first studies in Egyptology when he brought some 150 scientists and artists to study and document Egypt's natural history, which was published in the Description de l'Égypte. In the 20th century, the Egyptian Government and archaeologists alike recognized the importance of cultural respect and integrity in excavations. Since the 2010s, the Ministry of Tourism and Antiquities has overseen excavations and the recovery of artifacts.

## See also

- Egyptology
- Glossary of ancient Egypt artifacts
- Index of ancient Egypt–related articles
- Outline of ancient Egypt
- List of ancient Egyptians
- List of Ancient Egyptian inventions and discoveries
- Archaeology of Ancient Egypt
- Archeological Map of Egypt
- British school of diffusionism

## Citation
37,932,342
Political career of John C. Breckinridge
1,142,120,873
Career of Vice President of the United States, 1857
[ "Breckinridge family", "Political careers by person", "Politics of Kentucky" ]
The political career of John C. Breckinridge included service in the state government of Kentucky and the federal government of the United States, as well as in the government of the Confederate States of America. In 1857, at the age of 36, he was inaugurated as Vice President of the United States under James Buchanan. He remains the youngest person ever to hold the office. Four years later, he ran as the presidential candidate of a dissident group of Southern Democrats, but lost the election to the Republican candidate Abraham Lincoln. A member of the Breckinridge political family, John C. Breckinridge became the first Democrat to represent Fayette County in the Kentucky House of Representatives, and in 1851, he became the first Democrat in over 20 years to represent Kentucky's 8th congressional district. A champion of strict constructionism, states' rights, and popular sovereignty, he supported Stephen A. Douglas's Kansas–Nebraska Act as a means of addressing slavery in the territories acquired by the U.S. in the Mexican–American War. Considering his re-election to the House of Representatives unlikely in 1854, he returned to private life and his legal practice. He was nominated for vice president at the 1856 Democratic National Convention, and although he and Buchanan won the election, he enjoyed little influence in Buchanan's administration. In 1859, the Kentucky General Assembly elected Breckinridge to a U.S. Senate term that would begin in 1861. In the 1860 United States presidential election, Breckinridge captured the electoral votes of most of the Southern states, but finished a distant second among four candidates. Lincoln's election as President prompted the secession of the Southern states to form the Confederate States of America. Though Breckinridge sympathized with the Southern cause, in the Senate he worked futilely to reunite the states peacefully. After the Confederates fired on Fort Sumter, beginning the Civil War, he opposed allocating resources for Lincoln to fight the Confederacy. Fearing arrest after Kentucky sided with the Union, he fled to the Confederacy, joined the Confederate States Army, and was subsequently expelled from the Senate. He served in the Confederate Army from October 1861 to February 1865, when Confederate President Jefferson Davis appointed him Confederate States Secretary of War. Then, concluding that the Confederate cause was hopeless, he encouraged Davis to negotiate a national surrender. Davis's capture on May 10, 1865, effectively ended the war, and Breckinridge fled to Cuba, then Great Britain, and finally Canada, remaining in exile until President Andrew Johnson's offer of amnesty in 1868. Returning to Kentucky, he refused all requests to resume his political career and died of complications related to war injuries in 1875.

## Formative years

Historian James C. Klotter has speculated that, had John C. Breckinridge's father, Cabell, lived, he would have steered his son to the Whig Party and the Union, rather than the Democratic Party and the Confederacy, but the Kentucky Secretary of State and former Speaker of the Kentucky House of Representatives died of a fever on September 1, 1823, months before his son's third birthday. Burdened with her husband's debts, widow Mary Breckinridge and her children moved to her in-laws' home near Lexington, Kentucky, where John C. Breckinridge's grandmother taught him the political philosophies of his late grandfather, U.S. Attorney General John Breckinridge.
John Breckinridge believed the federal government was created by, and subject to, the co-equal governments of the states. As a state representative, he introduced the Kentucky Resolutions of 1798 and 1799, which denounced the Alien and Sedition Acts and asserted that states could nullify them and other federal laws that they deemed unconstitutional. A strict constructionist, he held that the federal government could only exercise powers explicitly given to it in the Constitution. Most of the Breckinridges were Whigs, but John Breckinridge's posthumous influence inclined his grandson toward the Democratic Party. Additionally, John C. Breckinridge's friend and law partner, Thomas W. Bullock, was from a Democratic family. In 1842, Bullock told Breckinridge that by the time they opened their practice in Burlington, Iowa, "you were two-thirds of a Democrat"; living in heavily Democratic Iowa Territory further distanced him from Whiggery. He wrote weekly editorials in the Democratic Iowa Territorial Gazette and Advisor, and in February 1843, he was named to the Des Moines County Democratic committee. A letter from Breckinridge's brother-in-law related that, when Breckinridge's uncle William learned that his nephew had "become loco-foco", he said, "I felt as I would have done if I had heard that my daughter had been dishonored." On a visit to Kentucky in 1843, Breckinridge met and married Mary Cyrene Burch, ending his time in Iowa. ## Views on slavery Slavery issues dominated Breckinridge's political career, although historians disagree about Breckinridge's views. In Breckinridge: Statesman, Soldier, Symbol, William C. Davis argues that, by adulthood, Breckinridge regarded slavery as evil; his entry in the 2002 Encyclopedia of World Biography records that he advocated voluntary emancipation. In Proud Kentuckian: John C. Breckinridge 1821–1875, Frank Heck disagrees, citing Breckinridge's consistent advocacy for slavery protections, beginning with his opposition to emancipationist candidates—including his uncle, Robert Jefferson Breckinridge—in the state elections of 1849. ### Early influences Breckinridge's grandfather, John, owned slaves, believing it was a necessary evil in an agrarian economy. He hoped for gradual emancipation but did not believe the federal government was empowered to effect it; Davis wrote that this became "family doctrine". As a U.S. Senator, John Breckinridge insisted that decisions about slavery in Louisiana Territory be left to its future inhabitants, essentially the "popular sovereignty" advocated by John C. Breckinridge prior to the Civil War. John C. Breckinridge's father, Cabell, embraced gradual emancipation and opposed government interference with slavery, but Cabell's brother Robert, a Presbyterian minister, became an abolitionist, concluding that slavery was morally wrong. Davis recorded that all the Breckinridges were pleased when the General Assembly upheld the ban on importing slaves to Kentucky in 1833. John C. Breckinridge encountered conflicting influences as an undergraduate at Centre College and in law school at Transylvania University. Centre President John C. Young, Breckinridge's brother-in-law, believed in states' rights and gradual emancipation, as did George Robertson, one of Breckinridge's instructors at Transylvania, but James G. Birney, father of Breckinridge's friend and Centre classmate William Birney, was an abolitionist. In an 1841 letter to Robert Breckinridge, who became his surrogate father after Cabell Breckinridge's death, John C. 
Breckinridge wrote that only "ignorant, foolish men" feared abolition. In an Independence Day address in Frankfort later that year, he decried the "unlawful dominion over the bodies ... of men". An acquaintance believed that Breckinridge's move to Iowa Territory was motivated, in part, by the fact that it was a free territory under the Missouri Compromise. After returning to Kentucky, Breckinridge became friends with abolitionists Cassius Marcellus Clay, Garrett Davis, and Orville H. Browning. He represented freedmen in court and loaned them money. He was a Freemason and member of the First Presbyterian Church, both of which opposed slavery. Nevertheless, because blacks were educationally and socially disadvantaged in the South, Breckinridge concluded that "the interests of both races in the Commonwealth would be promoted by the continuance of their present relations". He supported the new state constitution adopted in 1850, which forbade the immigration of freedmen to Kentucky and required emancipated slaves to be expelled from the state. Believing it was best to relocate freedmen to the African colony of Liberia, he supported the Kentucky branch of the American Colonization Society. The 1850 Census showed that Breckinridge owned five slaves, aged 11 to 36. Heck recorded that his slaves were well-treated but noted that this was not unusual and proved nothing about his views on slavery. ### Moderate reputation Because Breckinridge defended both the Union and slavery in the General Assembly, he was considered a moderate early in his political career. In June 1864, Pennsylvania's John W. Forney opined that Breckinridge had been "in no sense an extremist" when elected to Congress in 1851. Of his early encounters with Breckinridge, Forney wrote: "If he had a conscientious feeling, it was hatred of slavery, and both of us, 'Democrats' as we were, frequently confessed that it was a sinful and an anti-Democratic institution, and that the day would come when it must be peaceably or forcibly removed." Heck discounts this statement, pointing out that Forney was editor of a pro-Union newspaper and Breckinridge a Confederate general at the time it was published. As late as the 1856 presidential election, some alleged that Breckinridge was an abolitionist. By the time he began his political career, Breckinridge had concluded that slavery was more a constitutional issue than a moral one. Slaves were property, and the Constitution did not empower the federal government to interfere with property rights. From Breckinridge's constructionist viewpoint, allowing Congress to legislate emancipation without constitutional sanction would lead to "unlimited dominion over the territories, excluding the people of the slave states from emigrating thither with their property". As a private citizen, he supported the slavery protections in the Kentucky Constitution of 1850 and denounced the Wilmot Proviso, which would have forbidden slavery in territory acquired in the Mexican–American War. As a state legislator, he declared slavery a "wholly local and domestic" matter, to be decided separately by the residents of each state and territory. Because Washington, D.C., was a federal entity and the federal government could not interfere with property rights, he concluded that forced emancipation there was unconstitutional. As a congressman, he insisted on Congress's "perfect non-intervention" with slavery in the territories. 
Debating the 1854 Kansas–Nebraska Act, he explained, "The right to establish [slavery in a territory by government sanction] involves the correlative right to prohibit; and, denying both, I would vote for neither." ### Later views Davis notes that Breckinridge's December 21, 1859, address to the state legislature marked a change in his public statements about slavery. He decried the Republicans' desire for "negro equality", his first public indication that he may have believed blacks were biologically inferior to whites. He declared that the Dred Scott decision showed that federal courts afforded adequate protection for slave property, but advocated a federal slave code if future courts failed to enforce those protections; this marked a departure from his previous doctrine of "perfect non-interference". Asserting that John Brown's raid on Harpers Ferry proved Republicans intended to force abolition on the South, he predicted "resistance [to the Republican agenda] in some form is inevitable". He still urged the Assembly against secession—"God forbid that the step shall ever be taken!"—but his discussion of growing sectional conflict bothered some, including his uncle Robert. Klotter wrote that Breckinridge's sale of a female slave and her six-week-old child in November 1857 probably ended his days as a slaveholder. Slaves were not listed among his assets in the 1860 Census, but Heck noted that he had little need for slaves at that time, since he was living in Lexington's Phoenix Hotel after returning to Kentucky from his term as vice president. Some slavery advocates refused to support him in the 1860 presidential race because he was not a slaveholder. Klotter noted that Breckinridge fared better in rural areas of the South, where there were fewer slaveholders; in urban areas where the slave population was higher, he lost to Constitutional Unionist candidate John Bell, who owned 166 slaves. William C. Davis recorded that, in most of the South, the combined votes for Bell and Illinois Senator Stephen Douglas exceeded those cast for Breckinridge. After losing the election to Abraham Lincoln, Breckinridge worked for adoption of the Crittenden Compromise—authored by fellow Kentuckian John J. Crittenden—as a means of preserving the Union. Breckinridge believed the Crittenden proposal—restoring the Missouri Compromise line as the separator between slave and free territory in exchange for stricter enforcement of the Fugitive Slave Act of 1850 and federal non-interference with slavery in the territories and Washington, D.C.—was the most extreme proposal to which the South would agree. Ultimately, the compromise was rejected and the Civil War soon followed. ## Early political career A supporter of the annexation of Texas and "manifest destiny", Breckinridge campaigned for James K. Polk in the 1844 presidential election, prompting a relative to observe that he was "making himself very conspicuous here by making flaming loco foco speeches at the Barbecues". He decided against running for Scott County clerk after his law partner complained that he spent too much time in politics. In 1845, he declined to seek election to the U.S. House of Representatives from the eighth district but campaigned for Alexander Keith Marshall, his party's unsuccessful nominee. He supported Zachary Taylor for the presidency in mid-1847 but endorsed the Democratic ticket of Lewis Cass and William O. Butler after Taylor became a Whig in 1848. 
### Kentucky House of Representatives In October 1849, Kentucky voters called for a constitutional convention. Emancipationists, including Breckinridge's uncles William and Robert, his brother-in-law John C. Young, and his friend Cassius Marcellus Clay, nominated "friends of emancipation" to seek election to the convention and the state legislature. In response, Breckinridge, who opposed "impairing [slavery protections] in any form", was nominated by a bipartisan pro-slavery convention for one of Fayette County's two seats in the Kentucky House of Representatives. With 1,481 votes, 400 more than any of his opponents, Breckinridge became the first Democrat elected to the state legislature from Fayette County, which was heavily Whig. When the House convened in December 1849, a member from Mercer County nominated Breckinridge for Speaker against two Whigs. After receiving 39 votes—8 short of a majority—on the first three ballots, he withdrew, and the position went to Whig Thomas Reilly. Assigned to the committees on the Judiciary and Federal Relations, Breckinridge functioned as the Democratic floor leader during the session. Davis wrote that his most important work during the session was bank reform. Breckinridge's first speech favored allowing the Kentucky Colonization Society to use the House chamber; later, he advocated directing Congress to establish an African freedmen colony and to meet the costs of transporting settlers there. Funding internal improvements was traditionally a Whig stance, but Breckinridge advocated conducting a state geologic survey, making the Kentucky River more navigable, chartering a turnpike, incorporating a steamboat company, and funding the Kentucky Lunatic Asylum. As a reward for supporting these projects, he presided over the approval of the Louisville and Bowling Green Railroad's charter and was appointed director of the asylum. Resolutions outlining Kentucky's views on the proposed Compromise of 1850 were referred to the Committee on Federal Relations. The committee's Whig majority favored one calling the compromise a "fair, equitable, and just basis" for dealing with slavery in the territories and urging Congress not to interfere with slavery there or in Washington, D.C. Feeling this left open the issue of Congress's ability to legislate emancipation, Breckinridge asserted in a competing resolution that Congress could not establish or abolish slavery in states or territories. Both resolutions, and several passed by the state Senate, were laid on the table without being adopted. Breckinridge left the session on March 4, 1850, three days before its adjournment, to tend to John Milton Breckinridge, his infant son who had fallen ill; the boy died on March 18. To distract from his grief, he campaigned for ratification of the new constitution, objecting only to its difficult amendment process. He declined renomination, citing concerns "of a private and imperative nature". Davis wrote that the problem was money, since his absence from Lexington had hurt his legal practice, but his son's death was also a factor. ## U.S. House of Representatives At an October 17, 1850, barbecue celebrating the Compromise of 1850, Breckinridge toasted its author, Whig Party founder Henry Clay. Clay reciprocated by praising Breckinridge's grandfather and father, expressing hope that Breckinridge would use his talents to serve his country, then embracing him.
Some observers believed that Clay was endorsing Breckinridge for higher office, and Whig newspapers began referring to him as "a sort of half-way Whig" and implying that he voted for Taylor in 1848. ### First term (1851–1853) Delegates to the Democrats' January 1851 state convention nominated Breckinridge to represent Kentucky's eighth district in the U.S. House of Representatives. Called the "Ashland district" because it contained Clay's Ashland estate and much of the area he once represented, Whigs typically won there by 600 to 1,000 votes. A Democrat had not represented it since 1828, and in the previous election no Democrat had sought the office. Breckinridge's opponent, Leslie Combs, was a popular War of 1812 veteran and former state legislator. As they campaigned together, Breckinridge's eloquence contrasted with Combs' plainspoken style. Holding that "free thought needed free trade", Breckinridge opposed Whig protective tariffs. He only favored federal funding of internal improvements "of a national character". Carrying only three of seven counties, but bolstered by a two-to-one margin in Owen County, Breckinridge garnered 54% of the vote, winning the election by a margin of 537. Considered for Speaker of the House, Breckinridge believed his election unlikely and refused to run against fellow Kentuckian Linn Boyd. Boyd was elected, and despite Breckinridge's gesture, assigned him to the lightly-regarded Foreign Affairs Committee. Breckinridge resisted United States Democratic Review editor George Nicholas Sanders' efforts to recruit him to the Young America movement. Like Young Americans, Breckinridge favored westward expansion and free trade, but he disagreed with the movement's support of European revolutions and its disdain for older statesmen. On March 4, 1852, Breckinridge made his first speech in the House, defending presidential aspirant William Butler against charges by Florida's Edward Carrington Cabell, a Young American and distant cousin, that Butler secretly sympathized with the Free Soilers. He denounced Sanders for his vitriolic attacks on Butler and for calling all likely Democratic presidential candidates except Stephen Douglas "old fogies". The speech made Breckinridge a target of Whigs, Young Americans, and Douglas supporters. Humphrey Marshall, a Kentucky Whig who supported incumbent President Millard Fillmore, attacked Breckinridge for claiming Fillmore had not fully disclosed his views on slavery. Illinois' William Alexander Richardson, a Douglas backer, tried to distance Douglas from Sanders' attacks on Butler, but Breckinridge showed that Douglas endorsed the Democratic Review a month after it printed its first anti-Butler article. Finally, Breckinridge's cousin, California's Edward C. Marshall, charged that Butler would name Breckinridge Attorney General in exchange for his support and revived the charge that Breckinridge broke party ranks, supporting Zachary Taylor for president. Breckinridge ably defended himself, but Sanders continued to attack him and Butler, claiming Butler would name Breckinridge as his running mate, even though Breckinridge was too young to qualify as vice president. After his maiden speech, Breckinridge took a more active role in the House. In debate with Ohio's Joshua Reed Giddings, he defended the Fugitive Slave Law's constitutionality and criticized Giddings for hindering the return of fugitive slaves. He opposed Tennessee Congressman Andrew Johnson's Homestead Bill, fearing it would create more territories that excluded slavery. 
Although generally opposed to funding local improvements, he supported the repair of two Potomac River bridges to avoid higher costs later. Other minor stands included supporting measures to benefit his district's hemp farmers, voting against giving the president ten more appointments to the U.S. Naval Academy, and opposing funds for a sculpture of George Washington because the sculptor proposed depicting Washington in a toga. Beginning in April, Breckinridge made daily visits to an ailing Henry Clay. Clay died June 29, 1852, and Breckinridge garnered nationwide praise and enhanced popularity in Kentucky after eulogizing Clay in the House. Days later, he spoke in opposition to increasing a subsidy to the Collins Line for carrying trans-Atlantic mail, noting that Collins profited by carrying passengers and cargo on mail ships. In wartime, the government could commandeer and retrofit Collins's steamboats as warships, but Breckinridge cited Commodore Matthew C. Perry's opinion that they would be useless in war. Finally, he showed Cornelius Vanderbilt's written statement promising to build a fleet of mail ships at his expense and carry the mail for \$4 million less than Collins. Despite this, the House approved the subsidy increase. ### Second term (1853–1855) With Butler's chances for the presidential nomination waning, Breckinridge convinced the Kentucky delegation to the 1852 Democratic National Convention not to nominate Butler until later balloting when he might become a compromise candidate. He urged restraint when Lewis Cass's support dropped sharply on the twentieth ballot, but Kentucky's delegates would wait no longer; on the next ballot, they nominated Butler, but he failed to gain support. After Franklin Pierce, Breckinridge's second choice, was nominated, Breckinridge tried, unsuccessfully, to recruit Douglas to Pierce's cause. Pierce lost by 3,200 votes in Kentucky—one of four states won by Winfield Scott—but was elected to the presidency, and appointed Breckinridge governor of Washington Territory in recognition of his efforts. Unsure of his re-election chances in Kentucky, Breckinridge had sought the appointment, but after John J. Crittenden, rumored to be his challenger, was elected to the Senate in 1853, he decided to decline it and run for re-election. #### Election The Whigs chose Attorney General James Harlan to oppose Breckinridge, but he withdrew in March when some party factions opposed him. Robert P. Letcher, a former governor who had not lost in 14 elections, was the Whigs' second choice. Letcher was an able campaigner who combined oratory and anecdotes to entertain and energize an audience. Breckinridge focused on issues in their first debate, comparing the Whig Tariff of 1842 to the Democrats' lower Walker tariff, which increased trade and yielded more tax revenue. Instead of answering Breckinridge's points, Letcher appealed to party loyalty, claiming Breckinridge would misrepresent the district "because he is a Democrat". Letcher appealed to Whigs "to protect the grave of Mr. [Henry] Clay from the impious tread of Democracy", but Breckinridge pointed to his friendly relations with Clay, remarking that Clay's will did not mandate that "his ashes be exhumed" and "thrown into the scale to influence the result of the present Congressional contest". Cassius Clay, Letcher's political enemy, backed Breckinridge despite their differences on slavery. 
Citing Clay's support and the abolitionism of Breckinridge's uncle Robert, Letcher charged that Breckinridge was an abolitionist. In answer, Breckinridge quoted newspaper accounts and sworn testimony, provided by John L. Robinson, of a speech Letcher made in Indiana for Zachary Taylor in 1848. In the speech, made alongside Thomas Metcalfe, another former Whig governor of Kentucky, Letcher predicted that the Kentucky Constitution then being drafted would provide for gradual emancipation, declaring, "It is only the ultra men in the extreme South who desire the extension of slavery." When Letcher confessed doubts about his election chances, Whigs began fundraising outside the district, using the money to buy votes or pay Breckinridge supporters not to vote. Breckinridge estimated that the donations, which came from as far away as New York and included contributions from the Collins Line, totaled \$30,000; Whig George Robertson believed it closer to \$100,000. Washington, D.C., banker William Wilson Corcoran contributed \$1,000 to Breckinridge, who raised a few thousand dollars. Out of 12,538 votes cast, Breckinridge won by 526. He received 71% of the vote in Owen County, which recorded 123 more votes than registered voters. Grateful for the county's support, he nicknamed his son, John Witherspoon Breckinridge, "Owen". #### Service Of 234 representatives in the House, Breckinridge was one of 80 re-elected to the Thirty-third Congress. His relative seniority, and Pierce's election, increased his influence. He was rumored to have Pierce's backing for Speaker of the House, but he again deferred to Boyd; Maryland's Augustus R. Sollers spoiled Boyd's unanimous election by voting for Breckinridge. Still not given a committee chairmanship, he was assigned to the Ways and Means Committee, where he secured passage of a bill to cover overspending in fiscal year 1853–1854; it was the only time in his career that he solely managed a bill. His attempts to increase Kentucky's allocation in a rivers and harbors bill were unsuccessful but popular with his Whig constituents. In January 1854, Douglas introduced the Kansas–Nebraska Act to organize the Nebraska Territory. Southerners had thwarted his previous attempts to organize the territory because Nebraska lay north of parallel 36°30' north, the line separating slave and free territory under the Missouri Compromise. They feared that the territory would be organized into new free states that would vote against the South on slavery issues. The Kansas–Nebraska Act allowed the territory's settlers to decide whether or not to permit slavery, an implicit repeal of the Missouri Compromise. Kentucky Senator Archibald Dixon's amendment to make the repeal explicit angered northern Democrats, but Breckinridge believed it would move the slavery issue from national to local politics, and he urged Pierce to support it. Breckinridge wrote to his uncle Robert that he "had more to do than any man here, in putting [the Act] in its present shape", but Heck notes that few extant records support this claim. The repeal amendment made the act more palatable to the South; only 9 of 58 Southern congressmen voted against it. No Northern Whigs voted for the measure, but 44 of 86 Northern Democrats voted in the affirmative, enough to pass it. The Senate quickly concurred, and Pierce signed the act into law on May 30, 1854. During the debate on the bill, New York's Francis B. 
Cutting demanded that Breckinridge retract or explain a statement he had made, which Breckinridge understood as a challenge to duel. Under the code duello, the challenged party selected the weapons and the distance between combatants; Breckinridge chose rifles at 60 paces and suggested the duel be held in Silver Spring, Maryland, on the estate of his friend, Francis Preston Blair. Cutting had not meant his remark as a challenge, but insisted that he was now challenged and selected pistols at 10 paces. While their representatives tried to clarify matters, Breckinridge and Cutting made amends, averting the duel. Had it taken place, Breckinridge could have been removed from the House; the 1850 Kentucky Constitution prevented duelers from holding office. In the second session of the 33rd Congress, Breckinridge acted as spokesman for Ways and Means Committee bills, including a bill to assume and pay the debts Texas incurred prior to its annexation. Breckinridge's friends, W. W. Corcoran and Jesse D. Bright, were two of Texas's major creditors. The bill, which was approved, paid only those debts related to powers Texas surrendered to Congress upon annexation. Breckinridge was disappointed that the House defeated a measure to pay the Sioux \$12,000 owed them for the 1839 purchase of an island in the Mississippi River; the debt was never paid. Another increase in the subsidy to the Collins Line passed over his opposition, but Pierce vetoed it. ### Retirement from the House In February 1854, the General Assembly's Whig majority gerrymandered the eighth district, removing over 500 Democratic voters and replacing them with several hundred Whig voters by removing Owen and Jessamine counties from the district and adding Harrison and Nicholas counties to it. The cooperation of the Know Nothing Party—a relatively new nativist political entity—with the faltering Whigs further hindered Breckinridge's re-election chances. With his family again in financial straits, his wife wanted him to retire from national politics. Pierre Soulé, the U.S. Minister to Spain, resigned in December 1854 after being unable to negotiate the annexation of Cuba and angering the Spanish by drafting the Ostend Manifesto, which called for the U.S. to take Cuba by force. Pierce nominated Breckinridge to fill the vacancy, but did not tell him until just before the Senate's January 16 confirmation vote. After consulting Secretary of State William L. Marcy, Breckinridge concluded that the salary was insufficient and Soulé had so damaged Spanish relations that he would be unable to accomplish anything significant. In a letter to Pierce on February 8, 1855, he cited reasons "of a private and domestic nature" for declining the nomination. On March 17, 1855, he announced he would retire from the House. Breckinridge and Minnesota Territory's Henry Mower Rice were among the speculators who invested in land near present-day Superior, Wisconsin. Rice disliked Minnesota's territorial governor, Willis A. Gorman, and petitioned Pierce to replace him with Breckinridge. Pierce twice investigated Gorman, but found no grounds to remove him from office. Breckinridge fell ill when traveling to view his investments in mid-1855 and was unable to campaign in the state elections. Know Nothings captured every state office and six congressional districts—including the eighth district—and Breckinridge sent regrets to friends in Washington, D.C., promising to take a more active role in the 1856 campaigns. ## U.S. 
vice president Two Kentuckians—Breckinridge's friend, Governor Lazarus W. Powell and his enemy, Linn Boyd—were potential Democratic presidential nominees in 1856. Breckinridge—a delegate to the national convention and designated as a presidential elector—favored Pierce's re-election but convinced the state Democratic convention to leave the delegates free to support any candidate the party coalesced behind. To a New Yorker who proposed that Breckinridge's nomination could unite the party, he replied "Humbug". ### Election Pierce was unable to secure the nomination at the national convention, so Breckinridge switched his support to Stephen Douglas, but the combination of Pierce and Douglas supporters did not prevent James Buchanan's nomination. After Douglas's floor manager, William Richardson, suggested that nominating Breckinridge for vice president would help Buchanan secure the support of erstwhile Douglas backers in the general election, Louisiana's J. L. Lewis nominated him. Breckinridge declined in deference to Linn Boyd but received 51 votes on the first ballot, behind Mississippi's John A. Quitman with 59, but ahead of third-place Boyd, who garnered 33. On the second ballot, Breckinridge received overwhelming support, and opposition delegates changed their votes to make his nomination unanimous. The election was between Buchanan and Republican John C. Frémont in the north and between Buchanan and Millard Fillmore, nominated by a pro-slavery faction of the Know Nothings, in the South. Tennessee Governor Andrew Johnson and Congressional Globe editor John C. Rives promoted the possibility that Douglas and Pierce supporters would back Fillmore in the Southern states, denying Buchanan a majority in the Electoral College and throwing the election to the House of Representatives. There, Buchanan's opponents would prevent a vote, and the Senate's choice for vice president—certain to be Breckinridge—would become president. There is no evidence that Breckinridge countenanced this scheme. Defying contemporary political convention, Breckinridge spoke frequently during the campaign, stressing Democratic fidelity to the constitution and charging that the Republican emancipationist agenda would tear the country apart. His appearances in the critical state of Pennsylvania helped allay Buchanan's fears that Breckinridge desired to throw the election to the House. "Buck and Breck" won the election with 174 electoral votes to Frémont's 114 and Fillmore's 8, and Democrats carried Kentucky for the first time since 1828. Thirty-six at the time of his inauguration on March 4, 1857, Breckinridge remains the youngest vice president in U.S. history. The Constitution requires the president and vice-president to be at least thirty-five years old. ### Service When Breckinridge asked to meet with Buchanan shortly after the inauguration, Buchanan told him to come to the White House and ask to see the hostess, Harriet Lane. Offended, Breckinridge refused to do so; Buchanan's friends later explained that asking to see Lane was a secret instruction to take a guest to the president. Buchanan apologized for the misunderstanding, but the event portended a poor relationship between the two men. Resentful of Breckinridge's support for both Pierce and Douglas, Buchanan allowed him little influence in the administration. Breckinridge's recommendation that former Whigs and Kentuckians—Powell, in particular—be included in Buchanan's cabinet went unheeded. Kentuckians James B. Clay and Cassius M. 
Clay were offered diplomatic missions to Berlin and Peru, respectively, but both declined. Buchanan often asked Breckinridge to receive and entertain foreign dignitaries, but in 1858, Breckinridge declined Buchanan's request that he resign and take the again-vacant position as U.S. Minister to Spain. The only private meeting between the two occurred near the end of Buchanan's term, when the president summoned Breckinridge to get his advice on whether to issue a proclamation declaring a day of "Humiliation and Prayer" over the divided state of the nation; Breckinridge affirmed that Buchanan should make the proclamation. As vice president, Breckinridge was tasked with presiding over the debates of the Senate. In an early address to that body, he promised, "It shall be my constant aim, gentlemen of the Senate, to exhibit at all times, to every member of this body, the courtesy and impartiality which are due to the representatives of equal States." Historian Lowell H. Harrison wrote that, while Breckinridge fulfilled his promise to the satisfaction of most, acting as moderator limited his participation in debate. Five tie-breaking votes provided a means of expressing his views. Economic motivations explained two—forcing an immediate vote on a codfishing tariff and limiting military pensions to \$50 per month. A third cleared the floor for a vote on Douglas's motion to admit Oregon to the Union, and a fourth defeated Johnson's Homestead Bill. The final vote effected a wording change in a resolution forbidding constitutional amendments that empowered Congress to interfere with property rights. The Senate's move from the Old Senate Chamber to a more spacious one on January 4, 1859, provided another opportunity. Afforded the chance to make the last address in the old chamber, Breckinridge encouraged compromise and unity among the states to resolve sectional conflicts. Despite irregularities in the approval of the Lecompton Constitution by Kansas voters, Breckinridge agreed with Buchanan that it was legitimate, but he kept his position secret, and some believed he agreed with his friend, Stephen Douglas, that Lecompton was invalid. Breckinridge's absence from the Senate during debate on admitting Kansas to the Union under Lecompton seemed to confirm this, but his leave—to take his wife from Baton Rouge, Louisiana, where she was recovering from an illness, to Washington, D.C.—had been planned for months. The death of his grandmother, Polly Breckinridge, prompted him to leave earlier than planned. During his absence, both houses of Congress voted to re-submit the Lecompton Constitution to Kansas voters for approval. On resubmission, it was overwhelmingly rejected. By January 1859, friends knew Breckinridge desired the U.S. Senate seat of John J. Crittenden, whose term expired on March 3, 1861. The General Assembly would elect Crittenden's successor in December 1859, so Breckinridge's election would not affect any presidential aspirations he might harbor. Democrats chose Breckinridge's friend Beriah Magoffin over Linn Boyd as their gubernatorial nominee, bolstering Breckinridge's chances for the senatorship, the presidency, or both. Boyd was expected to be Breckinridge's chief opponent for the Senate, but he withdrew on November 28, citing ill health, and died three weeks later. The Democratic majority in the General Assembly elected Breckinridge to succeed Crittenden by a vote of 81 to 53 over Joshua Fry Bell, whom Magoffin had defeated for the governorship in August.
After Minnesota's admission to the Union in May 1858, opponents accused Breckinridge of rigging a random draw so that his friend, Henry Rice, would get the longer of the state's two Senate terms. Senate Secretary Asbury Dickins blunted the charges, averring that he alone handled the instruments used in the drawing. Republican Senator Solomon Foot closed a special session of the Thirty-sixth Congress in March 1859 by offering a resolution praising Breckinridge for his impartiality; after the session, the Republican-leaning New York Times noted that while the star of the Buchanan administration "falls lower every hour in prestige and political consequence, the star of the Vice President rises higher". ## Presidential election of 1860 Breckinridge's lukewarm support for Douglas in his 1858 senatorial re-election bid against Abraham Lincoln convinced Douglas that Breckinridge would seek the Democratic presidential nomination, but in a January 1860 letter to his uncle, Breckinridge averred he was "firmly resolved not to". Douglas's political enemies supported Breckinridge, and Buchanan reluctantly dispensed patronage to Breckinridge allies, further alienating Douglas. After Breckinridge left open the possibility of supporting a federal slave code in 1859, Douglas wrote to Robert Toombs that he would support his enemy and fellow Georgian Alexander H. Stephens for the nomination over Breckinridge, although he would vote for Breckinridge over any Republican in the general election. ### Nomination Breckinridge asked James Clay to protect his interests at the 1860 Democratic National Convention in Charleston, South Carolina. Clay, Lazarus Powell, William Preston, Henry Cornelius Burnett, and James B. Beck desired to nominate Breckinridge for president, but in a compromise with Kentucky's Douglas backers, the delegation went to Charleston committed to former Treasury Secretary James Guthrie of Louisville. Fifty Southern Democrats, upset at the convention's refusal to include slavery protection in the party's platform, walked out of the convention; the remaining delegates decided that nominations required a two-thirds majority of the original 303 delegates. For 35 ballots, Douglas ran well ahead of Guthrie but short of the needed majority. Arkansas's lone remaining delegate nominated Breckinridge, but Beck asked that the nomination be withdrawn because Breckinridge refused to compete with Guthrie. Twenty-one more ballots were cast, but the convention remained deadlocked. On May 3, the convention adjourned until June 18 in Baltimore, Maryland. Breckinridge's communication with his supporters between the meetings indicated greater willingness to become a candidate, but he instructed Clay to nominate him only if his support exceeded Guthrie's. Many believed that Buchanan supported Breckinridge, but Breckinridge wrote to Beck that "The President is not for me except as a last necessity, that is to say not until his help will not be worth a damn." After a majority of the delegates, most of them Douglas supporters, voted to replace Alabama and Louisiana's walk-out delegates with new, pro-Douglas men in Baltimore, Virginia's delegation led another walk-out of Southern Democrats and Buchanan-controlled delegates from the northeast and Pacific coast; 105 delegates, including 10 of Kentucky's 24, left, and the remainder nominated Douglas. The walk-outs held a rival nominating convention, styled the National Democratic Convention, at the Maryland Institute in Baltimore. 
At that convention on June 23, Massachusetts' George B. Loring nominated Breckinridge for president, and he received 81 of the 105 votes cast, the remainder going to Daniel S. Dickinson of New York. Oregon's Joseph Lane was nominated for vice-president. Breckinridge told Beck he would not accept the nomination because it would split the Democrats and ensure the election of Republican Abraham Lincoln. On June 25, Mississippi Senator Jefferson Davis proposed that Breckinridge should accept the nomination; his strength in the South would convince Douglas that his own candidacy was futile. Breckinridge, Douglas, and Constitutional Unionist John Bell would withdraw, and Democrats could nominate a compromise candidate. Breckinridge accepted the nomination, but maintained that he had not sought it and that he had been nominated "against my expressed wishes". Davis's compromise plan failed when Douglas refused to withdraw, believing his supporters would vote for Lincoln rather than a compromise candidate. ### Election The election effectively pitted Lincoln against Douglas in the North and Breckinridge against Bell in the South. Far from expecting victory, Breckinridge told Davis's wife, Varina, "I trust I have the courage to lead a forlorn hope." Caleb Cushing oversaw the publication of several Breckinridge campaign documents, including a campaign biography and copies of his speeches on the occasion of the Senate's move to a new chamber and his election to the Senate. After making a few short speeches during stops between Washington, D.C. and Lexington, Breckinridge stated that, consistent with contemporary custom, he would make no more speeches until after the election, but the results of an August 1860 special election to replace the deceased clerk of the Kentucky Court of Appeals convinced him that his candidacy could be faltering. He had expressed confidence that the Democratic candidate for the clerkship would win, and "nothing short of a defeat by 6,000 or 8,000 would alarm me for November". Constitutional Unionist Leslie Combs won by 23,000 votes, prompting Breckinridge to make a full-length campaign speech in Lexington on September 5, 1860. Breckinridge's three-hour speech was primarily defensive; his moderate tone was designed to win votes in the North but risked losing Southern support to Bell. He denied charges that he had supported Zachary Taylor over Lewis Cass in 1848, that he had sided with abolitionists in 1849, and that he had sought John Brown's pardon for the Harpers Ferry raid. Reminding the audience that Douglas wanted the Supreme Court to decide the issue of slavery in the territories, he pointed out that Douglas then denounced the Dred Scott ruling and laid out a means for territorial legislatures to circumvent it. Breckinridge supported the legitimacy of secession but insisted it was not the solution to the country's sectional disagreements. In answer to Douglas's charge that there was not "a disunionist in America who is not a Breckinridge man", he challenged the assembled crowd "to point out an act, to disclose an utterance, to reveal a thought of mine hostile to the constitution and union of the States". He warned that Lincoln's insistence on emancipation made him the real disunionist. Breckinridge finished third in the popular vote with 849,781 votes to Lincoln's 1,866,452, Douglas's 1,379,957, and Bell's 588,879. He carried 11 of the 15 Southern states, including North Carolina and the border states of Maryland and Delaware, but lost his home state to Bell.
His greatest support in the Deep South came from areas that opposed secession. Davis pointed out that only Breckinridge garnered nearly equal support from the Deep South, the border states, and the free states of the North. His 72 electoral votes bested Bell's 39 and Douglas's 12, but Lincoln received 180, enough to win the election. ### Aftermath Three weeks after the election, Breckinridge returned to Washington, D.C., to preside over the Senate's lame-duck session. Lazarus Powell, now a senator, proposed a resolution creating a committee of thirteen members to respond to the portion of Buchanan's address regarding the disturbed condition of the country. Breckinridge appointed the members of the committee, which, in Heck's opinion, formed "an able committee, representing every major faction." John J. Crittenden proposed a compromise by which slavery would be forbidden in territories north of parallel 36°30′ north—the demarcation line used in the Missouri Compromise—and permitted south of it, but the committee's five Republicans rejected the proposal. On December 31, the committee reported that it could come to no agreement. Writing to Magoffin on January 6, Breckinridge complained that the Republicans were "rejecting everything, proposing nothing" and "pursuing a policy which ... threatens to plunge the country into ... civil war". One of Breckinridge's final acts as vice-president was announcing the vote of the Electoral College to a joint session of Congress on February 13, 1861. Rumors abounded that he would tamper with the vote to prevent Lincoln's election. Knowing that some legislators planned to attend the session armed, Breckinridge asked Winfield Scott to post guards in and around the chambers. One legislator raised a point of order, requesting that the guards be ejected, but Breckinridge refused to sustain it; the electoral vote proceeded, and Breckinridge announced Lincoln's election as president. After Lincoln's arrival in Washington, D.C., on February 23, Breckinridge visited him at the Willard Hotel. After making a valedictory address on March 4, he swore in Hannibal Hamlin as his successor as vice president; Hamlin then swore in Breckinridge and the other incoming senators. ## U.S. Senate Because Republicans controlled neither house of Congress nor the Supreme Court, Breckinridge did not believe that Lincoln's election justified secession. Ignoring James Murray Mason's contention that no Southerner should serve in Lincoln's cabinet, Breckinridge supported the appointment of Marylander Montgomery Blair as Postmaster General. He also voted against a resolution to remove the names of the senators from seceded states from the Senate roll. Working for a compromise that might yet save the Union, Breckinridge opposed a proposal by Ohio's Clement Vallandigham that the border states unite to form a "middle confederacy" that would place a buffer between the U.S. and the seceded states, nor did he desire to see Kentucky as the southernmost state in a northern confederacy; its position south of the Ohio River left it too vulnerable to the southern confederacy should war occur. Urging that federal troops be withdrawn from the seceded states, he insisted "their presence can accomplish no good, but will certainly produce incalculable mischief". He warned that, unless Republicans made some concessions, Kentucky and the other border states would also secede.
When the legislative session ended on March 28, Breckinridge returned to Kentucky and addressed the state legislature on April 2, 1861. He urged the General Assembly to push for federal adoption of the Crittenden Compromise and advocated calling a border states convention, which would draft a compromise proposal and submit it to the Northern and Southern states for adoption. Asserting that the states were coequal and free to choose their own course, he maintained that, if the border states convention failed, Kentucky should call a sovereignty convention and join the Confederacy as a last resort. The Battle of Fort Sumter, which began the Civil War, occurred days later, before the border states convention could be held. Magoffin called a special legislative session on May 6, and the legislature authorized creation of a six-man commission to decide the state's course in the war. Breckinridge, Magoffin, and Richard Hawes were the states' rights delegates to the conference, while Crittenden, Archibald Dixon, and Samuel S. Nicholas represented the Unionist position. The delegates were only able to agree on a policy of armed neutrality, which Breckinridge believed impractical and ultimately untenable, but preferable to more drastic actions. In special elections held June 20, 1861, Unionists won nine of Kentucky's ten House seats, and in the August 5 state elections, Unionists gained majorities in both houses of the state legislature. When the Senate convened for a special session on July 4, 1861, Breckinridge stood almost alone in opposition to the war. Labeled a traitor, he was removed from the Committee on Military Affairs. He demanded to know what authority Lincoln had to blockade Southern ports or suspend the writ of habeas corpus. He reminded his fellow senators that Congress had not approved a declaration of war and maintained that Lincoln's enlistment of men and expenditure of funds for the war effort were unconstitutional. If the Union could be persuaded not to attack the Confederacy, he predicted that "all those sentiments of common interest and feeling ... might lead to a political reunion founded upon consent". On August 1, he declared that if Kentucky supported Lincoln's prosecution of the war, "she will be represented by some other man on the floor of this Senate." Asked by Oregon's Edward Dickinson Baker how he would handle the secession crisis, he responded, "I would prefer to see these States all reunited upon true constitutional principles to any other object that could be offered me in life ... But I infinitely prefer to see a peaceful separation of these States, than to see endless, aimless, devastating war, at the end of which I see the grave of public liberty and of personal freedom." In early September, Confederate and Union forces entered Kentucky, ending her neutrality. On September 18, Unionists shut down the pro-Southern Louisville Courier newspaper and arrested former governor Charles S. Morehead, who was suspected of having Confederate sympathies. Learning that Colonel Thomas E. Bramlette was under orders to arrest him, Breckinridge fled to Prestonsburg, Kentucky, where he was joined by Confederate sympathizers George W. Johnson, George Baird Hodge, William E. Simms, and William Preston. The group continued to Abingdon, Virginia, where they took a train to Confederate-held Bowling Green, Kentucky. On October 2, 1861, the Kentucky General Assembly passed a resolution declaring that neither of the state's U.S. 
Senators—Breckinridge and Powell—represented the will of the state's citizens and requesting that both resign. Governor Magoffin refused to endorse the resolution, preventing its enforcement. Writing from Bowling Green on October 8, Breckinridge declared, "I exchange with proud satisfaction a term of six years in the Senate of the United States for the musket of a soldier." Later that month, he was part of a convention in Confederate-controlled Russellville, Kentucky, that denounced the Unionist legislature as not representing the will of most Kentuckians and called for a sovereignty convention to be held in that city on November 18. Breckinridge, George W. Johnson, and Humphrey Marshall were named to the planning committee, but Breckinridge did not attend the convention, which created a provisional Confederate government for Kentucky. On November 6, Breckinridge was indicted for treason in a federal court in Frankfort. The Senate passed a resolution formally expelling him on December 2, 1861; Powell was the only member to vote against the resolution, claiming that Breckinridge's statement of October 8 amounted to a resignation, rendering the resolution unnecessary. ## Confederate Secretary of War Breckinridge served in the Confederate Army from November 2, 1861, until early 1865. In mid-January 1865, Confederate President Jefferson Davis summoned Breckinridge to the Confederate capital at Richmond, Virginia, and rumors followed that Davis would appoint Breckinridge Confederate States Secretary of War, replacing James A. Seddon. Breckinridge arrived in Richmond on January 17, and some time in the next two weeks, Davis offered him the appointment. Breckinridge made his acceptance conditional upon the removal of Lucius B. Northrop from his office as Confederate Commissary General. Most Confederate officers regarded Northrop as inept, but Davis had long defended him. Davis relented on January 30, allowing Seddon to replace Northrop with Breckinridge's friend, Eli Metcalfe Bruce, on an interim basis; Breckinridge accepted Davis's appointment the next day. Some Confederate congressmen were believed to oppose Breckinridge because he had waited so long to join the Confederacy, but his nomination was confirmed unanimously on February 6, 1865. At 44 years old, he was the youngest person to serve in the Confederate president's cabinet. Klotter called Breckinridge "perhaps the most effective of those who held that office", but Harrison wrote that "no one could have done much with the War Department at that late date". While his predecessors had largely served Davis's interests, Breckinridge functioned independently, assigning officers, recommending promotions, and consulting on strategy with Confederate generals. Breckinridge's first act as secretary was to meet with assistant secretary John Archibald Campbell, who had opposed Breckinridge's nomination, believing he would focus on a select few of the department's bureaus and ignore the rest. During their conference, Campbell expressed his desire to retain his post, and Breckinridge agreed, delegating many of the day-to-day details of the department's operation to him. Breckinridge recommended that Davis appoint Isaac M. St. John, head of the Confederate Nitre and Mining Bureau, as permanent commissary general. Davis made the appointment on February 15, and the flow of supplies to Confederate armies improved under St. John. 
With Confederate ranks plagued by desertion, Breckinridge instituted a draft; when this proved ineffective, he negotiated the resumption of prisoner exchanges with the Union in order to replenish the Confederates' depleted manpower. By late February, Breckinridge had concluded that the Confederate cause was hopeless. He opposed the use of guerrilla warfare by Confederate forces and urged a national surrender. Meeting with Confederate senators from Virginia, Kentucky, Missouri, and Texas, he urged, "This has been a magnificent epic. In God's name let it not terminate in a farce." In April, with Union forces approaching Richmond, Breckinridge organized the escape of the other cabinet officials to Danville, Virginia. Afterward, he ordered the burning of the bridges over the James River and ensured the destruction of buildings and supplies that might aid the enemy. During the surrender of the city, he helped preserve the Confederate government and military records housed there. After a brief rendezvous with Robert E. Lee's retreating forces at Farmville, Virginia, Breckinridge moved south to Greensboro, North Carolina, where he, Naval Secretary Stephen Mallory, and Postmaster General John Henninger Reagan joined Generals Joseph E. Johnston and P. G. T. Beauregard to urge surrender. Davis and Secretary of State Judah P. Benjamin initially resisted, but eventually asked Major General William T. Sherman to parley. Johnston and Breckinridge negotiated terms with Sherman, but President Andrew Johnson (who had assumed the presidency on Lincoln's assassination on April 15) rejected them as too generous. On Davis' orders, Breckinridge told Johnston to meet Richard Taylor in Alabama, but Johnston, believing his men would refuse to fight any longer, surrendered to Sherman on similar terms to those offered to Lee at Appomattox. After the failed negotiations, Confederate Attorney General George Davis and Confederate Treasury Secretary George Trenholm resigned. The rest of the Confederate cabinet—escorted by over 2,000 cavalrymen under Basil W. Duke and Breckinridge's cousin William Campbell Preston Breckinridge—traveled southwest to meet Taylor at Mobile. Believing that the Confederate cause was not yet lost, Davis convened a council of war on May 2 in Abbeville, South Carolina, but the cavalry commanders told him that the only cause for which their men would fight was to aid Davis's escape from the country. Informed that gold and silver coins and bullion from the Confederate treasury were at the train depot in Abbeville, Breckinridge ordered Duke to load it onto wagons and guard it as they continued southward. En route to Washington, Georgia, some members of the cabinet's escort threatened to take their back salaries by force. Breckinridge had intended to wait until their arrival to make the payments, but to avoid mutiny, he dispersed some of the funds immediately. Two brigades deserted immediately after being paid; the rest continued to Washington, where the remaining funds were deposited in a local bank. Discharging most of the remaining escort, Breckinridge left Washington with a small party on May 5, hoping to distract federal forces from the fleeing Confederate president. Between Washington and Woodstock, the party was overtaken by Union forces under Lieutenant Colonel Andrew K. Campbell; Breckinridge ordered his nephew to surrender while he, his sons Cabell and Clifton, James B. Clay, Jr., and a few others fled into the nearby woods. 
At Sandersville, he sent Clay and Clifton home, announcing that he and the rest of his companions would proceed to Madison, Florida. On May 11, they reached Milltown, Georgia, where Breckinridge expected to rendezvous with Davis, but on May 14, he learned of Davis's capture days earlier. ## Later life Besides marking the end of the Confederacy and the war, Davis's capture left Breckinridge as the highest-ranking former Confederate still at large. Fearing arrest, he fled to Cuba, Great Britain, and Canada, where he lived in exile. Andrew Johnson issued a proclamation of amnesty for all former Confederates in December 1868, and Breckinridge returned home the following March. Friends and government officials, including President Ulysses S. Grant, urged him to return to politics, but he declared himself "an extinct volcano" and never sought public office again. He died of complications from war-related injuries on May 17, 1875.
6,112,839
Ice (The X-Files)
1,142,805,432
null
[ "1993 American television episodes", "Bottle television episodes", "Television episodes directed by David Nutter", "Television episodes set in Alaska", "Television episodes set in the Arctic", "The X-Files (season 1) episodes" ]
"Ice" is the eighth episode of the first season of the American science fiction television series The X-Files, premiering on the Fox network on November 5, 1993. It was directed by David Nutter and written by Glen Morgan and James Wong. The debut broadcast of "Ice" was watched by 10 million viewers in 6.2 million households. The episode received positive reviews at large from critics, who praised its tense atmosphere. The plot of the episode shows FBI agents Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson) investigating the deaths of an Alaskan research team. Isolated and alone, the agents and their accompanying team discover the existence of extraterrestrial parasitic organisms that drive their hosts into impulsive fits of rage. The episode was inspired by an article in Science News about an excavation in Greenland, and series creator Chris Carter also cited John W. Campbell's 1938 novella Who Goes There?, the inspiration for the films The Thing from Another World (1951) and The Thing (1982), as an influence. Although the producers thought that "Ice" would save money by being shot in a single location, it ended up exceeding its own production budget. ## Plot A mass murder–suicide occurs among a team of geophysicists at an outpost in Icy Cape, Alaska. FBI agents Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson) head for the outpost, accompanied by physician Dr. Hodge (Xander Berkeley); toxicologist Dr. Da Silva (Felicity Huffman); geologist Dr. Murphy (Steve Hytner); and Bear (Jeff Kober), their pilot. Along with the scientists' bodies the group finds a dog, which attacks Mulder and Bear. Scully notices black nodules on its skin and suspects that it may be infected with bubonic plague; she also notices a rash on its neck and movement beneath its skin. Although Bear, who was bitten by the dog, becomes ill and develops similar nodules on his body, autopsies reveal no such nodules on the bodies of the scientists. Murphy finds an ice core sample believed to have originated from a meteor crater and theorizes that the sample might be 250,000 years old. Although Bear insists on leaving, the others are concerned about infecting the outside world. When Bear is asked to provide a stool sample, he attacks Mulder and tries to flee. Something moves under Bear's skin, and he dies when Hodge makes an incision there and removes what turns out to be a small worm from the back of his neck. Now without a pilot, the group is informed that evacuation is impossible because of an oncoming storm. The worm removed from Bear is kept in a jar, and another is recovered from one of the scientists' bodies. Mulder, believing that the worms are extraterrestrial, wants them kept alive, but Scully feels they should be destroyed to prevent infection. The group check each other for black nodules and find none, although Mulder reminds Scully that the nodules disappeared from the dog over time. He wakes in the night and finds Murphy in the freezer with his throat cut; when the others arrive to see him standing over it, leading all of them, including Scully, to suspect he has become infected and killed Murphy. They lock Mulder in a storeroom. Scully discovers that two worms placed in the same host environment will kill each other. When they investigate by putting one worm into the infected dog, it recovers. Against Scully's objections and after trapping her in the freezer, Hodge and Da Silva try to put the other worm into Mulder. 
Hodge sees movement under Da Silva's skin and realizes she is the one infected as well as Murphy's killer. Da Silva breaks free and the rest pursue her through the outpost until Mulder and Scully restrain her, allowing Hodge to place the last worm inside her. After they are evacuated, Da Silva and the dog are quarantined and the others are released after showing no sign of infection. When Mulder declares he wants to return to the site, Hodge tells him that it has been destroyed by the government. ## Production ### Conception and writing Glen Morgan began writing the episode after he read a Science News article about men in Greenland who found a 250,000-year-old item encased in ice. The setting—an icy, remote research base overcome by an extraterrestrial creature—is similar to that of John W. Campbell's 1938 novella Who Goes There? and its two feature-film incarnations: The Thing from Another World (1951), directed by Christian Nyby and produced by Howard Hawks, and The Thing (1982), directed by John Carpenter. Chris Carter has cited them as the main inspirations for the episode. As in the novella and films, the characters cannot trust each other because they are uncertain if they are who they seem to be. Carter particularly enjoyed this aspect, because it pitted Mulder and Scully against each other and provided "a new look on their characters early on in the series". The episode's premise became a recurring theme in the series, with episodes such as "Darkness Falls" and "Firewalker" repeating the combination of remote locations and unknown lifeforms. A similar plot was featured in "The Enemy", a 1995 episode of Morgan and his writing partner James Wong's series Space: Above and Beyond, and according to UGO Networks the Fringe episode "What Lies Below" has "basically" the same plot as "Ice". The episode introduced invertebrate parasites as antagonists in the series; this plot device would recur in "Firewalker", "The Host", "F. Emasculata" and "Roadrunners". ### Filming The similarity to Carpenter's version of The Thing was due in part to new production designer Graeme Murray, who worked on Carpenter's film and created the complex in which the episode took place. Although "Ice" was intended as a bottle episode which would save money by being shot in a single location, it went over budget. According to Carter, The X-Files typically worked from a small budget and "every dollar we spend ends up on the screen". As a bottle episode, "Ice" used a small cast and its interiors were filmed on a set constructed at an old Molson brewery site. The episode's few exterior shots were filmed at Delta Air Park in Vancouver, whose hangars and flat terrain simulated an Arctic location. Carter said that he would have preferred to set the episode at the North Pole, but he believed that this was unfeasible at the time. For the worm effect, one member of the special effects department suggested putting a "baby snake" in a latex suit. After explaining that that couldn't be done, animal trainer Debbie Coe suggested using a "super mealworm" to achieve the desired effect. The effect of the worms crawling in the host bodies was achieved with wires under fake skins, including a skin with hair for the dog. Digital effects were used for scenes involving the worms swimming in jars and entering the dog's ear. Although extra footage of the worm scenes was shot so they would last as long as intended if Fox's standards-and-practices officials asked for cuts, no edits were requested. 
"Ice" was the first significant role in the series for makeup effects artist Toby Lindala, who become its chief makeup artist. The dog used in the episode was a parent of Duchovny's dog, Blue. Ken Kirzinger, who played one of the scientists killed in the episode's cold open, was the series' stunt coordinator. ## Analysis Although "Ice" is not directly connected to the series' overarching mythology, it has been described as "a portent to the alien conspiracy arc which would become more pronounced in the second season" with its themes of alien invasion and governmental conspiracy. The episode is noted for exploring the relationship between its lead characters; Mulder and Scully's trust contrasts with the behavior of Hodge and Da Silva, who are united by a distrust of those around them. The pairs are "mirror images" in their approaches to partnership. "Ice" features two elements common to other works by Morgan and Wong: dual identities and the questioning of one's personality. In her essay "Last Night We Had an Omen", Leslie Jones noted this thematic leitmotif in several of their other X-Files scripts: "the meek animal-control inspector who is a mutant shape-shifter with a taste for human liver ["Squeeze"], the hapless residents of rural Pennsylvania driven mad by a combination of insecticides and electronic equipment ["Blood"], [and] the uptight PTA run by practicing Satanists ["Die Hand Die Verletzt"]". Anne Simon, a biology professor at the University of Maryland, discussed the episode in her book Monsters, Mutants and Missing Links: The Real Science Behind the X-Files. Simon noted that like the worms in "Ice", parasitic worms can attach to the human hypothalamus because it is not blocked by the blood–brain barrier. She compared "Ice" to the later episodes "Tunguska" and "Gethsemane", with their common theme of extraterrestrial life reaching earth through panspermia. ## Reception ### Ratings "Ice" originally aired on Fox on November 5, 1993, and was first broadcast in the United Kingdom on BBC Two on November 10, 1994. The episode's initial American broadcast received a Nielsen rating of 6.6 with an 11 share; about 6.6 percent of all households with television and 11 percent of households watching TV viewed the episode, a total of 6.2 million households and 10 million viewers. "Ice" and "Conduit" were released on VHS in 1996, and the episode was released on DVD as part of the complete first season. ### Reviews "Ice" was praised by critics. In The Complete X-Files, authors Matt Hurwitz and Chris Knowles called the episode a milestone for the fledgling series. An Entertainment Weekly first-season retrospective graded "Ice" as A−, calling it "particularly taut and briskly paced". On The A.V. Club, Keith Phipps praised the episode and gave it an A. According to Phipps, the cast "plays the paranoia beautifully" and the episode was "as fine an hour as this first season would produce". "Ice" was included on an A.V. Club list of greatest bottle episodes, where it was described as "us[ing] its close quarters as an advantage". A third A.V. Club article, listing ten "must-see" episodes of the series, called "Ice" "the first sign that this show had a shot at really being something special" and said that it "makes great use of claustrophobia and the uneasy but growing alliance between the heroes". Digital Spy's Ben Rawson-Jones described the episode's stand-off between Mulder and Scully as "an extremely tense moment of paranoia." 
A New York Daily News review called the episode "potent and creepy", and said that its plot "was worthy of honorary passage into The Twilight Zone". Matt Haigh called it "an extremely absorbing and thrilling episode" on the Den of Geek website, noting its debt to The Thing, and Juliette Harrisson called "Ice" the "finest" stand-alone episode of the first season. On the TV Squad blog, Anna Johns called it "a spectacular episode" with an "excellent" opening. UGO Networks called the episode's worms among the series' best "Monsters-of-the-Week" and the cause of "much pointed-guns aggression". In Tor.com, Meghan Deans compared the scene where Mulder and Scully inspect each other for infection to a similar scene in "Pilot"; in "Ice", both characters were equally vulnerable and (unlike the pilot scene) Scully was not portrayed as "an idiot". Robert Shearman and Lars Pearson, in their book Wanting to Believe: A Critical Guide to The X-Files, Millennium & The Lone Gunmen, gave the episode five out of five stars. They called it "the most influential episode ever made", noting that the series reprised its formula several times during its run. Shearman felt that although their script was derivative, Morgan and Wong created "a pivotal story" by combining crucial themes from The Thing with a "well rounded" cast of characters. "Ice" was also considered one of the best episodes of the first season by the production crew. According to Carter, Morgan and Wong "just outdid themselves on this show, as did director David Nutter, who really works so hard for us. I think they wrote a great script and he did a great job directing it, and we had a great supporting cast". Nutter said: "The real great thing about 'Ice' is that we were able to convey a strong sense of paranoia. It was also a great ensemble piece. We're dealing with the most basic emotions of each character, ranging from their anger to their ignorance and fear. It established the emotional ties these two characters have with each other, which is very important. Scaring the hell out of the audience was definitely the key to the episode". Anderson said that "it was very intense. There was a lot of fear and paranoia going on. We had some great actors to work with".
276,101
Killdeer
1,169,945,654
Shorebird found in the Americas
[ "Articles containing video clips", "Birds described in 1758", "Birds of Ecuador", "Birds of North America", "Birds of Peru", "Birds of the Caribbean", "Birds of the Dominican Republic", "Charadrius", "Taxa named by Carl Linnaeus" ]
The killdeer (Charadrius vociferus) is a large plover found in the Americas. It gets its name from its shrill, two-syllable call, which is often heard. It was described and given its current scientific name in 1758 by Carl Linnaeus in the 10th edition of his Systema Naturae. Three subspecies are described. Its upper parts are mostly brown with rufous fringes, the head has patches of white and black, and two black bands cross the breast. The belly and the rest of the breast are white. The nominate (or originally described) subspecies breeds from southeastern Alaska and southern Canada to Mexico. It is seen year-round in the southern half of its breeding range; the subspecies C. v. ternominatus is resident in the West Indies, and C. v. peruvianus inhabits Peru and surrounding South American countries throughout the year. North American breeders winter from their resident range south to Central America, the West Indies, and the northernmost portions of South America. The nonbreeding habitat of the killdeer includes coastal wetlands, beach habitats, and coastal fields. Its breeding grounds are generally open fields with short vegetation (but locations such as rooftops are sometimes used); although it is a shorebird, it does not necessarily nest close to water. The nest itself is a scrape lined with vegetation and white material, such as pebbles or seashell fragments. This bird lays a clutch of four to six buff to beige eggs with dark markings. The breeding season (starting with egg-laying) occurs from mid-March to August, with later timing of egg-laying in the northern portion of the range. Both parents typically incubate the eggs for 22 to 28 days. The young stay in the nest until the day after they hatch, when they are led by their parents to a feeding territory (generally with dense vegetation where hiding spots are abundant), where the chicks feed themselves. The young then fledge about 31 days after hatching, and breeding first occurs after one year of age. The killdeer primarily feeds on insects, although other invertebrates and seeds are eaten. It forages almost exclusively in fields, especially those with short vegetation and with cattle and standing water. It primarily forages during the day, but in the nonbreeding season, when the moon is full or close to full, it forages at night, likely because of increased insect abundance and reduced predation. Predators of the killdeer include various birds and mammals. Its multiple responses to predation range from calling to the "ungulate display", which can be fatal for the performing individual. This bird is classified as least concern by the International Union for Conservation of Nature, because of its large range and population. Its population is declining, but this trend is not severe enough for the killdeer to be considered a vulnerable species. It is protected by the American Migratory Bird Treaty Act of 1918 and the Canadian Migratory Birds Convention Act. ## Etymology and taxonomy The killdeer was described in 1758 by Swedish naturalist Carl Linnaeus in the 10th edition of his Systema Naturae as Charadrius vociferus, its current scientific name. Linnaeus' description was based on a 1731 account of it by English naturalist Mark Catesby in his The Natural History of Carolina, Florida and the Bahama Islands, where he called it the "chattering plover". The genus name Charadrius is Late Latin for a yellowish bird mentioned in the fourth-century Vulgate Bible. 
This word derives from the Ancient Greek kharadrios, a bird found in ravines and river valleys (kharadra, "ravine"). The specific name vociferus is Latin, coming from vox, "cry", and ferre, "to bear". Three subspecies are described: - C. v. vociferus Linnaeus, 1758 – The nominate subspecies (originally described subspecies), it is found in the US (including southeastern Alaska), southern Canada, and Mexico, with some less widespread grounds further south, to Panama. It winters to northwestern South America. - C. v. ternominatus Bangs & Kennard, 1920 – This subspecies is found on the Bahamas, the Greater Antilles, and Virgin Islands. - C. v. peruvianus (Chapman, 1920) – This South American subspecies is found in western Ecuador, Peru, and extreme northwest Chile. The killdeer's common name comes from its frequently heard call. ## Description The killdeer is a large plover, with adults ranging in length from 20 to 28 cm (7.9 to 11.0 in), having a wingspan between 59 and 63 cm (23 and 25 in), and usually being between 72 and 121 g (2.5 and 4.3 oz) in weight. It has a short, thick, and dark bill, flesh-colored legs, and a red eye ring. Its upper parts are mostly brown with rufous fringes, the cap, back, and wings being brown. It has a white forehead and a white stripe behind the eye, and its lores and the upper borders to the white forehead are black. The killdeer also has a white collar with a black upper border. The rest of the face is brown. The breast and belly are white, except for two black breast bands. It is the only plover in North America with two breast bands. The rump is red, and the tail is mostly brown. The latter also has a black subterminal band, a white terminal band, and barred white feathers on the outer portion of the tail. A white wing stripe at the base of the flight feathers is visible in flight. The female's mask and breast bands tend to be browner than those of the male. The adult of the subspecies C. v. ternominatus is smaller, paler, and greyer than the nominate. The subspecies C. v. peruvianus is smaller than the nominate and has more extensive rufous feather fringes. The juvenile is similar to the adult. The upper parts of the chicks are colored dusky and buff. Their underparts, forehead, neck, and chin are white, and they have a single band across their breast. The killdeer is a vocal species, calling even at night. Its calls include nasal notes, like "deee", "tyeeee", and "kil-deee" (the basis of its common name). During display flights, it repeats a call of "kil-deer" or "kee-deeyu". When this plover is disturbed, it emits notes in a rapid sequence, such as "kee-di-di-di". Its alarm call is a long, fast trill. ## Habitat and distribution The nominate subspecies of the killdeer breeds in the US (including southeastern Alaska), southern Canada, and Mexico, with less widespread grounds further south, to Panama. Some northern populations are migratory. This bird is resident in the southern half of its breeding range, found throughout the year in most of the contiguous United States. It also winters south to Central America, the West Indies, Colombia, Ecuador, and islands off Venezuela, leaving its breeding grounds after mid-July, with migration peaking from August to September. Migration to the breeding grounds starts in February and ends in mid-May. The subspecies C. v. ternominatus is thought to be resident in the Bahamas, Greater Antilles, and Virgin Islands. C. v. peruvianus is seen year-round in western Ecuador, Peru, and extreme northwestern Chile. 
The killdeer uses beach habitats, coastal wetlands, and fields during the nonbreeding season. It forages almost exclusively in these fields, especially those with short vegetation and with cattle (which likely shorten the vegetation) and standing water. When breeding, the killdeer has a home range of about 6 ha (15 acres), although this is generally larger when nesting more than 50 m (160 ft) away from water. Although generally a lowland species, it is found up to the snowline in meadows and open lakeshores during its autumn migration. ## Behavior ### Breeding The killdeer forms pairs on its breeding grounds right after arriving. Both sexes (although the male more often than the female) advertise in flight with loud "killdeer" calls. The male also advertises by calling from a high spot, scraping out a dummy nest, and performing killdeer flights, in which it flies with slow wingbeats across its territory. Ground chases occur when a killdeer has been approached multiple times by another killdeer; similarly, flight chases occur when an individual has been approached from the air. Both are forms of territorial defense. The killdeer nests in open fields or other flat areas with short vegetation (usually below 1 cm (0.39 in) tall), such as agricultural fields and meadows. Nests are also sometimes located on rooftops. This plover frequently breeds close to where it bred the previous year. The male seems to usually renest in the same area regardless of whether or not he retains the same mate. This does not appear to be true of the female, which has been observed to not use the same territory if she does not have the same mate. The nest itself is merely a shallow depression or scrape in the ground, fringed by some stones and blades of grass. It is generally built with white nesting material instead of darker colors; the function of this is suspected to either help keep the nest cool or conceal it. In a study of piping plovers, the former function was supported, as nests were 2 °C (3.6 °F) to 6 °C (11 °F) cooler than the surrounding ground. The latter function also had some support, as the plovers generally chose pebbles closer in color to the eggs; nests that contrasted more with the ground suffered more predation. When nesting on rooftops, the killdeer may choose a flat roof, or build a nest of raised gravel, sometimes lined with white pebbles or pieces of seashells. The eggs of the killdeer are typically laid from mid-March to early June in the southern portion of the range, and from mid-April to mid-July in the northern part. In both cases, the breeding season itself extends to about August. In Puerto Rico, and possibly in other Caribbean islands, breeding occurs year-round. The killdeer lays a clutch of four to six eggs that are buff to beige, with brown markings and black speckles. The eggs are about 38 by 27 mm (1.5 by 1.1 in) in size, and laid at intervals of 24 to 48 hours. The energy expenditure of both sexes is at its highest during egg-laying; the female needs to produce eggs, and the male needs to defend his territory. Both of the sexes are closer to the nest site than usual during egg-laying and incubation, although the male is generally closer than the female during all stages of breeding. This latter fact is likely due to the male's increased investment in nest-site defense. Up to five replacement clutches can be laid, and occasionally two broods occur. Second broods are usually laid in the nesting territory of the first brood. 
The eggs are incubated for 22 to 28 days by both the male and the female, with the former typically incubating at night. The time dedicated to incubation is related to temperature, with one study recording that killdeer incubated eggs 99% of the time when the temperature was about 13 °C (55 °F), 76% of the time around 26 °C (79 °F), and 87% of the time at about 35 °C (95 °F). When it is hot (above at least 25 °C (77 °F)), incubation cools the eggs, generally through shading by one of the parents. About 53% of eggs are lost, mainly to predators. The young are precocial, starting to walk within the first days of their life. After they hatch, both parents lead them out of the nest, generally to a feeding territory with dense vegetation under which the chicks can hide when a predator is near. The chicks are raised, at least in single-brood pairs, by both parents, likely because of the high failure rate of nests and the need for both parents to be present to successfully raise the young. In these broods, the young are usually attended by one parent at a time (generally the female) until about two weeks of age, after which both parents are occasionally seen together with the chicks. Otherwise, the inattentive adult is at least about 23 m (75 ft) away from the chicks. Periods of attentiveness for each parent generally last from about one to one and a half hours. When the chicks are young, this is mainly spent standing; as the chicks get older, less time is dedicated to standing. When the young are below two weeks of age, the attending adult spends little time feeding; foraging time increases as the chicks grow. The inattentive adult defends the young most of the time when they are less than a week old, but this task steadily shifts onto the attentive adult, until about three weeks of age, when the attending parent does almost all of the defense. One parent at a time broods the chicks and does so frequently until they are two days old. The young are brooded during the day until about 15 days after hatching and during the night for about 18 days after hatching. The only time when they are not in the presence of a parent is when the parents are mating or responding to a predator or aggressive conspecific. When a pair has two broods, the second is usually attended by just the male (which can hatch the eggs on his own, unlike the female). In this case, the male does not spend most of the time standing; the amount of time he does stand, though, stays constant as the chicks age. Like attentive adults in two-parent broods, the sole parent increases the time spent foraging as the young age. The young fledge about 31 days after hatching, and generally move to moister areas in valleys and on the banks of rivers. They may be cared for by their parents for up to 10 days after they fledge, and exceptionally for 81 days after hatching. About 52 to 63% of nests fail to produce any fledged young. Breeding starts after one year of age. The killdeer has a maximum lifespan of 10 years and 11 months. ### Feeding The killdeer feeds primarily on insects (especially beetles and flies), in addition to millipedes, worms, snails, spiders, and some seeds. It opportunistically takes tree frogs and dead minnows. It forages almost exclusively in fields (no matter the tide), especially those with short vegetation and with cattle (which likely shorten the vegetation) and standing water. Standing water alone does not have a significant effect on field choice unless combined with cattle. 
Viable disseminules can be recovered from killdeer feces, indicating that this bird is important in transporting aquatic organisms. The killdeer uses visual cues to forage. An example of this is "foot-trembling", where it stands on one foot, shaking the other in shallow water for about five seconds, pecking at any prey stirred up. When feeding in fields, it sometimes follows plows to take earthworms disturbed to the surface. The female forages significantly more than the male during most stages of breeding. The female feeds the most before and during egg-laying and the least when incubation starts (as little time to feed remains), with a return to high levels afterwards. During the nonbreeding season, the killdeer forages during the night, depending on the lunar cycle. When the moon is full, it feeds more at night and roosts more during the day. Foraging at night has benefits for this bird, including increased insect abundance and reduced predation. ## Predators and parasites The killdeer is parasitized by acanthocephalans, cestodes, nematodes, and trematodes. It is preyed upon by herring gulls, common crows, raccoons, and striped skunks. These and other avian predators account for the majority of predation in some areas during the breeding season. Predation is not limited to eggs and chicks: mustelids, for example, can kill incubating adults. ### Responses to predators The parents use various methods to distract predators during the breeding season. One method is the "broken-wing display", also known as "injury feigning". Before displaying, the bird usually runs from its nest, making alarm calls and other disturbances. When the bird has the predator's attention, it turns its tail towards the predator, displaying the threatening orange color of its rump. It then crouches, droops its wings, and lowers its tail, the more common form of the display. With increasing intensity, the wings are held higher, and the tail is fanned out and more depressed. Another behavior that has received attention is the "ungulate display", where the adult raises its wings, exposes its rump, lowers its head, and charges at the intruder. This can be fatal to the displaying bird. The intensity of the responses to predators varies throughout the breeding season. During egg-laying, the most common response to predators is to quietly leave the nest. As incubation starts and progresses, the intensity of predator responses increases, peaking around hatching. This is probably because it is worth more to protect the young at that stage, as they are more likely to fledge. As the chicks grow, reactions decrease in intensity until a normal level of response is reached, because the young become more independent as they age. ## Status The killdeer is considered a least-concern species by the IUCN due to its large range of about 26.3 million km<sup>2</sup> (10.2 million sq mi) and population, estimated by the IUCN to be about one million birds, or about two million, according to the Handbook of the Birds of the World Alive. Though the population is declining, it is not decreasing fast enough to be considered a vulnerable species. It is protected in the US by the Migratory Bird Treaty Act of 1918, and in Canada by the Migratory Birds Convention Act.
69,170,927
Can I Get It
1,153,019,295
2021 song by Adele
[ "2021 songs", "Adele songs", "Song recordings produced by Max Martin", "Song recordings produced by Shellback (record producer)", "Songs written by Adele", "Songs written by Max Martin", "Songs written by Shellback (record producer)" ]
"Can I Get It" is a song by English singer Adele from her fourth studio album 30 (2021), written with Swedish producers Max Martin and Shellback. The song became available as the album's sixth track on 19 November 2021, when it was released by Columbia Records. A pop song with pop rock and country pop influences, "Can I Get It" has acoustic guitar, drum, and horn instrumentation and a whistled hook. The song is about moving on from a breakup and explores Adele's search for true love and the thrilling and wondrous parts of a new relationship. "Can I Get It" received mixed reviews from music critics, who were generally positive about its acoustic portion and lyrics, but highly criticised its whistled hook. They thought the song's brazen pop production catered to the tastes of mainstream radio, which made it an outlier on 30, and compared it to Flo Rida's single "Whistle" (2012). It reached the top 20 in Sweden, Canada, Switzerland, Australia, Finland, and Norway and entered the top 40 in some other countries. ## Background and chart performance Adele began working on her fourth studio album by 2018. She filed for divorce from her husband Simon Konecki in September 2019, which inspired the album. After experiencing anxiety, Adele undertook therapy sessions and mended her estranged relationship with her father. Single again for the first time in almost ten years, she sought a serious relationship in Los Angeles but struggled to find one. Adele said, "I lasted five seconds [dating there ...], everyone is someone or everyone wants to be someone." She had regular conversations with her son, which inspired her return to the studio and the album was developed as a body of work that would explain to her son why she left his father. Adele co-wrote the song "Can I Get It" with Swedish record producers Max Martin and Shellback, who had produced her 2016 Mainstream Top 40 number-one single "Send My Love (To Your New Lover)". "Can I Get It" is about wanting to be in a committed relationship instead of one centred around casual sex. She released "Easy on Me" as the lead single from the album, entitled 30, on 14 October 2021. Adele announced the album's tracklist, which included "Can I Get It" as the sixth track, on 1 November 2021. It became available for digital download on 30, which was released on 19 November. In the United Kingdom, "Can I Get It" debuted at number 7 on the Official Audio Streaming Chart. The song peaked at number 26 on the US Billboard Hot 100 issued for 4 December 2021. It charted at number 11 on the Canadian Hot 100. "Can I Get It" debuted at number 15 in Australia. The song peaked at number 39 in New Zealand. Elsewhere, it charted at number 9 in Sweden, number 13 on the Billboard Global 200, number 14 in Switzerland, number 16 in Finland, number 19 in Norway, number 25 in Denmark, number 32 in Portugal, number 40 in Austria, number 71 in France, and number 94 in Spain. ## Composition "Can I Get It" is three minutes and 30 seconds long. Martin and Shellback produced and programmed the song, which was recorded at MXM Studios, House Mouse Studios, and Kallbacken Studios in Sweden, MXM Studios in Los Angeles, and Eastcote Studios in London. Martin played piano and keyboards; Shellback played drums, bass, guitar, percussion, and keyboards and provided the whistle and stomps; Adele assisted him with handclaps. 
Randy Merrill mastered it at Sterling Sound Studios in New York City; Serban Ghenea and John Hanes mixed it at MixStar Studios in Virginia Beach, Virginia; and Lasse Mårtén, Michael Ilbert, and Sam Holland engineered it. "Can I Get It" is a pop song, with influences of pop rock and country pop. The song has a kitchen sink production, which incorporates "acoustic guitar breakdowns, slickly produced drum loops, [...] and horns" according to Exclaim!'s Kyle Mullin. It includes a three-chord riff and Martin and Shellback provide a 2010s music-influenced whistle for its hook. This inclusion was likened to Flo Rida's single "Whistle" (2012), and Lady Gaga's song "Why Did You Do That?" (2018). The Los Angeles Times's Mikael Wood and Variety's Chris Willman likened the "boot-scooting acoustic groove" and chorus guitar strums of "Can I Get It" to George Michael's single "Faith" (1987). Adele moans during the song's chorus; writing for Slant Magazine, Eric Mason stated that its spirited percussion instrumentation and her hushed moans construct a sultry atmosphere but get interrupted by its "discordantly chirpy whistle drop". Ilana Kaplan of Consequence described it as a "'70s rock-inspired track" and David Cobbald of The Line of Best Fit called it an "American-inspired, stomping rodeo of a song". "Can I Get It" has lyrics about moving on from a breakup. Adele returns to dating and tries to be more vulnerable with a new partner: "I'm counting on you/to put the pieces of me back together". The song is about searching for a true romantic relationship, refusing to settle for a hookup. It explores the thrilling and wondrous parts of a romantic relationship. The lyrics of "Can I Get It" avow and assure an extremely devoted love that borders on desperation and subservience. She expresses optimism in the song and counts on this new affair to "set [her] free". Adele pauses mid-sentence while singing its chorus's lyric "Let me just come and get it". ## Critical reception "Can I Get It" received mixed reviews from music critics, who thought it strayed from the rest of 30, which consisted mostly of emotional ballads that seek Adele's identity outside of romantic relationships. MusicOMH's Graeme Marsh thought the song's optimism and whistled portion made it sound misplaced. Peter Piatkowski of PopMatters stated that its brazen pop production felt "a bit shocking, almost disrespectful, and discordant" in the context of the album but praised its "earworm" hook and infectious chorus and favourably compared it to Adele's 2010 single "Rolling in the Deep". Writing for DIY, Emma Swann viewed "Can I Get It" as "easily Adele's most conventionally 'pop' moment to date" and added that though its production defies her signature ballads, it also projects more character. The A.V. Club's Gabrielle Sanchez wrote that the song constituted the "most pop-oriented and straightforward" segment of 30, along with "Oh My God", but criticised its whistling as "a hollow carry-over from 2010s radio pop". Maura Johnston of Entertainment Weekly opined that it was one of "a few grand pop moments" on the album and noted that its carefree production complements its lyrics. Writing for Billboard, Jason Lipshutz ranked "Can I Get It" as the second-best song on 30; he believed it succeeded on all levels and could outdo the radio success of "Easy on Me". NME's El Hunt thought the acoustic part of "Can I Get It" was bright and intriguing but derailed by its whistled hook. 
Cobbald praised the harmonies in its chorus but derided it as a "2013 Kesha B-side, or something like 'Whistle' by Flo Rida"; he believed it did not attain what its writers intended. Writing for The Independent, Annabel Nugent described the "stomp-and-clap hook" of "Can I Get It" as "most unsettling" and thought Martin and Shellback left more of a mark on it than "Send My Love (To Your New Lover)". Pitchfork's Jillian Mapes identified the whistling as a "corny '10s pop trend" and thought it was crafted with pop and country radio in mind. Willman named the song as the "most obvious booster-shot-bop" on 30 and praised it as a "Frankenstein-ian pop confection" but questioned if its different parts meshed well. Wood opined that it lived up to its title. Piatkowski thought the vulnerability and honest depiction of love in the lyrics of "Can I Get It" showcased "the sting and candor of Adele at her most honest". Kaplan stated that she waded into "more sensual territory" in the song, and Sanchez said it "harnesses a sensuality not often heard in Adele's work". The latter dismissed it as less interesting than the rest of the album and opined that the raw and moving lyricism on other tracks renders it a "mere blip in the grandeur of the rest of the album". Hunt thought that while the rest of 30's lyrics "stick to safer territory", Adele's pause in the chorus of "Can I Get It" is more frisky. ## Credits and personnel Credits are adapted from the liner notes of 30. - Max Martin – producer, songwriter, piano, programming, keyboards - Shellback – producer, songwriter, drums, bass, guitar, percussion, programming, whistle, keyboards, stomps, handclaps - Adele – songwriter, stomps, handclaps - Randy Merrill – mastering - Serban Ghenea – mixing - John Hanes – mixing - Lasse Mårtén – engineering - Michael Ilbert – engineering - Sam Holland – engineering ## Charts ## Certifications
1,716,364
Lisa the Skeptic
1,172,735,580
null
[ "1997 American television episodes", "Stephen Jay Gould", "Television episodes about angels", "Television episodes about publicity stunts", "Television episodes written by David X. Cohen", "The Simpsons (season 9) episodes" ]
"Lisa the Skeptic" is the eighth episode of the ninth season of the American animated television series The Simpsons. It first aired on the Fox network in the United States on November 23, 1997. On an archaeological dig with her class, Lisa discovers a skeleton that resembles an angel. All of the townspeople believe that the skeleton actually came from an angel, but skeptical Lisa attempts to persuade them that there must be a rational scientific explanation. The episode's writer, David X. Cohen, developed the idea after visiting the American Museum of Natural History, and decided to loosely parallel themes from the Scopes Monkey Trial. The episode also makes allusions to actual hoaxes, such as the Cardiff Giant. It has been discussed in the context of ontology, existentialism, and skepticism; it has also been used in Christian religious education classes to initiate discussion about angels, science, and faith. ## Plot Homer Simpson attempts to claim a motorboat from a "police raffle" that turns out to be a sting operation. While returning home, the family passes a new mall being built on an area where a number of fossils were found. Lisa Simpson protests and the management allows Springfield Elementary to conduct an archaeological survey. During the excavations, Lisa finds a human skeleton with wings. Springfield's residents are convinced it is the remains of an angel, and Homer cashes in by moving the skeleton into the family's garage, charging visitors to see it. Lisa remains skeptical and asks scientist Dr. Stephen Jay Gould to test a sample of the skeleton. When Dr. Gould appears at the Simpson house the next day to tell Lisa that the tests were inconclusive, Lisa goes on television to compare the belief in angels to the belief in fictional things, such as leprechauns. In response, Springfield's religious zealots go on a rampage to destroy all scientific institutions. Appalled with the violence, Lisa goes into the garage to destroy the skeleton, but finds that it has disappeared. The mob soon converges on the Simpson household, and Lisa is arrested and put on trial for destroying the skeleton. Before the trial even begins, the skeleton is seen outside the courtroom. Everyone rushes to it to see a foreboding message added to the skeleton, warning that "The End" will come at sundown. Sunset approaches and the citizens gather around the skeleton, but nothing happens. As Lisa reprimands them, a booming voice from the skeleton silences her and announces, "The End... of high prices!" The skeleton is then hoisted over to the entrance of the new Heavenly Hills Mall. Lisa realizes the whole event was a publicity stunt for the mall, and criticizes management for taking advantage of peoples' beliefs. She attempts to boycott them, but the bargain-loving public shrugs off the exploitation and goes shopping, while Dr. Gould confesses that he never actually tested the sample. Marge observes that while it was talking, Lisa believed the angel was real. She denies this, but admits she was frightened, and thanks her mother for her support. ## Production "Lisa the Skeptic" was written by David X. Cohen, and directed by Neil Affleck. Cohen was inspired to write the episode after a trip to Manhattan's American Museum of Natural History, where he decided to turn the visit into a "business trip", and think of a possible episode connection to the museum. He initially wanted Lisa to find a "missing link" skeleton, and do an episode reminiscent of the Scopes Monkey Trial. 
Writer George Meyer convinced him instead to have the focus be on an angel skeleton, while keeping an emphasis on the conflict between religion and science. Both Cohen and Meyer acknowledged how silly the "angel skeleton" idea was, owing to the simple questions it raised, such as why an angel would die and why bones would be left behind, but they went forward with the idea anyway. In an early draft of the script, the skeleton was made of pastry dough baked by the mall's window dresser. Cohen had initially written the Stephen Jay Gould role as a generic scientist or paleontologist, not knowing that they would eventually get Gould. He had taken Gould's Introduction to Paleontology class at Harvard University. The only phrase Gould had objected to in the script was a line that introduced him as the "world's most brilliant paleontologist". His original final line was "I didn't do the test. I had more important work to do", but it was cut because the writers felt it would be funnier to give him a short final line. In an earlier version of the episode, Marge would have ended up apologizing to Lisa for not supporting her, letting the ending be more of a nod to Lisa's correct assumptions all along. ## Themes Author Joley Wood compared "Lisa the Skeptic" to an alternate reality game, in analyzing the effects of watching the television program Lost on contemporary culture and our own perceptions of reality. Dan O'Brien cited the episode in a discussion of ontology, skepticism, and religious faith, in his book An Introduction to the Theory of Knowledge. O'Brien leaves it up to the reader to decide whether or not Lisa was justified in her skepticism. In The Simpsons and Philosophy: The D'oh! of Homer, "Lisa the Skeptic" is cited as a prime example of why Lisa is seen as the epitome of a nerd. The book also cited the episode in noting that Lisa is not infallible, for when the Angel appeared to speak at the end of the episode she became as frightened as everyone else. Lisa's frustration with the marketing gimmick used by the mall developers is seen by Turner's Planet Simpson: How a Cartoon Masterpiece Documented an Era and Defined a Generation as yet another example of her conflict with corporations throughout the series. Like O'Brien, Turner also analyzed the episode in the context of Lisa's questions about existentialism, self-absorption, and consumption. In The Psychology of the Simpsons: D'oh!, the authors discuss Lisa's level of anger displayed in the episode, noting that in this particular case her anger gave her the wherewithal both to confront social injustice and to keep her mind clear for critical thinking. Mark Demming of Allmovie noted that Lisa symbolically stood for the side of reason, while her mother Marge symbolized belief and spirituality in the episode. In their 2010 book The Simpsons in the Classroom, Karma Waltonen and Denise Du Vernay note that the episode is one of the best for teachers and professors to use in religion or cultural studies courses, observing the irony that though Lisa is the only skeptic through most of the episode, she is the only one who is offended at the publicity stunt. Parvin's The Gospel According to the Simpsons: Leader's Guide for Group Study is a group study guide companion to Pinsky's The Gospel According to the Simpsons. In the section pertaining to "Lisa the Skeptic", a skeptic is defined as: "a person who doubts, questions, or suspends judgment on ideas generally accepted by others". 
The study group is asked to debate the episode in the context of skepticism as related to other unexplained phenomena, including UFOs, the Loch Ness Monster, the Abominable Snowman, the Bermuda Triangle, Atlantis, near-death experiences, reincarnation, mediumship, psychics, and fortune-telling. In Pinsky's book itself, he noted that Lisa faced the difficult task of confronting religious hysteria and blind faith, and also attempted to reconcile science within her own belief system. He also wrote that when Lisa asks Stephen Jay Gould to estimate the age of the skeleton, the issue is never raised of why angels or other spiritual entities would even leave skeletons behind in the first place. ## Cultural references The scene in the courtroom where Lisa is put on trial for stealing the skeleton is seen as a reference to the 1920s Scopes Monkey Trial in Dayton, Tennessee, which dealt with issues of separation of church and state and the debate between creationism and evolution. The publicity stunt created by the mall developers in the episode has been compared to scientific hoaxes such as the Cardiff Giant and the Piltdown Man. When Lisa asks if the townspeople are outraged at the end of the episode for being fooled by a publicity stunt, Chief Wiggum is about to answer her but is distracted when he catches sight of a Pottery Barn in the new Heavenly Hills mall. A shot of the diggers in silhouette against the sunset is modeled after Raiders of the Lost Ark (1981). ## Reception In its original broadcast, "Lisa the Skeptic" finished 37th in ratings for the week of November 17–23, 1997, with a Nielsen rating of 9.5, equivalent to approximately 9.3 million viewing households. It was the third highest-rated show on the Fox network that week, following The X-Files and King of the Hill. Donald Liebenson wrote for the Amazon.com movie review that "Bart Sells His Soul" and "Lisa the Skeptic" were among the best episodes of The Simpsons. He also noted, "Without being preachy (or particularly funny), this episode is pretty potent stuff", citing the theme of Apocalypticism towards the end of the episode. In the July 26, 2007 issue of Nature, the scientific journal's editorial staff listed the episode among "The Top Ten science moments in The Simpsons". "Lisa the Skeptic" was utilized in a Salt Lake City Episcopal Church Sunday School class in 2003, to stimulate a discussion among fourteen-year-olds about belief in angels, and the juxtaposition of science and faith. The episode was compared and contrasted with Proverbs 14:15. The episode is used by the Farmington Trust (UK) for Christian religious education, to teach children about skepticism. The episode is used as a tool, to involve the students in a debate about religion and science, as well as to discuss Lisa's own skepticism, and her respect towards others. A group of The Simpsons enthusiasts at Calvin College have also analyzed the religious and philosophical aspects of the episode, including the issue of faith versus science. ## See also - Angels in art
5,391,564
Tropical Storm Barry (2001)
1,171,670,653
Atlantic tropical cyclone
[ "2001 Atlantic hurricane season", "2001 natural disasters in the United States", "Atlantic tropical storms", "Hurricanes in Arkansas", "Hurricanes in Florida", "Tropical cyclones in 2001" ]
Tropical Storm Barry was a strong tropical storm that made landfall on the Florida Panhandle during August 2001. The third tropical cyclone and second named storm of the 2001 Atlantic hurricane season, Barry developed from a tropical wave that moved off the coast of Africa on July 24. The wave entered the Caribbean on July 29 and spawned a low-pressure area, which organized into Tropical Storm Barry on August 3. After fluctuations in intensity and track, the storm attained peak winds of 70 mph (110 km/h) over the Gulf of Mexico. Barry headed northward and moved ashore along the Gulf Coast before degenerating into a remnant low on August 7. The next day, Barry's remnants dissipated over Missouri. Unlike the devastating Tropical Storm Allison earlier in the season, Barry's effects were moderate. Nine deaths occurred: six in Cuba and three in Florida. As a tropical cyclone, Barry produced heavy rainfall that peaked at 8.9 in (230 mm) at Tallahassee, in Florida. Gusts in the area reached 79 mph (127 km/h), which was the highest wind speed recorded for the storm. The precursor tropical wave to Barry dropped large amounts of rain on southern Florida, leading to significant flooding and structural damage. Moderate flooding and wind damage occurred throughout the Florida Panhandle. As the storm's remnants tracked inland, parts of the Mississippi Valley received light precipitation. Barry caused an estimated \$30 million (2001 USD) in damage. ## Meteorological history On July 24, 2001, a tropical wave emerged off the west coast of Africa and tracked westward across the Atlantic Ocean. Little cyclonic development occurred until July 28, when convection began to increase along the wave. The wave moved into the eastern Caribbean on July 29, and its convection continued to increase while it tracked west-northwest over the subsequent few days. The disturbance emerged into the Gulf of Mexico on August 1, with rainfall noted over southern Florida and the western tip of Cuba. That same day, a broad low-pressure system developed along the wave near the Dry Tortugas at the end of the Florida Keys, which began to intensify as it moved northwestward. At around 1800 UTC on August 2, an Air Force Reserve Hurricane Hunter aircraft investigating the system discovered that the low had organized into a tropical storm, which received the name Barry. Post-hurricane season reanalysis, however, revealed that the low had become a tropical depression six hours earlier. There is uncertainty as to whether Barry actually held tropical characteristics at the time of designation, because of an upper-level low that was situated over the cyclone's surface center. When Barry became a tropical cyclone, its convection wrapped around roughly half of the center. Outflow in the eastern semicircle was good, although due to upper-level wind shear, it was restricted to southeast of the circulation. The cyclone became embedded within a mid- to upper-level trough between the ridge over the central U.S. and the ridge over the northwestern Caribbean. A strong, upper-level cyclonic shear axis extended from just south of Cape Hatteras to near Brownsville, Texas, which prevented Barry from accelerating in forward speed. The ridge over the United States weakened, thus collapsing the steering pattern; this resulted in a west-southwestward drifting motion of the tropical storm by around August 3. Early on August 3, strong westerly winds prevailed and separated the center of circulation from what limited convection remained. 
The storm quickly regained some convection, although maximum sustained winds remained weak, at about 40 mph (60 km/h). Despite a slight drop in barometric pressure, post-season analysis revealed Barry weakened into a tropical depression early on August 4 due to the persistent wind shear and falling external pressure. At 1800 UTC on August 4, the cyclone re-intensified slightly and was upgraded to a tropical storm as the shear decreased. Early on August 5, a strengthening period began as deep convection ignited over and near the low-level center. Prior to landfall, banding features developed on the eastern half of the circulation, despite some residual westerly shear. Within seven hours, the barometric pressure dropped from 1004 mb to 990 mb and overall satellite presentation had begun to improve. Barry reached its peak intensity at 1800 UTC on August 5 with winds of 70 mph (110 km/h), just shy of hurricane status. An eye formed at around the same time. At 0500 UTC on August 6, Barry increased in forward speed and made landfall at Santa Rosa Beach, Florida, with winds of 70 mph (110 km/h). Moving inland, the system weakened rapidly to a tropical depression; the National Hurricane Center issued its last advisory on the storm early on August 6. By the evening hours, maximum sustained winds near the center were around 5 mph (8.0 km/h) to 10 mph (16 km/h), as the system slowed significantly and drifted northwest at about 7 mph (11 km/h). The depression turned northwestward and steadily weakened, becoming a remnant low near Memphis, Tennessee, on August 7; the low dissipated over southeastern Missouri on August 8. ## Preparations In advance of the storm, the National Hurricane Center issued tropical storm watches and warnings for much of the U.S. Gulf Coast. They were upgraded to a hurricane warning when the storm was predicted to reach hurricane intensity. Because that strengthening failed to occur, the hurricane warning was downgraded to a tropical storm warning shortly before landfall. Farther west, the warnings for Louisiana and Mississippi were discontinued. After Tropical Storm Barry made landfall, all tropical storm warnings for the Florida Panhandle were discontinued. Flood warnings were issued for parts of Leon and Wakulla counties, while a flash flood watch was in effect for parts of southern Georgia. A tornado watch was issued for the eastern Florida Panhandle and southern Georgia, as well as portions of central and eastern Alabama. As Barry approached the Florida Panhandle, voluntary evacuations took place in eight counties. Shelters opened in six counties, though most were placed on standby. In parts of Franklin County, mandatory evacuations were ordered, and in Okaloosa County, tolls on the Mid-Bay Bridge were suspended. Forty C-130 cargo aircraft and about 300 personnel from Hurlburt Field were moved to Little Rock Air Force Base in Arkansas to avoid the storm's projected path. In Tallahassee, county officials filled sandbags in areas vulnerable to flooding. At Grand Isle State Park, park rangers moved picnic tables out of tidal range and closed the camping grounds for a period of time. Additionally, the storm forced NASA to delay a shuttle launch in southern Florida. Elsewhere, thousands of personnel were evacuated from several offshore oil platforms. The city of New Orleans closed 60 of its 72 floodgates to avoid possible flooding. Throughout southeastern Louisiana, including New Orleans, roughly 500 Red Cross volunteers and staff members were on standby. 
The threat of the storm forced the cancellation of an 'N Sync concert at Pro Player Stadium. ## Impact ### Cuba and Florida The precursor tropical wave to Barry dropped widespread rainfall in western Cuba, but no damage was reported. Offshore, high seas sank a Cuban refugee boat, drowning 6 of its 28 passengers. Three people in Florida were killed by the storm, and total damage was estimated at around \$30 million (2001 USD). In southern Florida, the precursor to Barry produced 3 in (75 mm) to 8 in (200 mm) of rain, with rainfall peaking at 13 in (330 mm). The rain helped relieve persistent drought conditions; however, it caused significant flooding in Martin County on August 2, where a total of 300 homes received water damage. About 63 structures and 6 mobile homes in the county sustained major damage. On the Treasure Coast, catfish reportedly swam through flooded streets. Winds downed a 60 ft (18 m) radio tower, striking a house. Due to the initial slow movement of the storm, outer rainbands began affecting the Florida Panhandle on August 4, with the heaviest rainfall observed on August 5–6. The storm dropped 5 in (125 mm) to 9 in (225 mm) of rain; the highest official report was 8.9 in (230 mm) at Tallahassee, though unofficial reports ranged as high as 11 in (280 mm). The rainfall inundated several structures in Bay County due to roof damage. Flooding occurred in Leon County and parts of Apalachicola National Forest, where torrential rains flowed into the Cascade Lakes, Lake Bradford and Munson Slough; the Munson Slough rose to its highest level since 1994. Numerous county and secondary roads were closed by floodwater in Walton, Washington, and Bay counties, as well as in the Tallahassee area. In and around Tallahassee, 100 vehicles were stalled by flood waters and towed, while four residents of an apartment complex on Allen Road were forced to evacuate due to rising waters. Sporadic flooding also occurred in Franklin County and Wakulla County. An indirect death occurred from a traffic accident due to heavy rain in Jackson County. Wind gusts peaked at 79 mph (127 km/h) at the Eglin Air Force Base Range Station C-72. Light to moderate winds were widespread, causing damage throughout Walton, Washington, Bay, Calhoun, Gulf and Okaloosa counties. Trees were downed or damaged, and several structures suffered light wind damage. Window damage was reported at a high-rise condominium building in Destin, while nearby, the Mid-Bay Bridge was closed due to high winds. The Freeport Elementary School in Walton County sustained minor roof damage. Storm surge was generally light, ranging from 2 ft (0.61 m) to 3 ft (0.91 m), with only minor beach erosion as a result. As a tropical system, Barry spawned a few weak tornadoes that caused minor damage. In an outer rain band, a lightning strike in Jacksonville killed one person. Another death was blamed on a rip current off Sanibel Island. In total, the storm left 34,000 customers in the state without power. ### Elsewhere Tropical Storm Barry dropped light to moderate rainfall across Alabama, peaking at 4.57 in (116 mm) near the town of Evergreen. About 2 in (50 mm) fell over the state's peanut-growing region, helping to alleviate drought conditions. Heavy showers were also reported in the Birmingham area. Despite moderate rainfall totals inland, coastal locations received very little precipitation. Minor street flooding occurred in Geneva, Enterprise and New Brockton. Wind gusts peaked at 39 mph (63 km/h) at Montgomery, although damage was light, mostly from downed trees. 
Damage to awnings and small structures was reported in Florala. Barry's remnants produced light rainfall across Mississippi and Georgia, though no damage was reported. As the storm continued to track inland, it dropped up to 3 in (75 mm) of rain throughout Arkansas, Missouri and western Tennessee. ## See also - List of Florida hurricanes - Other storms of the same name
59,166,555
Bernard A. Maguire
1,166,085,063
Irish-American Jesuit priest
[ "1818 births", "1886 deaths", "19th-century American Jesuits", "American Roman Catholic clergy of Irish descent", "Burials at the Jesuit Community Cemetery", "Christian clergy from County Longford", "Deans and Prefects of Studies of the Georgetown University College of Arts & Sciences", "Georgetown University alumni", "Irish emigrants to the United States", "People from Edgeworthstown", "Presidents of Georgetown University", "Saint John's Catholic Prep (Maryland) alumni", "St. Stanislaus Novitiate (Frederick, Maryland) alumni" ]
Bernard A. Maguire SJ (February 11, 1818 – April 26, 1886) was an Irish-American Catholic priest and Jesuit who served twice as the president of Georgetown University. Born in Ireland, he emigrated to the United States at the age of six, and his family settled in Maryland. Maguire attended Saint John's College in Frederick, Maryland, and then entered the Society of Jesus in 1837. He continued his studies at Georgetown University, where he also taught and was prefect, until his ordination to the priesthood in 1851. In 1852, Maguire was appointed president of Georgetown University. His tenure is regarded as successful; new buildings were erected, the number of students increased, and the preparatory division was partially separated from Georgetown College. Upon the end of his presidency in 1858, he engaged in pastoral and missionary work in Washington, D.C., Maryland, and Virginia, and developed a reputation as a skilled preacher. In the aftermath of the American Civil War, which devastated the university, Maguire again became president of Georgetown in 1866. The long-planned Georgetown Law School was established at the end of his presidency. His term ended in 1870, and he returned to missionary work, traveling throughout the country. He died in Philadelphia in 1886. ## Early life Bernard A. Maguire was born on February 11, 1818, in Edgeworthstown, County Longford, Ireland. He emigrated to the United States with his parents, at the age of six. They took up residence near Frederick, Maryland, where his father worked on the construction of the Chesapeake and Ohio Canal. John McElroy, a Catholic priest, periodically visited the Maguires and other families working on the canal project. He thought Bernard would be suitable for the priesthood and ensured that he received an education. McElroy enrolled Maguire at Saint John's College in Frederick, a Jesuit school of which McElroy was president. Among Maguire's professors there was Virgil Horace Barber. In school, Maguire and his classmate, Enoch Louis Lowe, were continually at the top of their class, and they participated in oratory declamations together. ### Jesuit formation On September 20, 1837, Maguire entered the Society of Jesus, and proceeded to the Jesuit novitiate in Frederick, where he was supervised by Francis Dzierozynski. He then began his higher education at Georgetown University; from 1839 to 1840, he studied rhetoric, and from 1840 to 1841, philosophy. While studying the latter, he also served as prefect of the university. His time at Georgetown was paused during the 1842–1843 academic year, while he taught mathematics and was the prefect at Saint John's College; he also oversaw the school's library and museum. Afterwards, Maguire returned to Georgetown, where he taught grammar, mathematics, and French; for the academic year of 1845 and 1846, he ceased teaching grammar so that he could again become prefect. In 1846, Maguire began his theological education for the priesthood. He took leave of his studies during the academic year of 1849–1850 to catechize the students at Georgetown. During that time, there was an uprising among the students, stemming from a dispute between the Philodemic Society and the first prefect over when the club were permitted to hold meetings. As tensions escalated, the first prefect, Burchard Villiger, expelled three students, prompting an uproar among the student body. 
Believing the expulsion applied to all the students involved in the dispute, 40 students left the university and took up residence in hotels in Washington. They wrote the prefect demanding that they be allowed to return without punishment and that the first prefect be replaced by someone new. After word of this standoff reached the local newspapers, Maguire met with the students to persuade them to peacefully return. Eventually, the students agreed to unconditionally return and issued an apology. At the same time, Villiger resigned as first prefect, and Maguire was selected to replace him. On September 27, 1851, he was ordained a priest by John McGill, the Bishop of Richmond. ## First presidency of Georgetown University During his tertianship from 1851 to 1852, which was supervised by Felix Cicaterri, Maguire was elected to succeed Charles H. Stonestreet as the president of Georgetown University in December 1852. Soon thereafter, the Jesuit Superior General confirmed Maguire's election by the board of directors. Maguire officially assumed the office on January 25, 1853. As president, he was well liked by the students, despite having a reputation for being stern. Some students were displeased with the prefect's imposition of discipline and Maguire's declination to overrule him; they staged another uprising, throwing stones and inkwells to break the windows. The rebellion was quickly quashed after a lecture at breakfast the following morning, in which Maguire appealed to the students' sense of honor. Six students were expelled as a result. Maguire promoted dramatic and literary societies among the students. In April 1853, the university was visited by the Catholic intellectual Orestes Brownson, and the commencement of 1854 was attended by Franklin Pierce, the president of the United States. A fire broke out on December 6, 1854, destroying the shed where the tailor and shoemaker worked. The university's vice president noticed the fire during the night and awoke others who prevented it from spreading to the other buildings. Despite the construction of new buildings, a significant increase in the number of students left Georgetown pressed for physical accommodations. Therefore, Maguire sought to erect another building, but these plans were rendered untenable by the Panic of 1857. The economic crisis also made it difficult for the university to hire a sufficient number of faculty. Historian Robert Emmett Curran regards Maguire's tenure as being overall successful. On October 5, 1858, his term came to an end and he was succeeded by John Early. ### Preparatory division Several improvements were made to the university's facilities during Maguire's presidency. The preparatory division (which later became Georgetown Preparatory School) was separated from Georgetown College in 1851, both to reduce any negative influence of the older students on the younger ones and because the intermixing of ages dissuaded some older students from attending Georgetown's higher education division. The preparatory division was further segregated with the creation of separate housing for the younger students in 1852 and the institution of a separate academic calendar in 1856. This separation was effective in producing a significant increase in the number of college-aged students enrolling. Construction on a separate building for the preparatory division began in June 1854. 
The five-story building connected the two buildings to its east and west, and contained a playroom, public hall, classrooms, study hall, and dormitory space. More modest than was originally envisioned several years before, the Preparatory Building cost \$20,000 and was complete by the commencement of 1855. It was outfitted with new gas lamps, rather than oil lamps. The Preparatory Building was later renamed Maguire Hall. ## Pastoral work After his first presidency at Georgetown, Maguire was sent to be the pastor at St. Joseph's Church in Baltimore in 1858. His first time engaging in pastoral work, Maguire garnered a reputation as a skilled orator. In 1859, he was transferred to St. Aloysius Church in Washington, D.C., where his renown as a preacher grew, and his sermons caused many Protestants to convert to Catholicism. Maguire left St. Aloysius at the end of 1864 for Frederick, Maryland, from where he traveled as a missionary throughout Maryland and Virginia. This missionary work also produced conversions to Catholicism. ## Return to Georgetown Maguire became the president of Georgetown University for a second time on January 1, 1866. Replacing John Early, he took office in the aftermath of the American Civil War. Enrollment at the university had declined precipitously during the war, and few students remained by the time Maguire took office. From 1859 to 1861, the number of students dropped from 313 to 17. As a result of the decreased enrollment, the university was left in a precarious financial state. By the end of Maguire's term, the number of students had begun to rebound. Georgetown's physical campus also suffered during the war, which Maguire described as being "nearly ruined." Upon the end of the 1866 academic year in July, he immediately began to repair and expand the buildings that were damaged from being used as barracks and a military hospital by the Union Army. Within three months, the work was complete. To symbolize post-war national unity, Maguire adopted the respective colors of the Union and Confederate Armies, blue and gray, as the school's official colors. Discussions about creating a law school began during Early's presidency but were suspended due to the war. At the suggestion of a future university president, Patrick F. Healy, these discussions resumed and became more concrete by 1869. Eventually, the university's board of directors approved the establishment of Georgetown Law School in March 1870. Maguire desired the law school to be more integrated with the rest of the university than the medical school, which operated largely autonomously. He selected the first six faculty members and announced the creation of the new school at the university commencement in June 1870. The law school's first classes began in October. President Ulysses S. Grant attended the commencement of 1869 and conferred the degrees. That year, the Jesuit scholasticate, which trained Jesuits in their religious formation, was moved from Georgetown to Woodstock, Maryland, becoming the independent Woodstock College. Maguire's health had begun to deteriorate by 1869, and the new provincial superior, Joseph Keller, began considering potential successors, in consultation with the Jesuit superiors in Rome. Maguire's tenure came to an end in July 1870, and John Early was again named as his successor. ## Later years Following his second presidency of the university, Maguire returned to St. Aloysius Church as the pastor. He preached regularly until retiring from the position in May 1875. 
He returned to missionary work, preaching in Canada and San Francisco. He resigned these duties in 1884, when his health prevented him from continuing. In April 1886, Maguire led a retreat on Passion Sunday at Old St. Joseph's Church in Philadelphia, Pennsylvania, after having just finished leading a triduum for men at the Cathedral of the Assumption in Baltimore. On the third day of the retreat, he fell ill and was taken to St. Joseph's Hospital, where he received last rites and died on April 26, 1886. His requiem mass was held at St. Aloysius Church in Washington, and he was buried at the Jesuit Community Cemetery at Georgetown University.
3,822,718
Three Sisters (Oregon)
1,172,380,273
Three volcanic peaks in Oregon, U.S.
[ "Cascade Range", "Cascade Volcanoes", "Deschutes National Forest", "Dormant volcanoes", "Landmarks in Oregon", "Mountains of Deschutes County, Oregon", "Mountains of Lane County, Oregon", "Mountains of Oregon", "North American 3000 m summits", "Shield volcanoes of the United States", "Stratovolcanoes of Oregon", "Subduction volcanoes", "Volcanic crater lakes", "Volcanoes of Deschutes County, Oregon", "Willamette National Forest" ]
The Three Sisters are closely spaced volcanic peaks in the U.S. state of Oregon. They are part of the Cascade Volcanic Arc, a segment of the Cascade Range in western North America extending from southern British Columbia through Washington and Oregon to Northern California. Each over 10,000 feet (3,000 meters) in elevation, they are the third-, fourth- and fifth-highest peaks in Oregon. Located in the Three Sisters Wilderness at the boundary of Lane and Deschutes counties and the Willamette and Deschutes national forests, they are about 10 miles (16 kilometers) south of the nearest town, Sisters. Diverse species of flora and fauna inhabit the area, which is subject to frequent snowfall, occasional rain, and extreme temperature variation between seasons. The mountains, particularly South Sister, are popular destinations for climbing and scrambling. Although they are often grouped together as one unit, the three mountains have their own individual geology and eruptive history. Neither North Sister nor Middle Sister has erupted in the last 14,000 years, and it is considered unlikely that either will ever erupt again. South Sister last erupted about 2,000 years ago and could erupt in the future, threatening life within the region. After satellite imagery detected tectonic uplift near South Sister in 2000, the United States Geological Survey improved monitoring in the immediate area. ## Geography The Three Sisters are at the boundary of Lane and Deschutes counties and the Willamette and Deschutes national forests in the U.S. state of Oregon, about 10 miles (16 kilometers) south of the nearest town of Sisters. The three peaks are the third-, fourth-, and fifth-highest in Oregon, and contain 16 named glaciers. Their ice volume totals 5.6 billion cubic feet (160 million cubic meters). The Sisters were named Faith, Hope and Charity by early settlers, but are now known as North Sister, Middle Sister and South Sister, respectively. ### Wilderness The Three Sisters Wilderness covers an area of 281,190 acres (1,137.9 km<sup>2</sup>), making it the second-largest wilderness area in Oregon. Designated by the United States Congress in 1964, it borders the Mount Washington Wilderness to the north and shares its southern edge with the Waldo Lake Wilderness. The area includes 260 mi (420 km) of trails and many forests, lakes, waterfalls, and streams, including the source of Whychus Creek. The Three Sisters and nearby Broken Top account for about a third of the Three Sisters Wilderness, and this area is known as the Alpine Crest Region. Rising from about 5,200 ft (1,600 m) to 10,358 ft (3,157 m) in elevation, the Alpine Crest Region features the wilderness area's most-frequented glaciers, lakes, and meadows. ### Physical geography Weather varies greatly in the area due to the rain shadow caused by the Cascade Range. Air from the Pacific Ocean rises over the western slopes, which causes it to cool and dump its moisture as rain (or snow in the winter). Precipitation increases with elevation. Once the moisture is wrung from the air, the air descends on the eastern side of the crest, which causes it to become warmer and drier. On the western slopes, precipitation ranges from 80 to 125 in (200 to 320 centimeters) annually, while precipitation over the eastern slopes varies from 40 to 80 in (100 to 200 cm). Temperature extremes reach 80 to 90 degrees Fahrenheit (27 to 32 degrees Celsius) in summer and −20 to −30 °F (−29 to −34 °C) in winter. 
The Three Sisters have about 130 snowfields and glaciers ranging in altitude from 6,742 to 10,308 ft (2,055 to 3,142 m) with a cumulative surface area of about 2,500 acres (10 km<sup>2</sup>). The Linn and Villard Glaciers are north of the North Sister summit, while the Thayer Glacier is on its eastern slope. The Collier Glacier is nestled between North Sister and Middle Sister and flows to the northwest. The Renfrew and Hayden Glaciers are on the northwestern and northeastern slopes of Middle Sister, respectively, while the Diller Glacier is on its southeastern slope. The Irving, Carver and Skinner Glaciers lie between Middle Sister and South Sister. Finally, around the summit of South Sister, in a clockwise direction, are the Prouty, Lewis, Clark, Lost Creek, and Eugene Glaciers. The Collier Glacier, despite a 4,900 ft (1,500 m) retreat and a 64% loss of its surface area between 1910 and 1994, is generally considered to be the largest glacier of the Three Sisters at 160 acres (0.65 km<sup>2</sup>). Eliot Glacier on Mount Hood is now two-and-a-half times larger than the Collier Glacier. Some sources consider the Prouty Glacier to be larger than the Collier Glacier. When Little Ice Age glaciers retreated during the 20th century, water filled in the spaces left behind, forming moraine-dammed lakes, which are more common in the Three Sisters Wilderness than anywhere else in the contiguous U.S. The local area has a history of flash floods, including an event on October 7, 1966, caused by a sudden avalanche; this flash flood reached the Cascade Lakes Scenic Byway. Concerned about the hazard of similar flooding events, scientists from the U.S. Geological Survey (USGS) determined in the 1980s that Carver Lake on South Sister could flood and breach its natural dam, producing a large mudflow that could endanger wilderness visitors and the town of Sisters. Studies at Collier Lake and Diller Lake suggested that both had breached their dams in the early 1940s and in 1970, respectively. Other moraine-dammed lakes within the wilderness area include Thayer Lake on North Sister's east flank and four members of the Chambers Lakes group between Middle and South Sister. Before settlement of the area at the end of the 19th century, wildfires frequently burned through the local forests, especially the ponderosa pine forests on the eastern slopes. Due to fire suppression over the past century, the forests have become overgrown, and at higher elevations, they are further susceptible to summertime fires, which threaten surrounding life and property. In the 21st century, wildfires have been larger and more common in the Deschutes National Forest. In September 2012, a lightning strike caused a fire that burned 41 square miles (110 km<sup>2</sup>) in the Pole Creek area within the Three Sisters Wilderness, leaving the area closed until May 2013. In August 2017, officials closed 417 sq mi (1,080 km<sup>2</sup>) in the western half of the Three Sisters Wilderness, including 24 mi (39 km) of the Pacific Crest Trail, to the public because of 11 lightning-caused fires, including the Milli Fire. As a result of the increasing incidence of fires, public officials have factored the role of wildfire into planning, including organizing prescribed fires with scientists to protect habitats at risk while minimizing adverse effects on air quality and environmental health. 
## Geology The Three Sisters join several other volcanoes in the eastern segment of the Cascade Range known as the High Cascades, which trends north–south. Constructed towards the end of the Pleistocene Epoch, these mountains are underlain by more ancient volcanoes that subsided due to parallel north–south faulting in the surrounding region. Part of the Cascade Volcanic Arc and the Cascade Range, each of the three volcanoes formed at different times from several variable magmatic sources. The amount of rhyolite present in the lavas of the younger two mountains is unusual relative to nearby peaks. The Three Sisters form the leading edge of a rhyolitic crustal-melting anomaly, which might be explained by a combination of mantle flow (movement of Earth's solid silicate mantle layer caused by convection currents) and decompression that has generated similar melting and rhyolitic volcanism nearby for the past 16 million years. Like other Cascade volcanoes, the Three Sisters were fed by magma chambers produced by the subduction of the Juan de Fuca tectonic plate under the western edge of the North American tectonic plate. The three mountains were also shaped by the changing climate of the Pleistocene epoch, during which multiple glacial periods occurred and glacial advance eroded the mountains. The Three Sisters form the center of a region of closely grouped volcanic peaks. This is in contrast to the typical 40-to-60 mi (64-to-97 km) spacing between volcanoes elsewhere in the Cascades. Among the most active volcanic areas in the Cascades and one of the most densely populated volcanic centers in the world, the Three Sisters region includes nearby peaks such as Belknap Crater, Mount Washington, Black Butte, and Three Fingered Jack to the north, and Broken Top and Mount Bachelor to the south. Most of the surrounding volcanoes consist of mafic (basaltic) lavas; only South and Middle Sister have an abundance of silicic rocks such as andesite, dacite, and rhyodacite. Mafic magma is less viscous; it produces lava flows and is less prone to explosive eruptions than silicic magma. The region was active in the Pleistocene, with eruptions between about 650,000 and about 250,000 years ago from an explosively active complex known as the Tumalo volcanic center. This area features andesitic and mafic cinder cones such as Lava Butte, as well as rhyolite domes. Cinder cones accumulate from the airfall of many pyroclastic rock fragments of various sizes, while the viscous rhyolite domes extrude onto the surface like toothpaste. The Tumalo volcano spread ignimbrites and Plinian deposits in ground eruptions across the area, similar to the eruption of Vesuvius that destroyed Pompeii. These deposits spread from Tumalo to the town of Bend. Basaltic lava flows from North Sister overlie the youngest Tumalo pyroclastic deposits, indicating that North Sister was active more recently than 260,000 years ago. ### North Sister North Sister, also known as "Faith", is the oldest and most highly eroded of the three, with zigzagging rock pinnacles between glaciers. It is a shield volcano that overlies a more ancient shield volcano named Little Brother. North Sister is 5 mi (8 km) wide, and its summit elevation is 10,090 ft (3,075 m). Consisting primarily of basaltic andesite, it has a more mafic composition than the other two volcanoes. Its deposits are rich in palagonite and red and black cinders, and they are progressively more iron-rich the younger they are. 
North Sister's lava flows demonstrate similar compositions throughout the mountain's long eruptive history. The oldest lava flows on North Sister have been dated to roughly 311,000 years ago, though the oldest reliably dated deposits are approximately 119,000 years old. Estimates place the volcano's last eruption at 46,000 years ago, in the Late Pleistocene. North Sister possesses more dikes, formed by lava intruding into pre-existing rock, than any similar Cascade peak. Many of these dikes were pushed aside by the intrusion of a 980 ft (300 m)-wide volcanic plug. The dikes and plug were exposed by centuries of erosion. At one point, the volcano stood more than 11,000 ft (3,400 m) high, but erosion has since reduced its volume by a quarter to a third. The exposed plug forms North Sister's summits at Prouty Peak and the South Horn. ### Middle Sister Middle Sister, also known as "Hope", is a basaltic stratovolcano that has also erupted more-viscous andesite, dacite, and rhyodacite. The smallest and least studied of the three, Middle Sister began eruptive activity 48,000 years ago, and it was primarily built by eruptions occurring between 25,000 and 18,000 years ago. One of the earliest major eruptive events (about 38,000 years ago) produced the rhyolite of Obsidian Cliffs on the mountain's northwestern flank. Thick and rich in dacite, the flows extended from the northern and southern sides. They stand in contrast to older, andesitic lava remains that reach as far as 4.3 mi (7 km) from its base. With an elevation of 10,052 ft (3,064 m), the mountain is cone-shaped. The eastern side has been heavily eroded by glaciation, while the western face is mostly intact. Of the Three Sisters, Middle Sister has the largest ice cover. The Hayden and Diller glaciers continue to cut into the east face, while the Renfrew Glacier sits on the northwestern slope. The large but retreating Collier Glacier, which contains 700 million cubic feet (20×10<sup>6</sup> m<sup>3</sup>) of ice and is the thickest and largest glacier in the region, descends along the north side of Middle Sister, cutting into the west side of North Sister. Erosion from Pleistocene and Holocene glaciation exposed a plug near the center of Middle Sister. ### South Sister South Sister, also known as "Charity", is the tallest volcano of the trio, standing at 10,363 ft (3,159 m). The eruptive products range from basaltic andesite to rhyolite and rhyodacite. It is a predominantly rhyolitic stratovolcano overlying an older shield structure. Its modern structure is no more than 50,000 years old, and it last erupted about 2,000 years ago. Although its first eruptive events from 50,000 to 30,000 years ago were predominantly rhyolitic, between 38,000 and 32,000 years ago the volcano began to alternate between dacitic/rhyodacitic and rhyolitic eruptions. The volcano built a broad andesitic cone, forming a steep summit cone of andesite about 27,000 years ago. South Sister remained dormant for 15,000 years, after which its composition shifted from dacitic to more rhyolitic lava. An eruptive episode about 2,200 years ago, termed the Rock Mesa eruptive cycle, first spread volcanic ash from vents on the south and southwest flanks, followed by a thick rhyolite lava flow. Next, the Devils Hill eruptive cycle consisted of explosive ash eruptions followed by viscous rhyolitic lava flows. 
Unlike the previous eruptive period, it was caused by the intrusion of a dike of new silicic magma that erupted from 20 vents on the southeast side and from a smaller line on the north side. These eruptions generated pyroclastic flows and lava domes from vents on the northern, southern, eastern, and southeastern sides of the volcano. These relatively recent, postglacial eruptions suggest the presence of a silicic magma reservoir under South Sister, one that could perhaps lead to future eruptions. Unlike its sister peaks, South Sister has an uneroded summit crater about 0.25 mi (0.40 km) in diameter that holds a small crater lake known as Teardrop Pool, the highest lake in Oregon. Its cone consists of basaltic andesite along with red scoria and tephra, with exposed black and red inner walls made of scoria. Hodge Crest, a false peak, formed between 28,000 and 24,000 years ago, roughly around the same time as the main cone. Despite its relatively young age, every part of South Sister other than its peak has undergone significant erosion due to Pleistocene and Holocene glaciation. Between 30,000 and 15,000 years ago, South Sister's southern flanks were covered with ice streams, and a small amount of ice extended below 3,600 ft (1,100 m). On the volcano's northern flank, erosion from these glaciers exposed a headwall about 1,200 ft (370 m) high. During the Holocene, smaller glaciers formed, alternating between advance and retreat, depositing moraines and till between 7,000 and 9,000 ft (2,100 and 2,700 m) on the mountain. The Lewis and Clark glaciers have cirques, or glacial valleys, that made the outer walls of the crater rim significantly steeper. The slopes of South Sister contain small glaciers, including the Lost Creek and Prouty glaciers. ### Recent history and potential hazards When the first geological reconnaissance of the Three Sisters region was published in 1925, its author, Edwin T. Hodge, suggested that the Three Sisters and five smaller mountains in the present-day wilderness area were the remains of an enormous collapsed volcano that had been active during the Miocene or early Pliocene epochs. Naming this ancient volcano Mount Multnomah, Hodge theorized that it had collapsed to form a caldera just as Mount Mazama collapsed to form Crater Lake. In the 1940s, Howel Williams completed an analysis of the Three Sisters vicinity and concluded that Multnomah had never existed, instead demonstrating that each volcano in the area possessed its own individual eruptive history. Williams' 1944 paper defined the basic outline of the Three Sisters vicinity, though he lacked access to chemical techniques and radiometric dating. Over the past 70 years, scientists have published several reconnaissance maps and petrographic studies of the Three Sisters, including a detailed geological map published in 2012. Neither North nor Middle Sister is likely to resume volcanic activity. An eruption from South Sister would pose a threat to nearby life, as the proximal danger zone extends 1.2 to 6.2 mi (2 to 10 km) from the volcano's summits. During an eruption, tephra could accumulate to 1 to 2 in (25 to 51 millimetres) in the city of Bend, and mudflows and pyroclastic flows could run down the sides of the mountain, threatening any life in their paths. Eruptions from South Sister could be either explosive or effusive, though an eruption of ash with local volcanic rock accumulation and slow lava flows is considered most likely. 
The Three Sisters area does not have fumarole activity, although there are hot springs west of South Sister. From 1986 to 1987, the USGS surveyed the Three Sisters vicinity with tilt-leveling networks and electro-optical distance meters, but South Sister was not the subject of close geodetic analysis for the next two decades. The volcano was found to be potentially active in 2000, when satellite imagery showed a deforming tectonic uplift 3 mi (4.8 km) west of the mountain. The ground began to bulge in late 1997, as magma started to pool about 4 mi (6.4 km) underground. Scientists became concerned that the volcano was awakening, but examination of interferograms, or diagrams of patterns formed by wave interference, revealed that only small amounts of deformation occurred. A 2002 study inferred that the composition of intruding magma was basaltic or rhyolitic. A map at the Lava Lands Visitor Center of the Newberry National Volcanic Monument south of Bend shows the extent of the uplift, which reaches a maximum of 11 in (28 cm). In 2004, an earthquake swarm occurred with an epicenter in the area of uplift; the hundreds of small earthquakes subsided after several days. According to a 2011 study, this swarm may have initiated a significant decrease in the peak uplift rate, which dropped by 80% after 2004. By 2007 the uplift had slowed, though the area was still considered potentially active. A study published in 2010 described the magma intrusion as having a volume of about 2.1 billion cubic feet (59,000,000 m<sup>3</sup>). Scientists determined in 2013 that the uplift had slowed to a rate of about 0.3 inches (7.6 mm) per year, compared to up to 2 in (51 mm) per year in the early 2000s. Because of the uplift at South Sister, the USGS planned to increase monitoring of the Three Sisters and their vicinity by installing a global positioning system (GPS) receiver, sampling airborne and ground-based gases, and adding seismometers. The agency completed a GPS survey campaign, including planning, documentation, and data processing and archiving, at South Sister in 2001, as well as annual InSAR radar observations from 1992 to 2001, and it followed up with campaign GPS and tilt-leveling surveys in August 2004. As of 2009, semi-permanent GPS networks have been deployed every year at the Three Sisters, continually showing that inflation persists at South Sister. ## Ecology The ecology of the Three Sisters reflects their location in central Oregon's Cascade Range. The westside slopes from 3,000 to 6,500 ft (910 to 2,000 m) lie in the Western Cascade Montane Highlands ecoregion, where precipitation is abundant. Forests here consist predominantly of Douglas-fir and western hemlock, with minor components of mountain hemlock, noble fir, subalpine fir, grand fir, Pacific silver fir, red alder, and Pacific yew. Vine maples, rhododendron, Oregon grape, huckleberry, and thimbleberry grow beneath the trees. Douglas-fir is dominant below 3,500 ft (1,100 m), while western hemlock dominates above. Timberline at the Three Sisters occurs at 6,500 ft (2,000 m), where the forest canopy opens and the subalpine zone begins. These forests lie in the Cascade Crest montane forest ecoregion. Mountain hemlock trees dominate the forest in this area, while meadows sustain sedges, dwarf willows, tufted hairgrass, lupine, red paintbrush, and Newberry knotweed. Tree line occurs at 7,500 feet (2,300 m). The vegetation in this harsh alpine zone consists of herbaceous and shrubby subalpine meadows. 
This zone has a large winter snowpack, with low temperatures for much of the year. There are patches of mountain hemlock, subalpine fir, and whitebark pine near the treeline, as well as wet meadows supporting Brewer's sedge, Holm's sedge, black alpine sedge, tufted hairgrass, and alpine aster. Near the peaks of the Three Sisters, there are extensive areas of bare rock. The lower eastern slopes of the Three Sisters below 5,200 ft (1,600 m) lie in the Ponderosa Pine/Bitterbrush Woodland ecoregion. This ecoregion has less precipitation than the western slopes and has soil derived from Mazama Ash (ash erupted from Mount Mazama). Stream flow changes little throughout the year due to the region's volcanic hydrogeology. These slopes support nearly pure stands of ponderosa pine. Understory vegetation includes greenleaf manzanita and snowberry at higher elevations and antelope bitterbrush at lower elevations. Mountain alder, stream dogwood, willows, and sedges grow along streams. Local fauna includes birds such as blue and ruffed grouse, small mammals like pikas, chipmunks, and golden-mantled ground squirrels, and larger species like the Columbian black-tailed deer, mule deer, Roosevelt elk, and American black bear. Bobcats, cougars, coyotes, wolverines, martens, badgers, weasels, bald eagles, and several hawk species are among the predators found throughout the Three Sisters area. ## Human history The Three Sisters area was occupied by Amerindians from the end of the last glaciation, mainly the Northern Paiute to the east and Molala to the west. They harvested berries, made baskets, hunted, and made obsidian arrowheads and spears. Traces of rock art can be seen at Devils Hill, south of South Sister. The first Westerner to discover the Three Sisters was the explorer Peter Skene Ogden of the Hudson's Bay Company in 1825. He described "a number of high mountains" south of Mount Hood. Ten months later, in 1826, the botanist David Douglas reported snow-covered peaks visible from the Willamette Valley. As the Willamette Valley was gradually colonized in the 1840s, Euro-Americans approached the summits from the west and probably named them individually at that time. Explorers, such as Nathaniel Jarvis Wyeth in 1839 and John Frémont in 1843, used the Three Sisters as a landmark from the east. The area was further explored by John Strong Newberry in 1855 as part of the Pacific Railroad Surveys. In 1862, to connect the Willamette Valley to the ranches of Central Oregon and the gold mines of eastern Oregon and Idaho, Felix and Marion Scott traced a route over Scott Pass. This route was known as the Scott Trail, but was superseded in the early 20th century by the McKenzie Pass Road further north. Around 1866, there were reports that one of the Three Sisters emitted some fire and smoke. In the late 19th century, there was extensive wool production in eastern Oregon. Shepherds led their herds of 1,500 to 2,500 sheep to the Three Sisters. They arrived in eastern foothills near Whychus Creek by May or June, and then climbed to higher pastures in August and September. By the 1890s, the area was becoming overgrazed. Despite regulatory measures, sheep grazing peaked in 1910 before being banned in the 1930s at North and Middle Sister, and in 1940 at South Sister. In 1892, President Grover Cleveland decided to create the Cascades Forest Reserve, based on the authority of the Forest Reserve Act of 1891. 
Cascades Reserve was a strip of land from 20 to 60 mi (30 to 100 km) wide around the main crest of the Cascade Range, stretching from the Columbia River almost to the border with California. In 1905, administration of the Reserve was moved from the General Land Office to the United States Forest Service. The Reserve was renamed the Cascade National Forest in 1907. In 1908, the forest was split: the eastern half became the Deschutes National Forest, while the western half merged in 1934 to form the Willamette National Forest. Most of the Three Sisters glaciers were described for the first time by Ira A. Williams in 1916. Collier Glacier, between North and Middle Sister, was first studied and mapped by Edwin Hodge. Ruth Hopsen Keen compiled a forty-year photographic record of Collier Glacier, documenting part of the 0.93 mi (1,500 m) retreat of the glacier from 1910 to 1994. In the 1930s, the Three Sisters were part of a proposed National Monument. To maintain its authority over the region, the Forest Service decided to create a 191,108-acre (773 km<sup>2</sup>) primitive area in 1937. The following year, at the instigation of Forest Service employee Bob Marshall, it was expanded by 55,620 acres (225 km<sup>2</sup>) in the French Pete Creek basin. In 1957, the Forest Service decided to reclassify the area as a wilderness area but removed the old-growth forest of the French Pete Creek basin from the protected area, despite the protests of local environmental activists. The area became part of the National Wilderness Preservation System when the Wilderness Act of 1964 was passed, but the area still excluded the French Pete Creek basin. Responding to environmental mobilization throughout the state of Oregon, Congress passed the Endangered American Wilderness Act of 1978, which led to the reinstatement of French Pete Creek and its surroundings in the Three Sisters Wilderness Area. The Oregon Wilderness Act of 1984 further expanded the wilderness with the addition of 38,100 acres (154 km<sup>2</sup>) around Erma Bell Lakes. ## Recreation The Three Sisters are a popular climbing destination for hikers and mountaineers. Due to extensive erosion and rockfall, North Sister is the most dangerous climb of the three peaks, and is often informally called the "Beast of the Cascades". One of its peaks, Little Brother, can be safely scrambled. The first recorded ascent of North Sister was made in 1857 by six people, including Oregon politicians George Lemuel Woods and James McBride, according to a story published in Overland Monthly in 1870. Today, the common trail covers 11 mi (18 km) round-trip, gaining 3,165 ft (965 m) in elevation. Middle Sister can also be scrambled, for a round trip of 16.4 mi (26.4 km) and an elevation gain of 4,757 ft (1,450 m). South Sister is the easiest of the three to climb, and has a trail all the way to the summit. The standard route up the south ridge runs for 12.6 mi (20.3 km) round trip, rising from 5,446 ft (1,660 m) at the trailhead to 10,363 ft (3,159 m) at its summit. It is a popular climb during August and September, with up to 400 people each day. ## See also - List of Ultras of the United States - List of volcanoes in the United States
15,715,762
Battle of Kaiapit
1,111,299,630
1943 engagement in New Guinea
[ "1943 in Papua New Guinea", "Battles and operations of World War II involving Papua New Guinea", "Battles of World War II involving Australia", "Battles of World War II involving Japan", "Battles of World War II involving the United States", "Conflicts in 1943", "September 1943 events", "South West Pacific theatre of World War II" ]
The Battle of Kaiapit was an action fought in 1943 between Australian and Japanese forces in New Guinea during the Markham and Ramu Valley – Finisterre Range campaign of World War II. Following the landings at Nadzab and at Lae, the Allies attempted to exploit their success with an advance into the upper Markham Valley, starting with Kaiapit. The Japanese intended to use Kaiapit to threaten the Allied position at Nadzab, and to create a diversion to allow the Japanese garrison at Lae time to escape. The Australian 2/6th Independent Company flew in to the Markham Valley from Port Moresby in 13 USAAF C-47 Dakotas, making a difficult landing on a rough airstrip. Unaware that a much larger Japanese force was also headed for Kaiapit, the company attacked the village on 19 September to secure the area so that it could be developed into an airfield. The company then held it against a strong counter-attack. During two days of fighting the Australians defeated a larger Japanese force while suffering relatively few losses. The Australian victory at Kaiapit enabled the Australian 7th Division to be flown in to the upper Markham Valley. It accomplished the 7th Division's primary mission, for the Japanese could no longer threaten Lae or Nadzab, where a major airbase was being developed. The victory also led to the capture of the entire Ramu Valley, which provided new forward fighter airstrips for the air war against the Japanese. ## Background ### Geography The Markham Valley is part of a flat, elongated depression varying from 8 to 32 km (5.0 to 19.9 mi) wide that cuts through the otherwise mountainous terrain of the interior of New Guinea, running from the mouth of the Markham River near the port of Lae, to that of the Ramu River 600 km (370 mi) away. The two rivers flow in opposite directions, separated by an invisible divide about 130 km (81 mi) from Lae. The area is flat and suitable for airstrips, although it is intercut by many tributaries of the two main rivers. Between the Ramu Valley and Madang lie the rugged and aptly named Finisterre Ranges. ### Military situation Following the landing at Nadzab, General Sir Thomas Blamey, the Allied Land Forces commander, intended to exploit his success with an advance into the upper Markham Valley, which would protect Nadzab from Japanese ground attack, and serve as a jumping-off point for an overland advance into the Ramu Valley to capture airfield sites there. On 16 September 1943—the same day that Lae fell—Lieutenant General Sir Edmund Herring, commander of I Corps, Major General George Alan Vasey, commander of the 7th Division, and Major General Ennis Whitehead, commander of the Advanced Echelon, Fifth Air Force, met at Whitehead's headquarters. Whitehead wanted fighter airstrips established in the Kaiapit area by 1 November 1943 in order to bring short-range fighters within range of the major Japanese base at Wewak. The 7th Division's mission was to prevent the Japanese at Madang from using the Markham and Ramu valleys to threaten Lae or Nadzab. Vasey and Herring considered both an overland operation to capture Dumpu, and an airborne operation using paratroops of the US Army's 503rd Parachute Infantry Regiment. Blamey did not agree with their idea of capturing Dumpu first, insisting that Kaiapit be taken beforehand. Until a road could be opened from Lae, the Kaiapit area could only be supplied by air and there were a limited number of transport aircraft. 
Even flying in an airborne engineer aviation battalion to improve the airstrip would have involved taking aircraft away from operations supporting the 7th Division at Nadzab. Moreover, Whitehead warned that he could not guarantee adequate air support for both Kaiapit and the upcoming Finschhafen operation at the same time. However, Herring calculated that the 7th Division had sufficient reserves at Nadzab to allow maintenance flights to be suspended for a week or so after the capture of Kaiapit. He planned to seize Kaiapit with an overland advance from Nadzab by independent companies, the Papuan Infantry Battalion, and the 7th Division's 21st Infantry Brigade. Fifth Air Force commander Lieutenant General George Kenney later recalled that Colonel David W. "Photo" Hutchison, who had been the air task force commander at Marilinan and had moved over to Nadzab to take charge of air activities there, was told to work out the problem with Vasey: "I didn't care how it was done but I wanted a good forward airdrome about a hundred miles further up the Markham Valley. Photo Hutchison and Vasey were a natural team. They both knew what I wanted and Vasey not only believed that the air force could perform miracles but that the 7th Division and the Fifth Air Force working together could do anything." The airstrip at Kaiapit was reconnoitred on 11 September 1943 by No. 4 Squadron RAAF, which reported that it was apparently in good condition, with the kunai grass recently cut. Lieutenant Everette E. Frazier, USAAF, selected a level, burned-off area near the Leron River, not far from Kaiapit, and landed in an L-4 Piper Cub. He determined that it would be possible to land C-47 Dakota aircraft there. On 16 September, Hutchison approved the site for Dakotas to land. ## Prelude The 2/6th Independent Company arrived in Port Moresby from Australia on 2 August 1943. The unit had fought in Papua in 1942 in the Battle of Buna–Gona and had since conducted intensive training in Queensland. The company was under the command of Captain Gordon King, who had been its second in command at Buna. King received a warning order on 12 September alerting him to prepare for the capture of Kaiapit, and had access to detailed aerial photographs of the area. An independent company at this time had a nominal strength of 20 officers and 275 other ranks. Larger than a conventional infantry company, it was organised into three platoons, each of three sections, each of which contained two subsections. It had considerable firepower. Each subsection had a Bren light machine gun. The gunner's two assistants carried rifles and extra 30-round Bren magazines. A sniper also carried a rifle, as did one man equipped with rifle grenades. The remaining four or five men carried Owen submachine guns. Each platoon also had a section of 2-inch mortars. The company was self-supporting, with its own engineer, signals, transport, and quartermaster sections. The signals section had a powerful but cumbersome Wireless Set No. 11 for communicating with the 7th Division. Powered by lead-acid batteries which were recharged with petrol generators, it required multiple signallers to carry and the noise was liable to attract the attention of the enemy. The platoons were equipped with the new Army No. 208 Wireless Sets. These were small, portable sets developed for the communication needs of units on the move in jungle warfare. However, the 2/6th Independent Company had not had time to work with them operationally. 
For three days in a row, the 2/6th Independent Company prepared to fly out from Port Moresby, only to be told that its flight had been cancelled due to bad weather. On 17 September 1943, 13 Dakotas of the US 374th Troop Carrier Group finally took off for Leron. King flew in the lead plane, which was piloted by Captain Frank C. Church, whom Kenney described as "one of Hutchison's 'hottest' troop carrier pilots". As it came in to land, King spotted patrols from the Papuan Infantry Battalion in the area. One of the Dakotas blew a tyre touching down on the rough airstrip; another tried to land on one wheel. Its undercarriage collapsed and it made a belly landing. The former was subsequently salvaged, but the latter was a total loss. King sent out patrols that soon located Captain J. A. Chalk's B Company, Papuan Infantry Battalion, which was operating in the area. That evening Chalk and King received airdropped messages from Vasey instructing them to occupy Kaiapit as soon as possible, and prepare a landing strip for troop-carrying aircraft. Vasey informed them that only small Japanese parties that had escaped from Lae were in the area, and their morale was very low. Vasey flew in to Leron on 18 September to meet with King. Vasey's orders were simple: "Go to Kaiapit quickly, clean up the Japs and inform division." As it happened, the Japanese commander, Major General Masutaro Nakai of the 20th Division, had ordered a sizeable force to move to Kaiapit under the command of Major Yonekura Tsuneo. Yonekura's force included the 9th and 10th Companies of the 78th Infantry Regiment, the 5th Company of the 80th Infantry Regiment, a heavy machine-gun section, a signals section and an engineer company—a total of about 500 troops. From Kaiapit it was to threaten the Allied position at Nadzab, creating a diversion to allow the Japanese garrison at Lae time to escape. The main body left Yokopi in the Finisterre Range on 6 September but was delayed by heavy rains that forced the troops to move, soaking wet, through muddy water for much of the way. Only the advance party of this force had reached Kaiapit by 18 September, by which time Lae had already fallen. Yonekura's main body, moving by night to avoid being sighted by Allied aircraft, was by this time no further from Kaiapit than King, but had two rivers to cross. Since both were heading for the same objective, a clash was inevitable. ## Battle King assembled his troops at Sangan, about 16 km (9.9 mi) south of Kaiapit, except for one section under Lieutenant E. F. Maxwell that had been sent ahead to scout the village. On the morning of 19 September, King set out for Kaiapit, leaving behind his quartermaster, transport and engineering sections, which would move the stores left behind at the Leron River first to Sangan and then to Kaiapit on the 20th. He took one section of Papuans with him, leaving Chalk and the rest of his men to escort the native carriers bringing up the stores. King's men walked for fifty minutes at a time and then rested for ten. The going was relatively easy insofar as the ground was fairly flat, but the 2 m (6.6 ft) high kunai grass trapped the heat and humidity and the men were heavily loaded with ammunition. The company reached Ragitumkiap, a village within striking distance of Kaiapit, at 14:45. While his men had a brief rest, King attempted to contact the large Army No. 11 Wireless Set he had left behind at Sangan—and from there Vasey back at Nadzab—with the new Army No. 208 Wireless Sets he had brought with him. 
Unfortunately, King found that their range was insufficient. He also heard shots being fired in the distance and guessed that Maxwell's section had been discovered. The 2/6th Independent Company formed up at 15:15 in kunai grass about 1,200 m (1,300 yd) from Kaiapit. As the company advanced it came under fire from foxholes on the edge of the village. A 2-inch mortar knocked out a light machine gun. The foxholes were outflanked and taken out with hand grenades and bayonets. The Japanese withdrew, leaving 30 dead behind. The Australians suffered two killed and seven wounded, including King, who was lightly wounded. The company established a defensive position for the night. While they were doing so, Lieutenant D. B. Stuart, the commander of one of the Papuan platoons, arrived. They had become concerned when radio contact had been lost and he had been sent to find out what was going on. King ordered him to bring the Papuans up from Sangan with extra ammunition and the No. 11 set. At around 17:30, a native appeared with a message for the Japanese commander. The paper was taken from him and he was shot when he tried to escape. Later, a Japanese patrol returned to Kaiapit, unaware that it was now in Australian hands. They were killed when they stumbled across a Bren gun position. Four more Japanese soldiers returned after midnight. One of them escaped. Yonekura and his men had reached Kaiapit after an exhausting night march. Yonekura was aware that the Australians had reached Kaiapit but his main concern was not to be caught in the open by Allied aircraft. Spotting Australian positions in the pre-dawn light, the Japanese column opened fire. A torrent of fire descended on the Australians, who replied sporadically, attempting to conserve their ammunition. Although he was running low on ammunition, King launched an immediate counter-attack on the Japanese, which took them by surprise. Lieutenant Derrick Watson's C Platoon set out at around 06:15 and advanced to the edge of Village 3, a distance of about 200 yd (180 m), before becoming pinned down by heavy Japanese fire. King then sent Captain Gordon Blainey's A Platoon around the right flank, towards the high ground on Mission Hill which overlooked the battlefield. It was secured by 07:30. In the meantime, some of the 2/6th Independent Company's signallers and headquarters personnel gathered together what ammunition they could, and delivered it to C Platoon at around 07:00. C Platoon then fixed bayonets and continued its advance. The commander of No. 9 Section of C Platoon, Lieutenant Bob Balderstone, was nicked by a bullet, apparently fired by one of his own men. He led his section in an advance across 70 yd (64 m) of open ground, and attacked three Japanese machine gun posts with hand grenades. He was later awarded the Military Cross for his "high courage and leadership". Lieutenant Reg Hallion led his No. 3 Section of A Platoon against the Japanese positions at the base of Mission Hill. He was killed in an attack on a machine gun post, but his section captured the position and killed twelve Japanese. By 10:00, the action was over. After the action, King's men counted 214 Japanese bodies, and estimated that another 50 or more lay dead in the tall grass. Yonekura was among the dead. The Australians suffered 14 killed and 23 wounded. Abandoned equipment included 19 machine guns, 150 rifles, 6 grenade throwers and 12 Japanese swords. 
## Aftermath ### Consolidation The 2/6th Independent Company had won a significant victory, but now had 23 wounded and was very low on ammunition. Frazier landed on the newly captured airstrip in his Piper Cub at 12:30. He rejected the airstrip as unsuitable for Dakotas, and oversaw the preparation of a new airstrip on better ground near Mission Hill. This was still a difficult approach, as aircraft had to land upwind while avoiding Mission Hill. Although it was not known if the airstrip would be ready, Hutchison flew in for a test landing there the next day, 21 September, at 15:30. He collected the wounded and flew them to Nadzab, and returned an hour later with a load of rations and ammunition. He also brought with him Brigadier Ivan Dougherty, the commander of the 21st Infantry Brigade, and his headquarters, who took charge of the area. Around 18:00, six more transports arrived. Vasey was concerned about the security of the Kaiapit area, as he believed that the Japanese were inclined to continue with a plan once it was in motion. Taking advantage of good flying weather on 22 September, 99 round trips were made between Nadzab and Kaiapit. Most of the 2/16th Infantry Battalion and some American engineers were flown in. The 2/14th Infantry Battalion and a battery of the 2/4th Field Regiment arrived on 25 September, and Brigadier Kenneth Eather's 25th Infantry Brigade began to arrive two days later, freeing Dougherty to advance on Dumpu. ### Base development Kaiapit did not become an important airbase. By the time engineering surveys of the area had been completed, as a direct consequence of the victory at Kaiapit, Dougherty's men had captured Gusap. There, the engineers found a well-drained area with soil conditions suitable for the construction of all-weather airstrips, an unobstructed air approach and a pleasant climate. It was therefore decided to limit construction at the swampy and malarial Kaiapit and concentrate on Gusap, where the US 871st, 872nd and 875th Airborne Aviation Engineer Battalions constructed ten airstrips and numerous facilities. Although some equipment was carried on the trek overland, most had to be flown in and nearly all of it was worn out by the time the work was completed. The first P-40 Kittyhawk fighter squadron began operating from Gusap in November and an all-weather fighter runway was completed in January 1944. The airstrip at Gusap "paid for itself many times over in the quantity of Japanese aircraft, equipment and personnel destroyed by Allied attack missions projected from it." ### War crimes Three natives were found at Kaiapit who had been tied with rope to the uprights of a native hut and had then been bayoneted. As a result of the Moscow Declaration, the Minister for External Affairs, Dr. H. V. Evatt, commissioned a report by William Webb on war crimes committed by the Japanese. Webb took depositions from three members of the 2/6th Independent Company about the Kaiapit incident which formed part of his report, which was submitted to the United Nations War Crimes Commission in 1944. ### Results The 2/6th Independent Company had defeated the vanguard of Nakai's force and stopped his advance down the Markham Valley. The Battle of Kaiapit accomplished Vasey's primary mission, for the Japanese could no longer threaten Nadzab. 
It opened the gate to the Ramu Valley for the 21st Infantry Brigade, provided new forward fighter airstrips for the air war against the Japanese, and validated the Australian Army's new training methods and the organisational emphasis on firepower. Vasey later told King that "We were lucky, we were very lucky." King countered that "if you're inferring that what we did was luck, I don't agree with you sir because I think we weren't lucky, we were just bloody good." Vasey replied that what he meant was that he, Vasey, was lucky. He confided to Herring that he felt that he had made a potentially disastrous mistake: "it is quite wrong to send out a small unit like the 2/6th Independent Company so far that they cannot be supported." The Japanese believed that they had been attacked by "an Australian force in unexpected strength". Japanese historian Tanaka Kengoro said that the mission of the Nakai Detachment achieved the objective of threatening Nadzab so as to draw Allied attention away from the troops escaping from Lae. However, Nakai failed in his intention to hold Kaiapit, while the Allies secured it as a base for future operations. Australian historian David Dexter said that the "leisurely Nakai was outwitted by the quick-thinking and aggressive Vasey." In the end, Vasey had moved faster, catching the Japanese off balance. The credit for getting to Kaiapit went first to the USAAF aircrews that managed to make a difficult landing on the rough airstrip at Leron. The 2/6th Independent Company proved to be an ideal unit for the mission, as it combined determined leadership with thorough training and effective firepower. For his part in the battle, King was awarded the Distinguished Service Order on 20 January 1944. He considered it a form of unit award, and later regretted not asking Whitehead for an American Distinguished Unit Citation, such as was awarded to D Company of the 6th Battalion, Royal Australian Regiment, for a similar action in the Battle of Long Tan in 1966.
4,685,717
Hoopoe starling
1,125,078,031
Extinct species of crested starling from Réunion Island
[ "Articles containing video clips", "Bird extinctions since 1500", "Birds described in 1783", "Birds of Réunion", "Endemic fauna of Réunion", "Extinct birds of Indian Ocean islands", "Sturnidae", "Taxa named by Pieter Boddaert" ]
The hoopoe starling (Fregilupus varius), also known as the Réunion starling or Bourbon crested starling, is a species of starling that lived on the Mascarene island of Réunion and became extinct in the 1850s. Its closest relatives were the also-extinct Rodrigues starling and Mauritius starling from nearby islands, and the three apparently originated in south-east Asia. The bird was first mentioned during the 17th century and was long thought to be related to the hoopoe, from which its name is derived. Various other affinities were later proposed, but it was confirmed as a starling by a 2008 DNA study. The hoopoe starling was 30 cm (12 in) in length. Its plumage was primarily white and grey, with its back, wings and tail a darker brown and grey. It had a light, mobile crest, which curled forwards. The bird is thought to have been sexually dimorphic, with males larger and having more curved beaks. The juveniles were more brown than the adults. Little is known about hoopoe starling behaviour. Reportedly living in large flocks, it inhabited humid areas and marshes. The hoopoe starling was omnivorous, feeding on plant matter and insects. Its pelvis was robust, its feet and claws large, and its jaws strong, indicating that it foraged near the ground. The birds were hunted by settlers on Réunion, who also kept them as pets. Nineteen specimens exist in museums around the world. The hoopoe starling was reported to be in decline by the early 19th century and was probably extinct before the 1860s. Several factors have been proposed, including competition and predation by introduced species, disease, deforestation, and persecution by humans, who hunted it for food and as an alleged crop pest. ## Taxonomy The first account thought to mention the hoopoe starling is a 1658 list of birds of Madagascar written by French governor Étienne de Flacourt. He mentioned a black-and-grey "tivouch" or hoopoe; later authors have wondered whether this referred to the hoopoe starling or the Madagascan subspecies of hoopoe (Upupa epops marginata), which resembles the Eurasian subspecies. The hoopoe starling was first noted on the Mascarene island of Réunion (then called "Bourbon") by Père Vachet in 1669, and first described in detail by the French traveller Sieur Dubois in 1674: > Hoopoes or 'Calandres', having a white tuft on the head, the rest of the plumage white and grey, the bill and the feet like a bird of prey; they are a little larger than the young pigeons. This is another good game [i.e., to eat] when it is fat. Early settlers on Réunion referred to the bird as "huppe", because of the similarity of its crest and curved bill to those of the hoopoe. Little was recorded about the hoopoe starling during the next 100 years, but specimens began to be brought to Europe during the 18th century. The species was first scientifically described by Philippe Guéneau de Montbeillard in the 1779 edition of Comte de Buffon's Histoire Naturelle, and received its scientific name from Dutch naturalist Pieter Boddaert for the book's 1783 edition. Boddaert named the bird Upupa varia; its genus name is that of the hoopoe, and its specific name means "variegated", describing its black-and-white colour. Boddaert provided Linnean binomial names for plates in Buffon's works, so the accompanying 1770s plate of the hoopoe starling by French engraver François-Nicolas Martinet is considered the holotype or type illustration. 
Though the plate may have been based on a specimen in the National Museum of Natural History in Paris, this is impossible to determine today; the Paris museum originally had five hoopoe starling skins, some of which only arrived during the 19th century. The possibly female specimen MNHN 2000-756, one of the most-illustrated skins, has an artificially trimmed crest resulting in an unnaturally semi-circular shape, unlike its appearance in life; the type illustration has a similarly shaped crest. De Flacourt's "tivouch" led early writers to believe that variants of the bird were found on Madagascar and the Cape of Africa; they were thought to be hoopoes of the genus Upupa, which received names such as Upupa capensis and Upupa madagascariensis. Some authors also allied the bird with groups such as birds-of-paradise, bee-eaters, cowbirds, Icteridae, and choughs, resulting in its reassignment to other genera with new names, such as Coracia cristata and Pastor upupa. In 1831, French naturalist René-Primevère Lesson placed the bird in its own monotypic genus, Fregilupus, a composite of Upupa and Fregilus, the latter a defunct genus name of the chough. French naturalist Auguste Vinson established in 1868 that the bird was restricted to the island of Réunion and proposed a new binomial, Fregilupus borbonicus, referring to the former name of the island. German ornithologist Hermann Schlegel first proposed in 1857 that the species belonged to the starling family (Sturnidae), reclassifying it as part of the genus Sturnus, as S. capensis. This reclassification was followed by other authors; Swedish zoologist Carl Jakob Sundevall proposed the new genus name Lophopsarus ("crested starling") in 1872, yet Fregilupus varius—the oldest name—remains the bird's binomial, and all other scientific names are synonyms. In 1874, after a detailed analysis of the only known skeleton (held at the Cambridge University Museum of Zoology), British zoologist James Murie agreed that it was a starling. English zoologist Richard Bowdler Sharpe said in 1890 that the hoopoe starling was similar to the starling genus Basilornis, but did not note any similarities other than their crests. In 1941, American ornithologist Malcolm R. Miller found the bird's musculature similar to that of the common starling (Sturnus vulgaris) after he dissected a specimen preserved in spirits at the Cambridge Museum, but noted that the tissue was very degraded and the similarity did not necessarily confirm a relationship with starlings. In 1957, American ornithologist Andrew John Berger cast doubt on the bird's affinity with starlings because of subtle anatomical differences, after dissecting a spirit specimen at the American Museum of Natural History. Some authors proposed a relationship with vangas (Vangidae), but Japanese ornithologist Hiroyuki Morioka rejected this in 1996, after a comparative study of skulls. In 1875, British ornithologist Alfred Newton attempted to identify a black-and-white bird mentioned in an 18th-century manuscript describing a marooned sailor's stay on the Mascarene island of Rodrigues in 1726–27, hypothesising that it was related to the hoopoe starling. Subfossil bones later found on Rodrigues were correlated with the bird in the manuscript; in 1879, these bones became the basis for a new species, Necropsar rodericanus (the Rodrigues starling), named by British ornithologists Albert Günther and Edward Newton. 
Günther and Newton found the Rodrigues bird closely related to the hoopoe starling, but kept it in a separate genus owing to "present ornithological practice". American ornithologist James Greenway suggested in 1967 that the Rodrigues starling belonged in the same genus as the hoopoe starling, owing to their close similarity. Subfossils found in 1974 confirmed that the Rodrigues bird was a distinct genus of starling, its stouter bill being the primary justification for generic separation from Fregilupus. In 2014, British palaeontologist Julian P. Hume described a new extinct species, the Mauritius starling (Cryptopsar ischyrhynchus), based on subfossils from Mauritius, which was closer to the Rodrigues starling than to the hoopoe starling in its skull, sternal, and humeral features. ### Evolution In 1943, American ornithologist Dean Amadon suggested that Sturnus-like species could have arrived in Africa and given rise to the wattled starling (Creatophora cinerea) and the Mascarene starlings. According to Amadon, the Rodrigues and hoopoe starlings were related to Asiatic starlings—such as some Sturnus species—rather than to the glossy starlings (Lamprotornis) of Africa and the Madagascar starling (Saroglossa aurata), based on their colouration. A 2008 study by Italian zoologist Dario Zuccon and colleagues analysing the DNA of a variety of starlings confirmed that the hoopoe starling belonged in a clade of Southeast Asian starlings as an isolated lineage, with no close relatives. An earlier attempt by another team could not extract viable hoopoe starling DNA. Zuccon and colleagues suggested that ancestors of the hoopoe starling reached Réunion from Southeast Asia by using island chains as "stepping stones" across the Indian Ocean, a scenario also suggested for other Mascarene birds. Its lineage diverged from that of other starlings four million years ago (about two million years before Réunion emerged from the sea), so it may have first evolved on landmasses now partially submerged. Extant relatives, such as the Bali myna (Leucopsar rothschildi) and the white-headed starling (Sturnia erythropygia), have similarities in colouration and other features with the extinct Mascarene species. According to Hume, since the Rodrigues and Mauritius starlings seem morphologically closer to each other than to the hoopoe starling—which appears closer to Southeast Asian starlings—there may have been two separate migrations of starlings from Asia to the Mascarenes, with the hoopoe starling the latest arrival. Except for Madagascar, the Mascarenes were the only islands in the southwestern Indian Ocean with native starlings, probably because of their isolation, varied topography, and vegetation. ## Description The hoopoe starling was 30 cm (12 in) in length. The bird's culmen was 41 mm (1 5⁄8 in) long, its wing 147 mm (5 13⁄16 in), its tail 114 mm (4 1⁄2 in), and its tarsus about 39 mm (1 9⁄16 in) long. It was the largest of the three Mascarene starlings. A presumed adult male (NHMUK 1889.5.30.15) in the Paris museum has a light ash-grey head and back of the neck (lighter on the hind-neck), with a long crest the same colour with white shafts. Its back and tail are ash-brown, its wings darker with a greyish wash, and its uppertail covert feathers and rump have a rufous wash. Its primary coverts are white with brown tips; the bases (instead of the tips) are brown in other specimens. 
The superciliary stripe, lore, and most of the specimen's underside are white, with a pale rufous wash on the flanks and undertail coverts. The extent of light rufous on the underside varies by specimen. The beak and legs are lemon-yellow, with yellow-brown claws. It has a bare, triangular area of skin around the eye, which may have been yellow in life. Though the species' iris was described as bluish-brown, it has been depicted as brown, yellow, or orange. There has been confusion about which characteristics were sexually dimorphic in the species. Only three specimens were sexed (all males), with age and individual variation not considered. The male is thought to have been larger, with a longer and more curved beak. In 1911, Réunion resident Eugène Jacob de Cordemoy recalled his observations of the bird about 50 years before, suggesting that only males had a white crest, but this is thought to be incorrect. A presumed female (MNHN 2000-756) in the Paris museum appears to have a smaller crest, a smaller and less-curved beak, and smaller primary coverts. A juvenile specimen (MHNT O2650) has a smaller crest and primary coverts, with a brown wash instead of ash grey on the crest, lore, and superciliary stripe, and a light-brown (instead of ash-brown) back. The juveniles of some Southeast Asian starlings are also browner than adults. Vinson, who observed live hoopoe starlings when he lived on Réunion, described the crest as consisting of flexible, disunited, forward-curling barbs of various lengths, highest in the centre, which could be erected at will. He compared the bird's crest to that of a cockatoo and to the tail feathers of a bird-of-paradise. Most mounted specimens have an erect crest, indicating its natural position. The only illustration of the hoopoe starling now thought to have been made from life was drawn by French artist Paul Philippe Sauguin de Jossigny during the early 1770s. Jossigny instructed engravers under the drawing that for accuracy, they should depict the crest angled forward from the head (not straight up). Hume believes that Martinet did so when he made the type illustration, and that it was derivative of Jossigny's image rather than a life drawing. Jossigny also made the only known life drawing of the now-extinct Newton's parakeet (Psittacula exsul) after a specimen sent to him from Rodrigues to Mauritius, so this is perhaps also where he drew the hoopoe starling. Murie suggested that only the illustrations by Martinet and Jacques Barraband were "original", since he was unaware of Jossigny's drawing, but noted a crudeness and stiffness in them which made neither appear lifelike. The hoopoe starling can be distinguished skeletally from other Mascarene starlings by its cranium being rounded when seen from above, bulbous towards the back. The frontal bone was narrow, and the foramen magnum (the opening for the spinal cord) was large. The rostrum was long and narrow, with narrow, oval narial openings (bony nostrils). The upper beak was narrow and strongly decurved, and the lower beak was narrow and sharply pointed. The mandible had a distinct retroarticular process (which connected with the skull), and there was a single large mandibular fenestra (opening). The sternum (breast-bone) was short and wide, particularly at the hind end. The coracoid was relatively reduced in length, and its shaft was robust. The humerus (upper arm bone) was robust with a straight shaft, with the upper and lower ends flattened from front to back. The radius of the lower arm was robust. 
The pelvis was extremely robust. The femur (thigh bone) was robust, especially at the upper and lower ends, and the shaft was straight. The tibiotarsus (lower leg bone) was long and robust, with a broad and expanded shaft, especially near the lower end. The tarsometatarsus (ankle bone) was long and robust, with a relatively straight shaft. ## Behaviour and ecology Little is known about the behaviour of the hoopoe starling. According to François Levaillant's 1807 account of the bird (which included observations from a Réunion resident), it was abundant, with large flocks inhabiting humid areas and marshes. In 1831, Lesson, without explanation, described its habits as similar to those of a crow. Its song was described as a "bright and cheerful whistle" and "clear notes", indicating a similarity to the songs of other starlings. Vinson's 1877 account relates his experiences with the bird more than 50 years earlier: > Now these daughters of the wood, when they were numerous, flew in flocks and went thus in the rain forests, while deviating little from one another, as good companions or as nymphs taking a bath: they lived on berries, seeds and insects, and the créoles, disgusted by the latter fact, held them for an impure game. Sometimes, coming from the woods to the littoral [coast], always flying and leaping from tree to tree, branch to branch, they often alighted in swarms on coffee trees in bloom, and there was in the past the testimony of an inhabitant of the Island of Bourbon, said the naturalist Levaillant, that they caused big damage in coffee trees by making the flowers fall prematurely. But it is not the white flowers of coffee that the hoopoes were searching for and thus behaving so, it was for the caterpillars and insects that devoured them; and in this they made an important service to the silviculture of the Island of Bourbon and the rich coffee plantations, with which this land was then covered, the golden age of the country! Like most other starlings, the hoopoe starling was omnivorous, feeding on fruits, seeds, and insects. Its tongue—long, slender, sharp, and frayed—may have been able to move rapidly, helpful when feeding on fruit, nectar, pollen, and invertebrates. Its pelvic elements were robust and its feet and claws large, indicating that it foraged near the ground. Its jaws were strong; Morioka compared its skull to that of the hoopoe, and it may have foraged in a similar way, probing and opening holes in substrate by inserting and opening its beak. De Montbeillard was informed of the stomach contents of a dissected specimen, consisting of seeds and the berries of "Pseudobuxus" (possibly Eugenia buxifolia, a bush with sweet, orange berries). He noted that the bird weighed 110 grams (4 oz), and was fatter around June and July. Several accounts suggest that the hoopoe starling migrated on Réunion, spending six months in the lowlands and six months in the mountains. Food may have been easier to obtain in the lowlands during winter, with the birds breeding in the mountain forests during summer. The hoopoe starling probably nested in tree cavities. The Belgian biologist Michaël P. J. Nicolaï and colleagues pointed out in 2020 that the hoopoe starling had black skin combined with light plumage and lived in a zone of high irradiation; they suggested that such black skin may have evolved for protection against ultraviolet irradiation in this and other black-skinned birds. Many other endemic species on Réunion became extinct after the arrival of humans and the resulting disruption of the island's ecosystem. 
The hoopoe starling lived with other now-extinct birds, such as the Réunion ibis, the Mascarene parrot, the Réunion parakeet, the Réunion swamphen, the Réunion scops owl, the Réunion night heron, and the Réunion pink pigeon. Extinct Réunion reptiles include the Réunion giant tortoise and an undescribed Leiolopisma skink. The small Mauritian flying fox and the snail Tropidophora carinata lived on Réunion and Mauritius before vanishing from both islands. ## Relationship with humans The hoopoe starling was described as tame and easily hunted. In 1704, French pilot engineer Jean Feuilley explained how the birds were caught by humans and cats: > Hoopoes and merles [Hypsipetes borbonicus] are the same fatness as those in France, and are of a marvellous taste, which are fat at the same time as parrots, living on the same foods. In order to catch them, hunting was done with staffs or long thin poles from six to seven feet in length, though this hunt is infrequently seen. The marrons [escaped] cats destroy many. These birds allow themselves to be approached very closely, so the cats take them without leaving their places. The hoopoe starling was kept as a pet on Réunion and Mauritius, and although the bird was becoming scarcer, some specimens were obtained during the early 19th century. It is unknown whether any live specimens were ever transported from the Mascarenes. Cordemoy recalled that captive birds could be fed a wide variety of food, such as bananas, potatoes, and chayote, and wild birds would never enter inhabited areas. Many individuals survived on Mauritius after escaping there, and it was thought that a feral population could be established. The Mauritian population lasted less than a decade; the final specimen on the island (the last definite record of a live specimen anywhere) was taken in 1836. Specimens could still be collected on Réunion during the 1830s and, possibly, the early 1840s. There are nineteen surviving hoopoe starling specimens in museums around the world (including one skeleton and two specimens preserved in spirit), with two in the Paris museum and four in Troyes. Additional skins in Turin, Livorno, and Caen were destroyed during World War II, and four skins have disappeared from Réunion and Mauritius (which now have one each). Specimens were sent to Europe beginning in the second half of the 18th century, with most collected during the first half of the 19th century. It is unclear when each specimen was acquired, and specimens were frequently moved between collections. It is also unclear which specimens were the basis for which descriptions and illustrations. The only known subfossil hoopoe starling specimen is a femur, discovered in 1993 in a Réunion grotto. ### Extinction Several causes for the decline and sudden extinction of the hoopoe starling have been proposed, all connected to the activities of humans on Réunion, alongside whom it had survived for two centuries. An oft-repeated suggestion is that the introduction of the common myna (Acridotheres tristis) led to competition between these two starling species. The myna was introduced to Réunion in 1759 to combat locusts, and became a pest itself, but the hoopoe starling coexisted with the myna for nearly 100 years and they may not have shared habitat. The black rat (Rattus rattus) arrived on Réunion in the 1670s, and the brown rat (Rattus norvegicus) in 1735, multiplying rapidly and threatening agriculture and native species. 
Like the hoopoe starling, the rats inhabited tree cavities and would have preyed on eggs, juveniles, and nesting birds. During the mid-19th century the Réunion slit-eared skink (Gongylomorphus borbonicus) became extinct because of predation by the introduced wolf snake (Lycodon aulicum), which may have deprived the bird of a significant food source. Hoopoe starlings may have contracted diseases from introduced birds, a factor known to have triggered declines and extinctions in endemic Hawaiian birds. According to British ecologist Anthony S. Cheke, this was the chief cause of the hoopoe starling's extinction; the species had survived for generations despite other threats. Beginning in the 1830s, Réunion was deforested for plantations. Former slaves joined white peasants in cultivating pristine areas after slavery was abolished in 1848, and the hoopoe starling was pushed to the edges of its former habitat. According to Hume, over-hunting was the final blow to the species; with forests more accessible, hunting by the rapidly growing human population may have driven the remaining birds to extinction. In 1821, a law mandating the extermination of grain-damaging birds was implemented, and the hoopoe starling had a reputation for damaging crops. During the 1860s, various writers noted that the bird had almost disappeared, but it was probably already extinct by this time; in 1877, Vinson lamented that the last individuals might have been killed by recent forest fires. No attempts to preserve the species in captivity seem to have been made. The hoopoe starling survived longer than many other extinct Mascarene species, and was the last of the Mascarene starling species to face extinction. The Rodrigues and Mauritius species probably disappeared with the arrival of rats; at least five species of Aplonis starlings have disappeared from the Pacific Islands, with rats contributing to their extinction. The hoopoe starling may have survived longer because of Réunion's rugged topography and highlands, where it spent much of the year.
8,408,748
Greek battleship Salamis
1,140,207,739
Cancelled dreadnought battleship of the Greek Navy
[ "1914 ships", "Battleships of the Hellenic Navy", "Proposed ships" ]
Salamis (Greek: Σαλαμίς) was a partially constructed capital ship, referred to as either a dreadnought battleship or battlecruiser, that was ordered for the Greek Navy from the AG Vulcan shipyard in Hamburg, Germany, in 1912. She was ordered as part of a Greek naval rearmament program meant to modernize the fleet, in response to Ottoman naval expansion after the Greco-Turkish War of 1897. Salamis and several other battleships—none of which were delivered to either navy—represented the culmination of a naval arms race between the two countries that had significant effects on the First Balkan War and World War I. The design for Salamis was revised several times during the construction process, in part due to Ottoman acquisitions. Early drafts of the vessel called for a displacement of 13,500 long tons (13,700 t), with an armament of six 14-inch (356 mm) guns in three twin-gun turrets. The final version of the design was significantly larger, at 19,500 long tons (19,800 t), with an armament of eight 14-inch guns in four turrets. The ship was to have had a top speed of 23 knots (43 km/h; 26 mph), higher than that of other battleships of the period. Work began on the keel on 23 July 1913, and the hull was launched on 11 November 1914. Construction stopped in December 1914, following the outbreak of World War I in July. The German navy employed the unfinished ship as a floating barracks in Kiel. The armament for this ship was ordered from Bethlehem Steel in the United States and could not be delivered due to the British blockade of Germany. Bethlehem sold the guns to Britain instead and they were used to arm the four Abercrombie-class monitors. The hull of the ship remained intact after the war and became the subject of a protracted legal dispute. Salamis was finally awarded to the builders and the hull was scrapped in 1932. ## Development Following the Greco-Turkish War of 1897, during which the Ottoman fleet had proved incapable of challenging Greece's navy for control of the Aegean Sea, the Ottomans began a naval expansion program, initially rebuilding several old ironclad warships into more modern vessels. In response, the Greek government decided in 1905 to rebuild its fleet, which at that time was centered on the three Hydra-class ironclads of 1880s vintage. Beginning in 1908, the Greek Navy sought design proposals from foreign shipyards. Tenders from Vickers, of Britain, for small, 8,000-long-ton (8,100 t) battleships were not taken up. In 1911, a constitutional change in Greece allowed the government to hire naval experts from other countries, which led to the invitation of a British naval mission to advise the navy on its rearmament program. The British officers recommended a program of two 12,000 long tons (12,000 t) battleships and a large armored cruiser; offers from Vickers and Armstrong-Whitworth were submitted for the proposed battleships. The Vickers design was for a smaller ship armed with nine 10-inch (254 mm) guns, while Armstrong-Whitworth proposed a larger ship armed with 14-inch (356 mm) guns. The Greek government did not pursue these proposals. Later in the year, Vickers issued several proposals for smaller vessels like those it had designed in 1908. The initial step in the Greek rearmament program was completed with the purchase of the Italian-built armored cruiser Georgios Averof in October 1909. The Ottomans, in turn, purchased two German pre-dreadnought battleships, Kurfürst Friedrich Wilhelm and Weissenburg, amplifying the naval arms race between the two countries. 
The Greek Navy attempted to buy two older French battleships, and when that purchase failed to materialize, they tried unsuccessfully to buy a pair of British battleships. They then tried to buy ships from the United States, but were rebuffed due to concerns that such a sale would alienate the Ottomans, with whom the Americans had significant industrial and commercial interests. The Ottomans ordered the dreadnought Reşadiye in August 1911, threatening Greek control of the Aegean. The Greeks were faced with a choice of conceding the arms race, or ordering new capital ships of their own. Rear Admiral Lionel Grant Tufnell, the head of a British naval mission to Greece, advocated purchasing another armored cruiser like Georgios Averof, along with several smaller vessels, and allocating funds to modernizing the Greek naval base at Salamis; this proposal was supported by Prime Minister Eleftherios Venizelos, who sought to control naval spending in the tight Greek budget projected for 1912. The plan came to nothing, as the Greek government waited for the arrival of British advisers for the Salamis project. In early 1912, the Greek Navy convened a committee that would be in charge of acquiring a new capital ship to counter Reşadiye, initially conceived as a battlecruiser. The new ship would be limited to a displacement of 13,000 long tons (13,000 t), since that was the largest vessel the floating dry dock in Piraeus could accommodate. The program was finalized in March, and along with the new battlecruiser, the Greeks invited tenders for destroyers, torpedo boats, submarines, and a depot ship to support them. Ten British, four French, three German, three American, one Austrian, and two Italian shipyards all submitted proposals for these contracts, with Britain's Vickers and Armstrong-Whitworth submitting the same designs proposed in 1911. Tufnell was part of the committee overseeing the process, but found that the Greeks strongly opposed the British designs. Vickers eventually withdrew from the competition, and the cost of Armstrong's proposal was higher than other proposals. Still, the British had hopes of obtaining the contract due to the relationship between the Greek and British navies, reflected by the number of British officers that had been seconded to the Greek Navy in recent years. French yards, on the other hand, complained that the British were unfairly benefiting from the presence of their naval mission. During the competition, the Greek Navy decided that Vickers' hull design was best, but American guns, ammunition, and armor were superior to any of the British designs. In the end, neither got the contracts, as negotiations between Venizelos and the German Minister to Greece eventually secured the contracts for Germany. In June 1912, the Greek Navy selected tenders from Germany's AG Vulcan for two destroyers and six torpedo boats, to be completed in just three to four months. This exceptionally short timeframe was accomplished through the help of the German Navy, which allowed the Greeks to take over German ships then being constructed. The contract price was evidently low, as one British firm complained that they could not understand how Vulcan would make a profit. Then, one month later, the Greeks selected Vulcan again for the construction of their battlecruiser, with its armor and armament coming from Bethlehem Steel in the United States. 
British firms were furious, again alleging that it would be impossible for Vulcan to make a profit on the contract, and surmising that the German government was subsidizing the purchase to get a foothold in the shipbuilding market. The Greeks, for their part, countered that the British manufacturers were colluding to keep armor plate prices high, and so they were able to significantly decrease their costs by ordering the ship's armor in the United States. ## Design The initial design called for a ship 458 ft (140 m) long with a beam of 72 ft (22 m), a draft of 24 ft (7.3 m), and a displacement of 13,500 long tons (13,700 t). The ship was designed with 2-shaft turbines rated at 26,000 shp for a top speed of 21 knots (39 km/h; 24 mph). The armament was to be six 14-inch (356 mm) guns in twin turrets all on the centerline with one amidships, eight 6 in (152 mm), eight 3 in (76 mm), and four 37 mm (1.5 in) guns, and two 45 cm (18 in) torpedo tubes. The design was revised several times. In mid-1912, as tensions were developing that led to the First Balkan War, the Greek Navy began serious efforts to increase its strength. In August, they were seeking only minor alterations in the ship design, but early naval operations during the war convinced the naval command of the advantages a larger ship would provide. Tufnell suggested a different reason for the design changes, accusing the Germans of offering a cheap but unseaworthy design, obtaining the contract, then making a push for a more expensive but also more practical design. Hovering over all of these was the possibility that the dreadnoughts of the South American dreadnought race could be put up for sale, a prospect both countries pursued. Two, from Brazil, were already completed, and a third was under construction in Britain. Another two, for Argentina, were being built in the United States. Naval historian Paul G. Halpern wrote of this situation that "the sudden acquisition by a single power of all or even some of these ships might have been enough to tip a delicate balance of power such as that which prevailed in the Mediterranean." Both the Greeks and Ottomans were reportedly interested in the Argentine ships, and Venizelos attempted to buy one of the Rivadavia-class battleships then being built in the United States for the Argentine Navy as an alternative to redesigning Salamis, in the process delaying her completion. When the Argentine government refused to sell the ship, he agreed to redesigning Salamis, and a committee that included Greek and British naval officers was created to revise the design. The committee favored a 16,500 long tons (16,800 t) design, but Hubert Searle Cardale, the only member of the British mission drawn from the Royal Navy's active list, proposed an increase to 19,500 long tons (19,800 t), since the increase would allow for a substantially more powerful vessel. Venizelos initially approved an increase in displacement to 16,500 LT, but he opposed any further increases. The Foreign Minister, Lambros Koromilas, and the Speaker of the Parliament, Nikolaos Stratos, conspired to have the larger proposal adopted while Venizelos was attending the peace conference that resulted in the Treaty of London. Koromilas and Stratos misrepresented Venizelos' position to the rest of the cabinet and secured their approval for the new contract. Koromilas' and Stratos' deception proved effective, and the enlarged proposal was adopted on 23 December 1912. 
The most significant changes were a 50% increase in displacement, the addition of a fourth twin-gun turret, and the arrangement of the main battery in superfiring pairs. The ship was to be delivered to the Greek Navy by March 1915, at a cost of £1,693,000. M. K. Barnett, writing for Scientific American, remarked that the ship would "not mark any particular advance in warship design, being, rather, an effort to combine the greatest defensive and offensive qualities with the least cost." The Journal of the American Society of Naval Engineers, however, believed that the ship was designed for speed and firepower at the expense of heavy defensive armor. Upon his return, Venizelos attempted to have the new contract cancelled, but Vulcan refused, noting that "Prime Ministers rise and fall from power and the influence of Venizelos will not be enduring." The order for Salamis, which has been referred to alternatively as a battleship or battlecruiser, made Greece the fourteenth and final country to order a dreadnought-type ship. The modifications to the design came over the objections of the British, including Prince Louis of Battenberg and the new head of the British naval mission in Greece, Rear Admiral Mark Kerr. Battenberg wrote that a Greek purchase of modern capital ships would be "undesirable from every point of view", as the country's finances could not support them and the increasing power of torpedoes was making smaller ships more dangerous. Along much the same lines, Kerr suggested to Venizelos that a fleet built around smaller warships would be better suited for the constricted Aegean Sea. Strongly opposing these views were the Greek Navy and King Constantine I of Greece, both of whom desired a regular battle fleet, as they believed that it was the only way of assuring Greek naval superiority over the Ottomans. ### General characteristics Salamis was 569 feet 11 inches (173.71 m) long at the waterline with a full flush deck, and had a beam of 81 ft (25 m) and a draft of 25 ft (7.6 m). The ship was designed to displace 19,500 long tons (19,800 t). She would have been fitted with two tripod masts. Had the battleship been completed, she was to have been powered by three AEG steam turbines, each of which drove a propeller shaft. The turbines were supplied with steam by eighteen coal-fired Yarrow boilers. The boilers would have been ducted into two widely spaced funnels. This would have provided Salamis with 40,000 shaft horsepower (30,000 kW) and a top speed of 23 knots (43 km/h; 26 mph). This speed was significantly faster than the top speed of most contemporary battleships, 21 knots (39 km/h; 24 mph), which contributed to her classification as a battlecruiser. A large crane was to be installed between the funnels to handle the ship's boats. ### Armament The primary armament of the ship was to be eight 14 in (356 mm)/45 caliber guns mounted in four twin-gun turrets, all of which were built by Bethlehem Steel. Two turrets were to be mounted in a superfiring arrangement forward of the main superstructure, with the other two mounted similarly aft of the funnels. These guns were capable of firing 1,400 lb (640 kg) armor-piercing or high-explosive shells. The shells were fired at a muzzle velocity of 2,570 feet per second (780 m/s). The guns proved to be highly resistant to wear in British service, though they suffered from significant barrel droop after around 250 shells had been fired through them, which contributed to poor accuracy after extended use. 
The turrets that housed the guns allowed for depression to −5° and elevation to 15°, and they were electrically operated. There is some disagreement over the nature of the ship's intended secondary battery. According to Gardiner and Gray, the battery was to consist of twelve 6 in (152 mm) /50 caliber guns, also manufactured by Bethlehem, mounted in casemates amidships, six on either side. These guns fired 105-pound (48 kg) projectiles at a muzzle velocity of 2,800 f/s (853 m/s). According to Norman Friedman, these twelve guns were sold to Britain after the war broke out, where they were used to fortify the Grand Fleet's main base at Scapa Flow. But Antony Preston disagrees, stating that the guns were to have been 5.5 in (140 mm) guns ordered from the Coventry Ordnance Works. Salamis's armament was rounded out by twelve 75 mm (3.0 in) quick-firing guns, also mounted in casemates, and five 50 cm (20 in) submerged torpedo tubes. ### Armor Salamis had an armored belt that was 9.875 in (250.8 mm) thick in the central section of the ship, where it protected critical areas, such as the ammunition magazines and machinery spaces. On either end of the ship, past the main battery gun turrets, the belt was decreased to 3.875 in (98.4 mm) thick; the height of the belt was also decreased in these areas. The main armored deck was 2.875 in (73.0 mm) in the central portion of the ship, and as with the belt armor, in less important areas the thickness was decreased to 1.5 in (38 mm). The main battery gun turrets were protected by 9.875-inch armor plate on the sides and face, and the barbettes in which they were placed were protected by the same thickness of armor. The conning tower was lightly armored, with only 1.25 in (32 mm) worth of protection. ## Construction and cancellation The keel for Salamis was laid down on 23 July 1913. The naval balance of power in the Aegean, however, was soon to change. The Brazilian Navy put their third dreadnought (Rio de Janeiro) up for sale in October 1913, and they found no shortage of countries interested in acquiring it, including Russia, Italy, Greece, and the Ottoman Empire. The British and French were also highly involved, given their interests in the Mediterranean; in November, the French agreed to back Greece with a large loan as a way of preventing Italy from acquiring the ship. Moreover, the Greek consul general in Britain claimed that the Bank of England was prepared to advance all the money needed to purchase the ship as soon as a French loan was guaranteed. Arrangements for all this took quite some time, however, and at the end of December, the Ottomans were able to secure Rio de Janeiro with a private loan from a French bank. The purchase caused a panic in Greece, as the balance of naval power would shift to the Ottomans in the near future. The Greek government pressed AG Vulcan to finish Salamis as quickly as possible, but she could not be completed before mid-1915, by which time both of the new Ottoman battleships would have been delivered. The Greeks ordered two dreadnoughts from French yards, slightly modified versions of the French Bretagne-class battleship; the first, Vasilefs Konstantinos, was laid down on 12 June 1914. As a stopgap measure, they purchased a pair of pre-dreadnought battleships from the United States: Mississippi and Idaho, which became Kilkis and Lemnos, respectively. 
Kerr criticized this purchase as "penny-wise and pound-foolish" for ships that were "entirely useless for war", carrying a price that could have paid for a brand-new dreadnought. The outbreak of World War I in July 1914 drastically altered the situation; the British Government declared a naval blockade of Germany in August, after Britain entered the war. The blockade meant that the guns could not be delivered, but the ship was nevertheless launched on 11 November 1914. With no possibility of arming the ship, work was halted on 31 December 1914. In addition, manpower shortages created by the war, along with the redirection of steel production to the needs of the Army, meant that less critical projects could not be completed, especially since other warships were nearing completion and could be finished much more quickly. By this time Greece had paid AG Vulcan only £450,000. Bethlehem refused to send the main battery guns to Greece. The 14-inch guns were instead sold to the British, who used them to arm the four Abercrombie-class monitors. The wartime activities of the ship are unclear. According to a postwar report written for the Proceedings of the United States Naval Institute, the incomplete vessel was towed to Kiel, where she was used as a barracks ship. The modern naval historian René Greger states that the incomplete hull never left Hamburg. Some contemporary observers believed the ship had been completed for service with the German Navy, and British Admiral John Jellicoe, the commander of the Grand Fleet, received intelligence that the ship might have been in service by 1916. Other observers, such as Barnett, pointed to the difficulty the German Navy would have had in rearming the ship with German guns, given the fact that Germany possessed no designs for naval guns of that caliber or mountings suitable for use aboard Salamis. He regarded the claim that she had been put into service as "doubtful". Barnett's assessment was correct; a substantial rebuilding of the ship's barbette structures would have been required to accommodate German guns, and since guns suitable for naval use were not easily available owing to the needs of the German Army, work was directed toward German vessels under construction like the battlecruiser Hindenburg. The British realized the rumor was false when the ship did not appear at the Battle of Jutland on 31 May – 1 June 1916. Regardless of the ship's wartime disposition, however, Proceedings noted in 1920 that it was "improbable" that construction on the ship would resume. Indeed, the Greek navy refused to accept the incomplete hull, and as a result AG Vulcan sued the Greek government in 1923. A lengthy arbitration ensued. The Greek navy argued that the ship, which was designed in 1912, was now obsolete and that under the Treaty of Versailles it could not be armed by the German shipyard anyway. The Greeks requested that Vulcan return advance payments made before work had stopped. The dispute went before the Greco-German Mixed Arbitral Tribunal (established under Article 304 of the Treaty of Versailles) and dragged on throughout the 1920s. In 1924, a Dutch admiral was appointed by the tribunal to evaluate the Greek complaints, and he ultimately sided with Vulcan, probably in part due to Greek inquiries to Vulcan earlier that year as to the possibility of modernizing the design. Vulcan's response did not satisfy Greek requirements, so the proposal was dropped. 
In 1928, with the impending recommissioning of the Turkish battlecruiser Yavuz (ex-SMS Goeben), Greece considered responding positively to an offer from Vulcan to reach a compromise, one option being to complete and modernize Salamis. The cost of the ship would be absorbed by the war reparations Germany owed Greece for the years 1928 through 1930 and part of 1931. Admiral Periklis Argyropoulos, the Minister of Marine, wanted to accept the offer, pointing to a study by the General Staff that demonstrated that a modernized Salamis would be capable of defeating Yavuz owing to the heavier armor and more powerful main battery of the Greek ship. The British naval architect Eustace Tennyson d'Eyncourt issued a study in support of Argyropoulos, pointing out that Salamis would likely also be faster than Yavuz and would have a stronger anti-aircraft battery. Commander Andreas Kolialexis opposed acquiring Salamis, and he wrote a memorandum in mid-1929 to Venizelos, who was again the Prime Minister, where he argued that completing Salamis would take too long and that a fleet of torpedo-armed vessels, including submarines, would be preferable. Venizelos determined that the cost of completing Salamis would be too high, since it would preclude the acquisition of destroyers or a powerful naval air arm. Instead, the two old pre-dreadnoughts Kilkis and Lemnos would be retained for coastal defense against Yavuz. This decision was reinforced by the onset of the Great Depression that year, which weakened Greece's already limited finances. On 23 April 1932 the arbitrators determined that the Greek government owed AG Vulcan £30,000, and that AG Vulcan would be awarded the hull. The ship was broken up for scrap in Bremen that year. The second Greek dreadnought, Vasilefs Konstantinos, met a similar fate. As with Salamis, work on the ship was halted by the outbreak of the war in July 1914, and in the aftermath the Greek government refused to pay for the unfinished ship as well. ## Endnotes
2,973,567
Omayra Sánchez
1,172,707,190
Colombian volcano victim (1972–1985)
[ "1972 births", "1985 deaths", "Child deaths", "Colombian children", "Deaths in landslides", "Deaths in volcanic eruptions", "Filmed deaths during natural disasters", "Natural disaster deaths in Colombia", "People from Tolima Department", "People notable for being the subject of a specific photograph" ]
Omayra Sánchez Garzón (August 28, 1972 – November 16, 1985) was a Colombian girl trapped and killed by a landslide in Armero, Tolima, when she was 13 years old. The landslide was caused by the 1985 eruption of the volcano Nevado del Ruiz. Volcanic debris mixed with ice to form massive lahars (volcanically induced mudflows, landslides, and debris flows), which rushed into the river valleys below the mountain, killing about 25,000 people and destroying Armero and 13 other villages. After the lahar demolished her home, Sánchez was trapped beneath its debris, where she remained in water for three days; rescue workers could not free her without amputating her hopelessly pinned legs, and had no way to render the life-saving medical care she would have needed afterwards. Her plight was documented by journalists as she descended from calmness into agony while relief workers tried to comfort her. After 60 hours of struggling, she died, likely as a result of either gangrene or hypothermia. Her death was used to dramatize the alleged failure of officials to respond correctly to the threat of the volcano. A photograph of Sánchez taken by the photojournalist Frank Fournier shortly before she died was published in news outlets around the world. It was later designated the World Press Photo of the Year for 1986. Sánchez has been remembered by means of music, literature, and commemorative articles. ## Background On November 13, 1985, the volcano Nevado del Ruiz erupted. At 9:09 pm that evening, pyroclastic flows exploding from the crater melted the mountain's icecap, forming lahars (volcanic mudflows and debris flows) which cascaded into river valleys below. One lahar, consisting of three pulses, did most of the damage. Traveling at 6 meters (20 ft) per second (about 13.5 miles per hour, or 22 km/h), the first pulse enveloped most of the town of Armero, killing as many as 20,000 people; the two later pulses weakened buildings. Another lahar killed 1,800 people in nearby Chinchiná. In total, 23,000 people were killed and 13 villages in addition to Armero were destroyed. Loss of life was exacerbated by the authorities' failure to take costly preventive measures in the absence of clear signs of imminent danger. There had been no substantial eruption of the volcano since 1845, which contributed to complacency; locals called the volcano the "Sleeping Lion". During September 1985, as earthquakes and phreatic eruptions rocked the area around the volcano, officials began planning for evacuation. A hazard map was prepared in October; it highlighted the danger from falling ash and rock near Murillo, Santa Isabel, and Líbano, as well as the threat of lahars in Mariquita, Guayabal, Chinchiná, and Armero. The map was poorly distributed to those at greatest risk: many survivors said they had not known of it, though several major newspapers had featured it. Henry Villegas of the Colombian Institute of Mining and Geology stated that the maps clearly demonstrated Armero would be affected by the lahars, but had "met with strong opposition from economic interests". He said that the short time between the map's preparation and the eruption hindered timely distribution. The Colombian Congress criticized scientific and civil defense agencies for scaremongering, and the government and army were preoccupied with a guerrilla campaign in Bogotá, the national capital. The death toll was increased by the lack of early warnings, by unwise land use (villages had been built in the likely paths of lahars), and by the lack of preparedness in communities near the volcano. 
Colombia's worst natural disaster, the Armero tragedy (as it came to be known) was the second-deadliest volcanic disaster of the 20th century (surpassed only by the 1902 eruption of Mount Pelée). It was the fourth-deadliest eruption recorded since 1500 AD. Its lahars were the deadliest in volcanic history. ## Life Omayra Sánchez lived in the neighborhood of Santander with her parents Álvaro Enrique, a rice and sorghum collector, and María Aleida, along with her brother Álvaro Enrique and aunt María Adela Garzón. Prior to the eruption, her mother had traveled to Bogotá on business. The night of the disaster, Omayra and her family were awake, worrying about the ashfall from the eruption, when they heard the sound of an approaching lahar. After it hit, Omayra became trapped under her home's concrete and other debris and could not free herself. When rescue teams tried to help her, they realized that her legs were trapped under her house's roof. Sources differ as to the degree to which Sánchez was trapped. Zeiderman (2009) said she was "trapped up to her neck", while Barragán (1987) said that she was trapped up to her waist. For the first few hours after the mudflow hit, she was covered by concrete but got her hand through a crack in the debris. After a rescuer noticed her hand protruding from a pile of debris, he and others cleared tiles and wood during the course of a day. Once the girl was freed from the waist up, her rescuers attempted to pull her out, but found the task impossible without breaking her legs in the process. Each time a person pulled her, the water pooled around her, rising so that it seemed she would drown if they let her go, so rescue workers placed a tire around her body to keep her afloat. Divers discovered that Sánchez's legs were caught under a door made of bricks, with her dead aunt's arms clutched tightly around her legs and feet. ## Death Despite her predicament, Sánchez remained relatively positive: she sang to Germán Santa María Barragán, a journalist who was working as a volunteer, asked for sweet food, drank soda, and agreed to be interviewed. At times, she was scared, and prayed or cried. On the third night, Sánchez began hallucinating, saying that she did not want to be late for school, and mentioned a math exam. Near the end of her life, Sánchez's eyes reddened, her face swelled, and her hands whitened. At one point she asked the people to leave her so they could rest. Hours later the workers returned with a pump and tried to save her, but her legs were bent under the concrete as if she was kneeling, and it was impossible to free her without severing her legs. Lacking the surgical equipment to save her from the effects of an amputation, the doctors present agreed that it would be more humane to let her die. In all, Sánchez suffered for nearly three nights (roughly 60 hours) before she died at approximately 10:05 A.M. on November 16 from exposure, most likely from gangrene or hypothermia. Her brother survived the lahars; her father and aunt died. Her mother expressed her feelings about Omayra's death: "It is horrible, but we have to think about the living ... I will live for my son, who only lost a finger." As the public became aware of Sánchez's situation through the media, her death became used as a symbol of the failure of officials to properly assist victims who allegedly could have been saved. 
Controversy began after descriptions of shortages of equipment were released in newspapers, disproving what officials had previously indicated: that they had used the best of their supplies. Volunteer relief workers said that there was such a lack of resources that supplies as basic as shovels, cutting tools, and stretchers were exhausted. The rescue process was impeded by large crowds and disorganization. An unnamed police officer said that the government should have depended on human resources to alleviate the problems and that the system of rescue was disorganized. Colombia's Minister of Defense, Miguel Uribe, said he "understood criticism of the rescue effort", but added that Colombia was "an undeveloped country" that did not "have that kind of equipment". ## Photograph Frank Fournier, a French photojournalist who landed in Bogotá on November 15, took a photograph of Sánchez during her final hours, titled "The Agony of Omayra Sánchez". When he reached Armero at dawn on the 16th, a farmer directed him to Sánchez, who by then had been trapped for nearly three days and had been left almost alone. Fournier later described the town as "very haunting," with "eerie silence" punctuated by screaming. He said that he took the photograph feeling that he could only "report properly on the courage and the suffering and the dignity of the little girl" in his attempt to publicize the disaster's need for relief efforts, feeling otherwise "powerless". At the time, there was international awareness of the disaster and the controversy concerning responsibility for the destructive aftermath. The image of Sánchez captured worldwide attention. According to an unnamed BBC reporter, "Many were appalled at witnessing so intimately what transpired to be the last few hours of Omayra's life". After the photo was published in Paris Match, many accused Fournier of being "a vulture". He responded, > "I felt the story was important for me to report and I was happier that there was some reaction; it would have been worse if people had not cared about it. ... I believe the photo helped raise money from around the world in aid and helped highlight the irresponsibility and lack of courage of the country's leaders." The picture later won the World Press Photo of the Year for 1986. ## Legacy The Armero catastrophe happened soon after the M-19 guerrilla group's raid and subsequent Palace of Justice siege on November 6, worsening an already chaotic situation. After Sánchez's death, the Colombian government was blamed for its inaction and general indifference to warning signs prior to the volcano's eruption. The volcano Nevado del Ruiz is still active, according to the Volcano Watch Center in Colombia. Melting only 10 percent of the ice would produce mudflows with a volume of as much as 200,000,000 cubic meters (7.06×10<sup>9</sup> cu ft)—similar to the mudflow that destroyed Armero in 1985. Such lahars can travel up to 100 kilometers (62 mi) along river valleys in a few hours. Estimates show that up to 500,000 people living in the Combeima, Chinchiná, Coello-Toche, and Guali valleys are at risk, and 100,000 of these are considered to be at high risk. The city of Armero no longer exists. The site was commemorated as a memorial with Christian crosses and a small monument to Sánchez. During the years after the eruption, Sánchez was commemorated repeatedly, especially by newspapers like El Tiempo. Many victims of the disaster were commemorated, but Sánchez in particular has attracted lasting attention in popular poetry, novels, and music. 
For example, a punk rock band formed in Chile in 2008 named themselves Omayra Sánchez; they express their "discontent that they feel with the negligence on the part of the people who in this day and age run the world". Adiós, Omayra: La catástrofe de Armero (1988), written by Eduardo Santa as a response to the eruption, depicts the girl's last days of life in detail and cites her in its introduction as an eternal symbol of the catastrophe. In No Morirás (1994), Germán Santa María Barragán writes that of all the horrors he saw at Armero, nothing was more painful than seeing the face of Omayra Sánchez under the ruins of her house. Isabel Allende's short story, "And of Clay Are We Created" ("De barro estamos hechos"), is told from the perspective of a reporter who tries to help a girl trapped under the fireplace of her ruined home. Allende later wrote, "Her [Sánchez's] big black eyes, filled with resignation and wisdom, still pursue me in my dreams. Writing the story failed to exorcise her ghost." To try to prevent repetition of such a disaster, the government of Colombia created the Oficina Nacional para la Atención de Desastres (National Office for Disaster Preparedness), now known as the Dirección de Prevención y Atención de Desastres (Directorate for Disaster Prevention and Preparedness). All Colombian cities were directed to plan for natural disasters. A species of cricket found in the region of the Armero tragedy was newly described during 2020 and named Gigagryllus omayrae in memory of Omayra Sánchez.
19,202,869
Yugoslav torpedo boat T3
1,173,271,877
Austro-Hungarian then Yugoslav torpedo boat operating between 1921 and 1945
[ "1914 ships", "Maritime incidents in February 1945", "Naval ships of Italy captured by Germany during World War II", "Naval ships of Yugoslavia captured by Italy during World War II", "Ships built in Trieste", "Shipwrecks in the Adriatic Sea", "Torpedo boats of the Austro-Hungarian Navy", "Torpedo boats of the Royal Yugoslav Navy", "Torpedo boats sunk by aircraft", "World War I torpedo boats of Austria-Hungary", "World War II naval ships of Yugoslavia" ]
T3 was a sea-going torpedo boat that was operated by the Royal Yugoslav Navy between 1921 and 1941. Originally 78 T, a 250t-class torpedo boat of the Austro-Hungarian Navy built in 1914, she was armed with two 66 mm (2.6 in) guns, four 450 mm (17.7 in) torpedo tubes, and could carry 10–12 naval mines. She saw active service during World War I, performing convoy, escort and minesweeping tasks, anti-submarine operations and shore bombardment missions. In 1917 the suffixes of all Austro-Hungarian torpedo boats were removed, and thereafter she was referred to as 78. She was part of the escort force for the Austro-Hungarian dreadnought Szent István during the action that resulted in the sinking of that ship by Italian torpedo boats in June 1918. Following Austria-Hungary's defeat in 1918, she was allocated to the Navy of the Kingdom of Serbs, Croats and Slovenes, which later became the Royal Yugoslav Navy, and was renamed T3. At the time, she and the seven other 250t-class boats were the only modern sea-going vessels of the fledgling maritime force. During the interwar period, T3 and the rest of the navy were involved in training exercises and cruises to friendly ports, but activity was limited by reduced naval budgets. The ship was captured by the Italians during the Axis invasion of Yugoslavia in April 1941. After her main armament was modernised and her crew increased to 62, she served with the Royal Italian Navy under her Yugoslav designation, although she was only used for coastal and second-line tasks. Following the Italian capitulation in September 1943, she was captured by Germany and, after being fitted with additional anti-aircraft guns, served with the German Navy or the Navy of the Independent State of Croatia as TA48. In German/Croatian service her crew of 52 consisted entirely of Croatian officers and enlisted men. She was sunk by Allied aircraft in February 1945 while in the port of Trieste, where she had been built. ## Background In 1910, the Austro-Hungarian Naval Technical Committee initiated the design and development of a 275-tonne (271-long-ton) coastal torpedo boat, specifying that it should be capable of sustaining 30 knots (56 km/h) for 10 hours. This specification was based on expectations that the Strait of Otranto, where the Adriatic Sea meets the Ionian Sea, would be blockaded by hostile forces during a future conflict. In such circumstances, there would be a need for a torpedo boat that could sail from the Austro-Hungarian Navy base at the Bocche di Cattaro (now Kotor) to the Strait during darkness, locate and attack blockading ships and return to port before morning. Steam turbine power was selected for propulsion, as diesels with the necessary power were not available, and the Austro-Hungarian Navy did not have the practical experience to run turbo-electric boats. Stabilimento Tecnico Triestino (STT) of Triest was selected for the contract to build eight vessels, ahead of one other tenderer. The T-group designation signified that they were built at Triest. ## Description and construction The 250t-class, T-group boats had short raised forecastles and an open bridge, and were fast and agile, well designed for service in the Adriatic. They had a waterline length of 57.3 m (188 ft 0 in), a beam of 5.7 m (18 ft 8 in), and a normal draught of 1.5 m (4 ft 11 in). While their designed displacement was 237 tonnes (233 long tons), they displaced about 324 tonnes (319 long tons) fully loaded. The crew consisted of three officers and thirty-eight enlisted men. 
The boats were powered by two Parsons steam turbines driving two propellers, using steam generated by two Yarrow water-tube boilers, one of which burned fuel oil and the other coal. There were two boiler rooms, one behind the other. The turbines were rated at 5,000–5,700 shaft horsepower (3,700–4,300 kW) and were designed to propel the boats to a top speed of 28 kn (52 km/h; 32 mph), although a maximum speed of 29.2 kn (54.1 km/h; 33.6 mph) could be achieved. They carried 18.2 t (17.9 long tons) of coal and 24.3 t (23.9 long tons) of fuel oil, which gave them a range of 1,000 nautical miles (1,900 km; 1,200 mi) at 16 kn (30 km/h; 18 mph). The T-group had one funnel rather than the two funnels of the later groups of the class, and had a large ventilation cowl under the bridge and another smaller one aft of the funnel. Due to an inadequate budget, 78 T and the rest of the 250t class were essentially large coastal vessels, despite the original intention that they would be used for "high seas" operations. They were the first small Austro-Hungarian Navy boats to use turbines, and this contributed to ongoing problems with them, which had to be progressively solved once they were in service.

The boats were originally to be armed with three Škoda 66 mm (2.6 in) L/30 guns and three 450 mm (17.7 in) torpedo tubes, but this was changed to two guns and four torpedo tubes before the first boat was completed, in order to standardise the armament with the F-group boats that followed. The torpedo tubes were mounted in pairs, with one pair between the forecastle and the bridge, and the other on a section of raised superstructure above the aft machinery room. The boats could also carry 10–12 naval mines. The fifth of her class to be built, 78 T was laid down on 22 October 1913, launched on 4 March 1914, and completed on 23 August 1914. Later that year, one 8 mm (0.31 in) machine gun was added for anti-aircraft work.

## Career

### World War I

The original concept of operations for the 250t-class boats was that they would sail in a flotilla at the rear of a cruising battle formation, and would intervene in the fighting only if the battleships around which the formation was established were disabled, or in order to attack damaged enemy battleships. When a torpedo attack was ordered, it was to be led by a scout cruiser, supported by two destroyers to repel any enemy torpedo boats; a group of four to six torpedo boats would then deliver the attack under the direction of the flotilla commander.

During World War I, 78 T was used for convoy, escort and minesweeping tasks, anti-submarine operations and shore bombardment missions. She also conducted patrols and supported seaplane raids against the Italian Adriatic coast. On 24 May 1915, 78 T and seven other 250t-class boats took part in the shelling of Italian shore-based targets known as the Bombardment of Ancona; 78 T herself shelled Porto Corsini near Ravenna, where an Italian 120 mm (4.7 in) shore battery returned fire, hitting the scout cruiser Novara and damaging one of the other 250t-class boats. On the night of 18/19 June, 78 T was part of a flotilla – consisting of two scout cruisers, three destroyers and five 250t-class boats – providing distant cover for a bombardment of Rimini and Pesaro when they encountered and sank the Italian steamer Grazia near San Benedetto del Tronto.
On 23 July, 78 T and 77 T joined the scout cruiser Helgoland in bombarding Ortona as part of a 1st Torpedo Flotilla shore bombardment and landing operation on the central Adriatic coast of Italy, which also targeted Campomarino and Termoli and involved cutting the telegraph cable on the island of Tremiti. On 17 August, the 1st Torpedo Flotilla shelled the island chain of Pelagosa in the middle of the Adriatic, and 78 T was part of a force tasked with protecting the southern approaches to the islands from enemy submarines. The success of this bombardment, which destroyed the only source of drinking water, caused the Italians to abandon Pelagosa.

In late November 1915, the Austro-Hungarian fleet deployed a force from its main fleet base at Pola to Cattaro in the southern Adriatic; this force, which included six of the eight T-group torpedo boats, was tasked with maintaining a permanent patrol of the Albanian coastline and interdicting any troop transports crossing from Italy. After an attack on Durazzo in Albania in which two Austro-Hungarian destroyers were sunk after straying out of a cleared lane through a minefield, 78 T and four other 250t-class boats were sent south with the scout cruiser Novara on 30 December, in order to strengthen morale and to try to prevent the transfer of the captured crew of one of the destroyers to Italy. No Italian ships were encountered, and the group returned to the Bocche the following day. On 6 February 1916, Helgoland, 78 T and five other 250t-class boats were intercepted north of Durazzo by the British light cruiser HMS Weymouth and the French destroyer Bouclier; the only damage was caused by a collision between two of the other 250t-class boats. On 24 February, 78 T was part of an Austro-Hungarian force – consisting of a scout cruiser, four destroyers and five 250t-class boats – sent to disrupt the Allied evacuation of Durazzo, but it encountered no Allied ships.

In 1917, one of 78 T's 66 mm guns was placed on an anti-aircraft mount. On 11 May 1917, the British submarine HMS H1 stalked 78 T off Pola and fired two torpedoes at her. The British captain had kept his submarine's periscope extended too far and for too long, and the tell-tale "feather" alerted the crew of 78 T, allowing her to avoid the incoming torpedoes. That night, the Huszár-class destroyer Csikós, accompanied by 78 T and two other 250t-class boats, was pursued in the northern Adriatic by an Italian force of five destroyers, but was able to retire to safety behind a minefield. On 21 May, the suffix of all Austro-Hungarian torpedo boats was removed, and thereafter they were referred to only by the numeral. On 23 September, 77 and 78 were laying a minefield off Grado in the northern Adriatic when they had a brief encounter with an Italian MAS boat. While laying mines on routes between Venice and Ancona on 19 November, 78 and four other 250t-class boats were intercepted by four Italian destroyers but escaped damage. On 28 November, a number of 250t-class boats were involved in two shore bombardment missions. In the second mission, 78 joined seven other 250t-class boats and six destroyers for the bombardment of Porto Corsini, Marotta and Cesenatico. The bombardment damaged the railway tracks between Senigallia and Rimini and destroyed a locomotive and several wagons, but when the flotilla moved to attack two small steamers, an Italian armoured train arrived and engaged them with its 15 cm (6 in) guns, forcing them to break off.
On the return voyage to Pola, the ships were apparently pursued by Italian warships, but the scout cruiser Admiral Spaun sailed to provide support, and the Italians withdrew.

By 1918, the Allies had strengthened their blockade of the Strait of Otranto, as the Austro-Hungarian Navy had foreseen, and it was becoming more difficult for German and Austro-Hungarian U-boats to get through the strait and into the Mediterranean Sea. In response, the new commander of the Austro-Hungarian Navy, Konteradmiral Miklós Horthy, decided to launch an attack on the Allied defenders with battleships, scout cruisers and destroyers. During the night of 8 June, Horthy left the naval base of Pola in the upper Adriatic with the dreadnought battleships Viribus Unitis and Prinz Eugen. At about 23:00 on 9 June 1918, after some difficulties getting the harbour defence barrage opened, the dreadnoughts Szent István and Tegetthoff, escorted by one destroyer and six torpedo boats, including 78, also departed Pola and set course for Slano, north of Ragusa, to rendezvous with Horthy in preparation for a coordinated attack on the Otranto Barrage.

At about 03:15 on 10 June, two Royal Italian Navy (Italian: Regia Marina) MAS boats, MAS 15 and MAS 21, returning from an uneventful patrol off the Dalmatian coast, spotted the smoke from the Austro-Hungarian ships. Both boats penetrated the escort screen and split to engage the dreadnoughts individually. MAS 21 attacked Tegetthoff, but her torpedoes missed. Under the command of Luigi Rizzo, MAS 15 fired two torpedoes at 03:25, both of which hit Szent István. Both boats evaded pursuit. The torpedo hits on Szent István were abreast her boiler rooms, which flooded, knocking out power to the pumps. Szent István capsized less than three hours after being torpedoed. This disaster practically ended Austro-Hungarian fleet operations in the Adriatic for the remaining months of the war.

### Inter-war years

78 survived the war intact. In 1920, under the terms of the previous year's Treaty of Saint-Germain-en-Laye, by which rump Austria officially ended World War I, she was allocated to the Kingdom of Serbs, Croats and Slovenes (KSCS, later Yugoslavia). Along with three other 250t-class, T-group boats – 76, 77 and 79 – and four F-group boats, she served with the KSCS Navy (later the Royal Yugoslav Navy, Serbo-Croatian Latin: Kraljevska Mornarica, KM; Краљевска Морнарица). She was transferred in March 1921 and, in KM service, was renamed T3. At the time of her transfer, she and the other 250t-class torpedo boats were the only modern sea-going warships in the Yugoslav fleet. During the French occupation of Cattaro, the original torpedo tubes were destroyed or damaged, and new ones of the same size were ordered from the Strojne Tovarne factory in Ljubljana. In KM service it was intended to replace one or both guns on each boat of the 250t class with a longer Škoda 66 mm L/45 gun, and it is believed that this included the forward gun on T3. She was also fitted with one or two Zbrojovka 15 mm (0.59 in) machine guns, and her crew increased to 52.

In 1925, exercises were conducted off the Dalmatian coast, involving the majority of the navy. T3 underwent a refit in 1927. In May–June 1929, six of the eight 250t-class torpedo boats accompanied the light cruiser Dalmacija, the submarine tender Hvar and the submarines Hrabri and Nebojša on a cruise to Malta, the Greek island of Corfu in the Ionian Sea, and Bizerte in the French protectorate of Tunisia.
The ships and crews made a very good impression while visiting Malta. In 1932, the British naval attaché reported that Yugoslav ships were engaging in few exercises or manoeuvres because of reduced budgets. By 1939, the maximum speed achieved by the 250t class in Yugoslav service had declined to 24 kn (44 km/h; 28 mph).

### World War II

In April 1941, Yugoslavia entered World War II when it was invaded by the German-led Axis powers. At the time of the invasion, T3 was assigned to the Southern Sector of the KM's Coastal Defence Command based at the Bay of Kotor, along with her sister ship T1, several minesweepers and other craft. Just prior to the invasion, T3, along with the bulk of the 3rd Torpedo Division, was detached to Šibenik, in accordance with plans to attack the Italian enclave of Zara. When the invasion began on 6 April, T3 was anchored in the Šibenik channel between Jadrija and Zablaće with three other torpedo boats, but she was not equipped with modern anti-aircraft guns and so was unable to effectively engage the Italian aircraft flying over Zlarin to attack Šibenik. The torpedo boats were ordered to retreat to Zaton, but T3 was hampered by problems with one of her boilers and was sent to Primošten instead. The plan to attack Zara was abandoned after messages were received that the Independent State of Croatia (NDH) had been proclaimed on 10 April and that Yugoslav forces were retreating on all fronts. In response to the proclamation, the crew of T3 mutinied and sailed to either Split or nearby Divulje to join the fledgling Navy of the Independent State of Croatia, but the boat was soon seized by the Italians.

T3 was operated by the Royal Italian Navy under her Yugoslav designation. She was fitted with two 76 mm (3 in) L/30 anti-aircraft guns in place of her 66 mm guns, along with a single Breda 20 mm (0.79 in) L/65 anti-aircraft gun. Her bridge was enclosed, but no other significant alterations were made to her. Due to her obsolescence, the Italians only used T3 as a guard ship and for coastal and second-line duties against the Yugoslav Partisans. While in Italian service, her crew grew to 64.

When the Italians capitulated, the German Navy (German: Kriegsmarine) seized T3 – which was undergoing repairs in the port of Rijeka – on 16 September 1943, and renamed her TA48. After a partial reconstruction and re-armament, she was transferred to the NDH navy at Trieste on 15 August 1944, but remained subordinated to the 2nd Escort Flotilla of the German 11th Security Division. The Germans removed her torpedo tubes and fitted her out for anti-aircraft defence, with twin 37 mm (1.5 in) SK C/30 guns mounted forward, one quadruple 20 mm (0.79 in) Flakvierling 38 mounted where the aft torpedo tubes had been, one twin Breda 20 mm gun mounted aft, and two single Breda 20 mm guns mounted where her forward torpedo tubes had been. Sources vary on whether she was used operationally: Michael J. Whitley and Vincent P. O'Hara state that she was used for patrol and escort work in the northern Adriatic, while Zvonimir Freivogel asserts that she was never operational due to a lack of spares and available workforce, and her age. Her crew while under German control amounted to 52 men. The Partisans in the Rijeka area placed considerable and ongoing pressure on T3's commanding officer to defect to them with his boat, but he refused because of the vessel's ongoing mechanical problems.
The boat was transferred back to Rijeka, and was moored there on 4 December when the NDH motor torpedo boat KS 5 defected to the Partisans; other boats attempting to defect were stopped by the harbour boom. Almost all Croatian naval personnel were then brought ashore, and their commanding officers were brought before a military tribunal but were eventually acquitted. Regardless of this result, the NDH navy was dissolved and its personnel were thereafter mostly employed in ground units. Some Croatian naval personnel did remain aboard T3, and she was transferred back to Trieste. She was sunk there by Allied aircraft on 20 February 1945. The wreck was raised on 10 May 1946 and scrapped in 1948–1949.