| column | type | values / lengths |
| --- | --- | --- |
| pageid | int64 | 12 – 74.6M |
| title | string | lengths 2 – 102 |
| revid | int64 | 962M – 1.17B |
| description | string | lengths 4 – 100 |
| categories | list |  |
| markdown | string | lengths 1.22k – 148k |
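Read as a record schema, each entry that follows pairs this metadata with a markdown article body. Below is a minimal sketch of that row structure in Python; the `ArticleRecord` name is a hypothetical illustration rather than part of any published loader, and the example values are taken from the first record shown below, with the markdown field truncated.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ArticleRecord:
    """One row of this excerpt: Wikipedia page metadata plus its markdown body."""
    pageid: int            # int64 column, values roughly 12 to 74.6M
    title: str             # 2 to 102 characters
    revid: int             # int64 column, values roughly 962M to 1.17B
    description: str       # 4 to 100 characters
    categories: List[str]  # list of category names
    markdown: str          # article text, roughly 1.22k to 148k characters


# Illustrative instance built from the first record shown below (SMS Bayern);
# the markdown field is truncated here for brevity.
example = ArticleRecord(
    pageid=571_376,
    title="SMS Bayern",
    revid=1_159_798_155,
    description="Battleship of the German Imperial Navy",
    categories=["1915 ships", "Bayern-class battleships"],
    markdown="SMS Bayern was the lead ship of the Bayern class ...",
)
print(example.title, len(example.categories))
```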
pageid: 571,376
title: SMS Bayern
revid: 1,159,798,155
description: Battleship of the German Imperial Navy
categories: [ "1915 ships", "Bayern-class battleships", "Maritime incidents in 1919", "Ships built in Kiel", "World War I battleships of Germany", "World War I warships scuttled at Scapa Flow" ]
SMS Bayern was the lead ship of the Bayern class of dreadnought battleships in the German Kaiserliche Marine (Imperial Navy). The vessel was launched in February 1915 and entered service in July 1916, too late to take part in the Battle of Jutland. Her main armament consisted of eight 38 cm (15 in) guns in four turrets, which was a significant improvement over the preceding König's ten 30.5 cm (12 in) guns. The ship was to have formed the nucleus for a fourth battle squadron in the High Seas Fleet, along with three of her sister ships. Of the other ships only one—Baden—was completed; the other two were canceled later in the war when production requirements shifted to U-boat construction. Bayern was commissioned midway through the war, and had a limited service career. The first operation in which the ship took part was an abortive fleet advance into the North Sea on 18–19 August 1916, a month after she had been commissioned. The ship also participated in Operation Albion in the Gulf of Riga, but shortly after the German attack began on 12 October 1917, Bayern was mined and had to be withdrawn for repairs. She was interned with the majority of the High Seas Fleet at Scapa Flow in November 1918 following the end of World War I. On 21 June 1919, Admiral Ludwig von Reuter ordered the fleet to be scuttled; Bayern sank at 14:30. In September 1934, the ship was raised, towed to Rosyth, and scrapped.

## Design

Design work on the Bayern class began in 1910 in the context of the Anglo-German naval arms race, with initial discussions focused on the caliber of the main battery; previous German battleships had carried 30.5 cm (12 in) guns, but as foreign navies adopted 34 cm (13.5 in) and 35.6 cm (14 in) weapons, the German naval command felt the need to respond with larger guns of their own. They considered 32 cm (12.6 in), 38 cm (15 in), and 40 cm (15.7 in) guns. Admiral Alfred von Tirpitz, the State Secretary of the Reichsmarineamt (Imperial Naval Office), was able to use public outcry over the Agadir Crisis to pressure the Reichstag (Imperial Diet) into appropriating additional funds for the Kaiserliche Marine to offset the additional cost of the larger weapons. The design staff settled on the 38 cm caliber since the 40 cm was significantly more expensive and the 38 cm gun marked a significant improvement over existing German guns. Bayern was 179.4 m (588 ft 7 in) long at the waterline, and an even 180 m (590 ft 7 in) long overall. She had a beam of 30 m (98 ft 5 in) and a draft of 9.3–9.4 m (30 ft 6 in – 30 ft 10 in). Bayern displaced 28,530 metric tons (28,080 long tons) at a normal displacement; at full combat load, she displaced up to 32,200 t (31,700 long tons). Bayern was powered by three Parsons steam turbines, with steam provided by three oil-fired and eleven coal-fired Schulz-Thornycroft water-tube boilers. Her propulsion system was rated at 35,000 metric horsepower (35,000 shp) for a maximum speed of 21 knots (39 km/h; 24 mph), and on trials achieved 55,967 metric horsepower (55,201 shp) for a maximum speed of 22 knots (41 km/h; 25 mph). The ship could carry up to 3,400 t (3,300 long tons; 3,700 short tons) of coal and 620 t (610 long tons; 680 short tons) of fuel oil, which provided a maximum range of 5,000 nmi (9,300 km; 5,800 mi) at a cruising speed of 12 kn (22 km/h; 14 mph). The ship was the first German warship armed with eight 38 cm (15 in) SK L/45 guns. The main battery guns were arranged in four twin gun turrets: two superfiring turrets each fore and aft.
Her secondary armament consisted of sixteen 15 cm (5.9 in) SK L/45 guns, four 8.8 cm (3.5 in) SK L/45 guns and five 60 cm (23.6 in) underwater torpedo tubes, one in the bow and two on each beam. Upon commissioning, she carried a crew of 42 officers and 1,129 enlisted men. The ship had an armored belt that was 170–350 mm (6.7–13.8 in) thick and an armored deck that was 60–100 mm (2.4–3.9 in) thick. Her forward conning tower had 400 mm (15.7 in) sides, and the main battery turrets had 350 mm thick sides and 200 mm (7.9 in) thick roofs. ## Service history Bayern was ordered with the provisional name "T" in 1912, under the fourth and final Naval Law, which was passed that year. Work began at the Howaldtswerke Dockyard in Kiel under construction number 590. The ship was laid down on 22 December 1913 and launched on 18 February 1915. After fitting-out, she was commissioned on 18 March, but remained largely idle in port for the next month, undergoing initial tests, including inclination tests to determine how the vessel responded to flooding. She got underway on 15 April for initial trials of her main battery, which lasted into the next day. Bayern conducted her first full-power speed test on 25 April off the island of Alsen; these trials continued until 2 May. After further examinations, the ship was deemed ready for service on 15 July, a month and a half too late for her to participate in the Battle of Jutland. Bayern joined III Battle Squadron of the High Seas Fleet upon her commissioning. The ship would have been available for the operation, but the ship's crew, composed largely of the crew from the recently decommissioned battleship Lothringen, was given leave. She had cost the Imperial German Government 49 million Goldmarks. Bayern was later joined in service by one sister ship, Baden. Two other ships of this class, Sachsen and Württemberg, were canceled before they were completed. At the time of her commissioning, Bayern's commander was Kapitän zur See (Captain at Sea) Max Hahn. Ernst Lindemann, who went on to command the battleship Bismarck during her only combat sortie in World War II, served aboard the ship as a wireless operator. On 25 May, Ludwig III of Bavaria, the last King of Bavaria, visited the ship. Bayern briefly served as the fleet flagship, from 7 to 16 August. Admiral Reinhard Scheer planned a fleet advance for 18–19 August 1916; the operation consisted of a bombardment conducted by I Scouting Group. This was an attempt to draw out and destroy Admiral David Beatty's battlecruisers. As Moltke and Von der Tann were the only two German battlecruisers still in fighting condition, three dreadnoughts were assigned to the unit for the operation: Bayern and the two König-class ships Markgraf and Grosser Kurfürst. Admiral Scheer and the rest of the High Seas Fleet, including 15 dreadnoughts, were to trail behind and provide cover. The makeshift I Scouting Group conducted familiarization exercises on 15 August in preparation for the operation; Hipper was displeased by the slow speed of the battleships and Scheer ordered the unit not to exceed 20 nautical miles (37 km; 23 mi) from the main fleet so as to avoid being cut off by the faster British battlecruisers. The Germans got underway late in the day on 18 August; the British were aware of the German plans and sortied the Grand Fleet to meet them. 
By 14:35 on 19 August, Scheer had been warned of the Grand Fleet's approach and, unwilling to engage the whole of the Grand Fleet just 11 weeks after the close call at Jutland, turned his forces around and retreated to German ports. Another sortie into the North Sea followed on 18–20 October, and the German fleet again encountered no British naval forces. The High Seas Fleet was reorganized on 6 December, and Bayern was stationed in the second position of III Squadron, since she was not outfitted to serve as a squadron flagship. Her placement as the second vessel in the line nevertheless would have allowed her to bring her greater firepower into action as quickly as possible. ### Operation Albion In early September 1917, following the German conquest of the Russian port of Riga, the German navy decided to evict the Russian naval forces that still held the Gulf of Riga. To this end, the Admiralstab (the Navy High Command) planned an operation to seize the Baltic islands of Ösel, particularly the Russian gun batteries on the Sworbe peninsula. On 18 September, the order was issued for a joint Army-Navy operation to capture Ösel and Moon islands; the primary naval component consisted of the flagship Moltke and III Battle Squadron of the High Seas Fleet. At this time, V Division included the Bayern and four König-class battleships. VI Division consisted of the five Kaiser-class battleships. Along with 9 light cruisers, 3 torpedo boat flotillas, and dozens of mine warfare ships, the entire force numbered some 300 ships, supported by over 100 aircraft and 6 zeppelins. The invasion force amounted to approximately 24,600 officers and enlisted men. Opposing the Germans were the old Russian pre-dreadnoughts Slava and Tsesarevich, the armored cruisers Bayan, Admiral Makarov, and Diana, 26 destroyers, and several torpedo boats and gunboats. The garrison on Ösel numbered some 14,000 men. The operation began on 12 October, when Bayern, along with Moltke and the four Königs, began firing on the Russian shore batteries at Tagga Bay. Simultaneously, the five Kaisers engaged the batteries on the Sworbe peninsula; the objective was to secure the channel between Moon and Dagö islands, thus blocking the only escape route of the Russian ships in the gulf. Bayern's role in the operation was cut short when she struck a naval mine at 5:07 while moving into her bombardment position at Pamerort. The mine explosion killed one Unteroffizier and six sailors, allowed 1,000 metric tons (980 long tons; 1,100 short tons) of water into the ship and caused the forecastle to sink by 2 m (6.6 ft). Despite the damage inflicted by the mine, Bayern engaged the naval battery at Cape Toffri on the southern tip of Hiiumaa. Bayern was released from her position at 14:00. Preliminary repairs were made on 13 October in Tagga Bay. The temporary repairs proved ineffective, and Bayern had to be withdrawn to Kiel for repairs; the return trip took 19 days. Repairs lasted from 3 November to 27 December, during which the forward torpedo tube room was stripped of its equipment and the torpedo ports were sealed. The room was then turned into an additional watertight compartment. Four 8.8 cm (3.5 in) SK L/30 anti-aircraft guns were also installed during the repairs. On 16 October, two König-class battleships and several smaller vessels were sent to engage the Russian battleships in the Gulf of Riga. 
The following day, König and Kronprinz engaged the Russian battleships—König dueled with Slava and Kronprinz fired on both Slava and the cruiser Bayan. The Russian vessels were hit dozens of times, until at 10:30 the Russian naval commander, Admiral Bakhirev, ordered their withdrawal. Slava had taken too much damage, and was unable to escape; instead, she was scuttled and her crew was evacuated on a destroyer. By 20 October, the naval operations were effectively over; the Russian fleet had been destroyed or forced to withdraw, and the German army held the islands in the gulf. ### Subsequent operations Following her return to the fleet, Bayern was assigned to security duties in the North Sea. Admiral Scheer had used light surface forces to attack British convoys to Norway beginning in late 1917. As a result, the Royal Navy attached a squadron of battleships to protect the convoys, which presented Scheer with the possibility of destroying a detached squadron of the Grand Fleet. Scheer remarked that "A successful attack on such a convoy would not only result in the sinking of much tonnage, but would be a great military success, and would ... force the English to send more warships to the northern waters." Scheer instituted strict wireless silence in preparation for the planned attack. This denied the British the ability to intercept and decrypt German signals, which had previously been a significant advantage. The operation called for Hipper's battlecruisers to attack the convoy and its escorts on 23 April while the battleships of the High Seas Fleet stood by in support. On 22 April, Bayern and the rest of the German fleet assembled in the Schillig Roads outside Wilhelmshaven and departed the following morning at 06:00. Heavy fog forced the Germans to remain inside their defensive minefields for half an hour. Hipper's forces were 60 nmi (110 km; 69 mi) west of Egerö, Norway, by 05:20 on 24 April. Despite the success in reaching the convoy route undetected, the operation failed due to faulty intelligence. Reports from U-boats indicated to Scheer that the convoys sailed at the start and middle of each week, but a west-bound convoy had left Bergen on Tuesday the 22nd and an east-bound group left Methil, Scotland, on the 24th, a Thursday. As a result, there was no convoy for Hipper to attack. The same day, one of Moltke's screws slipped off, which caused serious damage to the power plant and allowed 2,000 metric tons (2,000 long tons; 2,200 short tons) of water into the ship. Moltke was forced to break radio silence in order to inform Scheer of the ship's condition, which alerted the Royal Navy to the High Seas Fleet's activities. Beatty sortied with a force of 31 battleships and four battlecruisers, but was too late to intercept the retreating Germans. The Germans reached their defensive minefields early on 25 April, though approximately 40 nmi (74 km; 46 mi) off Helgoland Moltke was torpedoed by the submarine E42. Moltke nevertheless successfully returned to port. ### Fate From 23 September to early October, Bayern served as the flagship of III Squadron, under Vizeadmiral (Vice Admiral) Hugo Kraft. Bayern was to have taken part in what would have amounted to the "death ride" of the High Seas Fleet shortly before the end of World War I. The bulk of the High Seas Fleet was to have sortied from its base in Wilhelmshaven to engage the British Grand Fleet. 
Scheer—by now the Großadmiral of the fleet—intended to inflict as much damage as possible on the British navy, in order to obtain a better bargaining position for Germany, whatever the cost to the fleet. While the fleet was consolidating in Wilhelmshaven, war-weary sailors began rioting. On 24 October 1918, the order was given to sail from Wilhelmshaven. Starting on the night of 29 October, sailors on several battleships mutinied; three ships from III Squadron refused to weigh anchor, and acts of sabotage were committed on board the battleships Thüringen and Helgoland. The order to sail was rescinded in the face of this open revolt. In an attempt to suppress the mutiny, the battleship squadrons were dispersed. Bayern, along with the rest of III Squadron, was sent to Kiel. Following the capitulation of Germany in November 1918, the majority of the High Seas Fleet was to be interned in the Royal Navy base at Scapa Flow. Bayern was listed as one of the ships to be handed over. On 21 November 1918, the ships to be interned, under the command of Rear Admiral Ludwig von Reuter, sailed from their base in Germany for the last time. The fleet rendezvoused with the British light cruiser Cardiff, before meeting a flotilla of 370 British, American, and French warships for the voyage to Scapa Flow. The fleet remained in captivity during the negotiations that ultimately produced the Versailles Treaty. Reuter believed that the British intended to seize the German ships on 21 June, which was the deadline for Germany to have signed the peace treaty. Unaware that the deadline had been extended to the 23rd, Reuter ordered his ships to be sunk. On the morning of 21 June, the British fleet left Scapa Flow to conduct training maneuvers; at 11:20 Reuter transmitted the order to his ships. Bayern sank at 14:30. The ship was raised on 1 September 1934 and was broken up the following year in Rosyth. The ship's bell was eventually delivered to the German Federal Navy and is on display at Kiel Fördeklub. Some parts of the ship, including her main battery gun turrets, remain on the sea floor between 38 and 45 m (125 and 148 ft), where they can be accessed by scuba divers.
pageid: 1,437,123
title: Red-billed quelea
revid: 1,170,120,545
description: Small, migratory weaver bird native to Sub-Saharan Africa
categories: [ "Agricultural pests", "Birds described in 1758", "Birds of Sub-Saharan Africa", "Quelea", "Taxa named by Carl Linnaeus" ]
The red-billed quelea (/ˈkwiːliə/; Quelea quelea), also known as the red-billed weaver or red-billed dioch, is a small—approximately 12 cm (4.7 in) long and weighing 15–26 g (0.53–0.92 oz)—migratory, sparrow-like bird of the weaver family, Ploceidae, native to Sub-Saharan Africa. It was named by Linnaeus in 1758, who considered it a bunting, but Ludwig Reichenbach assigned it in 1850 to the new genus Quelea. Three subspecies are recognised, with Quelea quelea quelea occurring roughly from Senegal to Chad, Q. q. aethiopica from Sudan to Somalia and Tanzania, and Q. q. lathamii from Gabon to Mozambique and South Africa. Non-breeding birds have light underparts, striped brown upper parts, yellow-edged flight feathers and a reddish bill. Breeding females attain a yellowish bill. Breeding males have a black (or rarely white) facial mask, surrounded by a purplish, pinkish, rusty or yellowish wash on the head and breast. The species avoids forests, deserts and colder areas such as those at high altitude and in southern South Africa. It constructs oval roofed nests woven from strips of grass hanging from thorny branches, sugar cane or reeds. It breeds in very large colonies. The quelea feeds primarily on seeds of annual grasses, but also causes extensive damage to cereal crops. Therefore, it is sometimes called "Africa's feathered locust". The usual pest-control measures are spraying avicides or detonating fire-bombs in the enormous colonies during the night. Extensive control measures have been largely unsuccessful in limiting the quelea population. When food runs out, the species migrates to locations of recent rainfall and plentiful grass seed; hence it exploits its food source very efficiently. It is regarded as the most numerous undomesticated bird on earth, with the total post-breeding population sometimes peaking at an estimated 1.5 billion individuals. It feeds in huge flocks of millions of individuals, with birds that run out of food at the rear flying over the entire group to a fresh feeding zone at the front, creating an image of a rolling cloud. The conservation status of red-billed quelea is least concern according to the IUCN Red List. ## Taxonomy and naming The red-billed quelea was one of the many birds described originally by Linnaeus in the landmark 1758 10th edition of his Systema Naturae. Classifying it in the bunting genus Emberiza, he gave it the binomial name of Emberiza quelea. He incorrectly mentioned that it originated in India, probably because ships from the East Indies picked up birds when visiting the African coast during their return voyage to Europe. It is likely that he had seen a draft of Ornithologia, sive Synopsis methodica sistens avium divisionem in ordines, sectiones, genera, species, ipsarumque varietates, a book written by Mathurin Jacques Brisson that was to be published in 1760, and which contained a black and white drawing of the species. The erroneous type locality of India was corrected to Africa in the 12th edition of Systema Naturae of 1766, and Brisson was cited. Brisson mentions that the bird originates from Senegal, where it had been collected by Michel Adanson during his 1748-1752 expedition. He called the bird Moineau à bec rouge du Senegal in French and Passer senegalensis erythrorynchos in Latin, both meaning "red-billed Senegalese sparrow". Also in 1766, George Edwards illustrated the species in colour, based on a live male specimen owned by a Mrs Clayton in Surrey. 
He called it the "Brazilian sparrow", despite being unsure whether it came from Brazil or Angola. In 1850, Ludwig Reichenbach thought the species was not a true bunting, but rather a weaver, and created the genus name Quelea, as well as the new combination Q. quelea. The white-faced morph was described as a separate species, Q. russii by Otto Finsch in 1877 and named after the aviculturist Karl Russ. Three subspecies are recognised. In the field, these are distinguished by differences in male breeding plumage. - The nominate subspecies, Quelea quelea quelea, is native to west and central Africa, where it has been recorded from Mauritania, western and northern Senegal, Gambia, central Mali, Burkina Faso, southwestern and southern Niger, northern Nigeria, Cameroon, south-central Chad and northern Central African Republic. - Loxia lathamii was described by Andrew Smith in 1836, but later assigned to Q. quelea as its subspecies lathamii. It ranges across central and southern Africa, where it has been recorded from southwestern Gabon, southern Congo, Angola (except the northeast and arid coastal southwest), southern Democratic Republic of Congo and the mouth of the Congo River, Zambia, Malawi and western Mozambique across to Namibia (except the coastal desert) and central, southern and eastern South Africa. - Ploceus aethiopicus was described by Carl Jakob Sundevall in 1850, but later assigned to Q. quelea as its subspecies aethiopica. It is found in eastern Africa where it occurs in southern Sudan, eastern South Sudan, Ethiopia and Eritrea south to the northeastern parts of the Democratic Republic of Congo, Uganda, Kenya, central and eastern Tanzania and northwestern and southern Somalia. Formerly, two other subspecies have been described. Q. quelea spoliator was described by Phillip Clancey in 1960 on the basis of more greyish nonbreeding plumage of populations of wetter habitats of northeastern South Africa, Eswatini and southern Mozambique. However, further analysis indicated no clear distinction in plumage between it and Q. quelea lathamii, with no evidence of genetic isolation. Hence it is not recognised as distinct. Q. quelea intermedia, described by Anton Reichenow in 1886 from east Africa, is regarded a synonym of subspecies aethiopica. ### Etymology and vernacular names Linnaeus himself did not explain the name quelea. Quelea quelea is locally called kwelea domo-jekundu in Swahili, enzunge in Kwangali, chimokoto in Shona, inyonyane in Siswati, thaha in Sesotho and ndzheyana in the Tsonga language. M.W. Jeffreys suggested that the term came from medieval Latin qualea, meaning "quail", linking the prodigious numbers of queleas to the hordes of quail that fed the Israelites during the Exodus from Egypt. The subspecies lathamii is probably named in honor of the ornithologist John Latham. The name of the subspecies aethiopica refers to Ethiopia, and its type was collected in the neighbouring Sennar province in today's Sudan. "Red-billed quelea" has been designated the official name by the International Ornithological Committee (IOC). Other names in English include black-faced dioch, cardinal, common dioch, Latham's weaver-bird, pink-billed weaver, quelea finch, quelea weaver, red-billed dioch, red-billed weaver, Russ' weaver, South-African dioch, Sudan dioch and Uganda dioch. ### Phylogeny Based on recent DNA analysis, the red-billed quelea is the sister group of a clade that contains both other remaining species of the genus Quelea, namely the cardinal quelea (Q. 
cardinalis) and the red-headed quelea (Q. erythrops). The genus belongs to the group of true weavers (subfamily Ploceinae), and is most closely related to the fodies (Foudia), a genus of six or seven species that occur on the islands of the western Indian Ocean. These two genera are in turn the sister clade to the Asian species of the genus Ploceus. The following tree represents current insight of the relationships between the species of Quelea, and their closest relatives. Interbreeding between red-billed and red-headed queleas has been observed in captivity. ## Description The red-billed quelea is a small sparrow-like bird, approximately 12 cm (4.7 in) long and weighing 15–26 g (0.53–0.92 oz), with a heavy, cone-shaped bill, which is red (in females outside the breeding season and males) or orange to yellow (females during the breeding season). Over 75% of males have a black facial "mask", comprising a black forehead, cheeks, lores and higher parts of the throat. Occasionally males have a white mask. The mask is surrounded by a variable band of yellow, rusty, pink or purple. White masks are sometimes bordered by black. This colouring may only reach the lower throat or extend along the belly, with the rest of the underparts light brown or whitish with some dark stripes. The upperparts have light and dark brown longitudinal stripes, particularly at midlength, and are paler on the rump. The tail and upper wing are dark brown. The flight feathers are edged greenish or yellow. The eye has a narrow naked red ring and a brown iris. The legs are orangey in colour. The bill is bright raspberry red. Outside the breeding season, the male lacks bright colours; it has a grey-brown head with dark streaks, whitish chin and throat, and a faint light stripe above the eyes. At this time, the bill becomes pink or dull red and the legs turn flesh-coloured. The females resemble the males in non-breeding plumage, but have a yellow or orangey bill and eye-ring during the breeding season. At other times, the female bill is pink or dull red. Newborns have white bills and are almost naked with some wisps of down on the top of the head and the shoulders. The eyes open during the fourth day, at the same time as the first feathers appear. Older nestlings have a horn-coloured bill with a hint of lavender, though it turns orange-purple before the post-juvenile moult. Young birds change feathers two to three months after hatching, after which the plumage resembles that of non-breeding adults, although the head is grey, the cheeks whitish, and wing coverts and flight feathers have buff margins. At an age of about five months they moult again and their plumage starts to look like that of breeding adults, with a pinkish-purple bill. Different subspecies are distinguished by different colour patterns of the male breeding plumage. In the typical subspecies, Q. quelea quelea, breeding males have a buff crown, nape and underparts and the black mask extends high up the forehead. In Q. quelea lathamii the mask also extends high up the forehead, but the underparts are mainly white. In Q. quelea aethiopica the mask does not extend far above the bill, and the underparts may have a pink wash. There is much variability within subspecies, and some birds cannot be ascribed to a subspecies based on outward appearance alone. Because of interbreeding, specimens intermediate between subspecies may occur where the ranges of the subspecies overlap, such as at Lake Chad. 
The female pin-tailed whydah could be mistaken for the red-billed quelea in non-breeding plumage, since both are sparrow-like birds with conical red-coloured bills, but the whydah has a whitish brow between a black stripe through the eye and a black stripe above. ### Sound Flying flocks make a distinct sound due to the many wing beats. After arriving at the roost or nest site, birds keep moving around and make a lot of noise for about half an hour before settling in. Both males and females call. The male sings in short bursts, starting with some chatter, followed by a warbling tweedle-toodle-tweedle. ## Distribution and habitat The red-billed quelea is mostly found in tropical and subtropical areas with a seasonal semi-arid climate, resulting in dry thornbush grassland, including the Sahel, and its distribution covers most of sub-Saharan Africa. It avoids forests, however, including miombo woodlands and rainforests such as those in central Africa, and is generally absent from western parts of South Africa and arid coastal regions of Namibia and Angola. It was introduced to the island of Réunion in 2000. Occasionally, it can be found as high as 3,000 m (9,800 ft) above sea-level, but mostly resides below 1,500 m (4,900 ft). It visits agricultural areas, where it feeds on cereal crops, although it is thought to prefer seeds of wild annual grasses. It needs to drink daily and can only be found within about 30 km (19 mi) distance of the nearest body of water. It is found in wet habitats, congregating at the shores of waterbodies, such as Lake Ngami, during flooding. It needs shrubs, reeds or trees to nest and roost. Red-billed queleas migrate seasonally over long distances in anticipation of the availability of their main natural food source, seeds of annual grasses. The presence of these grass seeds is the result of the beginning of rains weeks earlier, and the rainfall varies in a seasonal geographic pattern. The temporarily wet areas do not form a single zone that periodically moves back and forth across the entirety of Sub-Saharan Africa, but rather consist of five or six regions, within which the wet areas "move" or "jump". Red-billed quelea populations thus migrate between the temporarily wet areas within each of these five to six geographical regions. Each of the subspecies, as distinguished by different male breeding plumage, is confined to one or more of these geographical regions. In Nigeria, the nominate subspecies generally travels 300–600 km (190–370 mi) southwards during the start of the rains in the north during June and July, when the grass seed germinates, and is no longer eaten by the queleas. When they reach the Benoue River valley, for instance, the rainy season has already passed and the grass has produced new seeds. After about six weeks, the birds migrate northwards to find a suitable breeding area, nurture a generation, and then repeat this sequence moving further north. Some populations may also move northwards when the rains have started, to eat the remaining ungerminated seeds. In Senegal migration is probably between the southeast and the northwest. In eastern Africa, the subspecies aethiopica is thought to consist of two sub-populations. One moves from Central Tanzania to southern Somalia, to return to breed in Tanzania in February and March, followed by successive migrations to breed ever further north, the season's last usually occurring in central Kenya during May. 
The second group moves from northern and central Sudan and central Ethiopia in May and June, to breed in southern Sudan, South Sudan, southern Ethiopia and northern Kenya, moving back north from August to October. In southern Africa, the total population of the subspecies Q. quelea lathamii converges on the Zimbabwean Highveld in October. In November, part of the population migrates northwest to northwestern Angola, while the remainder migrates southeast to southern Mozambique and eastern South Africa, but no proof has been found that these migration cohorts are genetically or morphologically divergent.

## Ecology and behaviour

The red-billed quelea is regarded as the most numerous undomesticated bird on earth, with the total post-breeding population sometimes peaking at an estimated 1.5 billion individuals. The species specialises in feeding on seeds of annual grass species, which may be ripe, or still green, but have not germinated yet. Since the availability of these seeds varies with time and space, occurring in particular weeks after the local onset of rains, queleas migrate as a strategy to ensure year-round food availability. Consumption of a large amount of energy-rich food is needed for the queleas to gain enough fat to allow migration to new feeding areas. When breeding, the species selects areas such as lowveld with thorny or spiny vegetation—typically Acacia species—below 1,000 m (3,300 ft) elevation. While foraging for food, they may fly 50–65 km (31–40 mi) each day and return to the roosting or nesting site in the evening. Small groups of red-billed queleas often mix with different weaver birds (Ploceus) and bishops (Euplectes), and in western Africa they may join the Sudan golden sparrow (Passer luteus) and various estrildids. Red-billed queleas may also roost together with weavers, estrildids and barn swallows. Their life expectancy is two to three years in the wild, but one captive bird lived for eighteen years.

### Breeding

The red-billed quelea needs 300–800 mm (12–31 in) of precipitation to breed, with nest building usually commencing four to nine weeks after the onset of the rains. Nests are usually built in stands of thorny trees such as umbrella thorn acacia (Vachellia tortilis), blackthorn (Senegalia mellifera) and sicklebush (Dichrostachys cinerea), but sometimes in sugar cane fields or reeds. Colonies can consist of millions of nests, in densities of 30,000 per ha (12,000 per acre). Over 6,000 nests have been counted in a single tree. At Malilangwe in Zimbabwe one colony was 20 km (12 mi) long and 1 km (0.6 mi) wide. In southern Africa, suitable branches are stripped of leaves a few days in advance of the onset of nest construction. The male starts the nest by creating a ring of grass by twining strips around both branches of a hanging forked twig, and from there bridging the gaps in the circle his beak can reach, having one foot on each of the branchlets, using the same footholds and the same orientation throughout the building process. Two parallel stems of reeds or sugar cane can also be used to attach the nest to. They use both their bills and feet in adding the initial knots needed. As soon as the ring is finished the male displays, trying to attract a female, after which the nest may be completed in two days. The nest chamber is created in front of the ring. The entrance may be constructed after the egg laying has started, while the male works from the outside.
A finished nest looks like a small oval or globular ball of grass, around 18 cm (7 in) high and 16 cm (6 in) wide, with a 2.5 cm (1 in) wide entrance high up one side, sheltered by a shallow awning. About six to seven hundred fresh, green grass strips are used for each nest. This species may nest several times per year when conditions are favourable. In the breeding season, males are diversely coloured. These differences in plumage do not signal condition, probably serving instead for the recognition of individual birds. However, the intensity of the red on the bills is regarded an indicator of the animal's quality and social dominance. Red-billed quelea males mate with one female only within one breeding cycle. There are usually three eggs in each clutch (though the full range is one to five) of approximately 18 mm (0.71 in) long and 13 mm (0.51 in) in diameter. The eggs are light bluish or greenish in colour, sometimes with some dark spots. Some clutches contain six eggs, but large clutches may be the result of other females dumping an egg in a stranger's nest. Both sexes share the incubation of the eggs during the day, but the female alone does so during the cool night, and feeds during the day when air temperatures are high enough to sustain the development of the embryo. The breeding cycle of the red-billed quelea is one of the shortest known in any bird. Incubation takes nine or ten days. After the chicks hatch, they are fed for some days with protein-rich insects. Later the nestlings mainly get seeds. The young birds fledge after about two weeks in the nest. They are sexually mature in one year. ### Feeding Flocks of red-billed queleas usually feed on the ground, with birds in the rear constantly leap-frogging those in the front to exploit the next strip of fallen seeds. This behaviour creates the impression of a rolling cloud, and enables efficient exploitation of the available food. The birds also take seeds from the grass ears directly. They prefer grains of 1–2 mm (0.04–0.08 in) in size. Red-billed queleas feed mainly on grass seeds, which includes a large number of annual species from the genera Echinochloa, Panicum, Setaria, Sorghum, Tetrapogon and Urochloa. One survey at Lake Chad showed that two-thirds of the seeds eaten belonged to only three species: African wild rice (Oryza barthii), Sorghum purpureosericeum and jungle rice (Echinochloa colona). When the supply of these seeds runs out, seeds of cereals such as barley (Hordeum disticum), teff (Eragrostis tef), sorghum (Sorghum bicolor), manna (Setaria italica), millet (Panicum miliaceum), rice (Oryza sativa), wheat (Triticum), oats (Avena aestiva), as well as buckwheat (Phagopyrum esculentum) and sunflower (Helianthus annuus) are eaten on a large scale. Red-billed queleas have also been observed feeding on crushed corn from cattle feedlots, but entire maize kernels are too big for them to swallow. A single bird may eat about 15 g (0.53 oz) in seeds each day. As much as half of the diet of nestlings consists of insects, such as grasshoppers, ants, beetles, bugs, caterpillars, flies and termites, as well as snails and spiders. Insects are generally eaten during the breeding season, though winged termites are eaten at other times. Breeding females consume snail-shell fragments and calcareous grit, presumably to enable egg-shell formation. 
One colony in Namibia, of an estimated five million adults and five million chicks, was calculated to consume roughly 13 t (29,000 lb) of insects and 1,000 t (2,200,000 lb) of grass seeds during its breeding cycle. At sunrise they form flocks that co-operate to find food. After a successful search, they settle to feed. In the heat of the day, they rest in the shade, preferably near water, and preen. Birds seem to prefer drinking at least twice a day. In the evening, they once again fly off in search of food. ### Predators and parasites Natural enemies of the red-billed quelea include other birds, snakes, warthogs, squirrels, galagos, monkeys, mongooses, genets, civets, foxes, jackals, hyaenas, cats, lions and leopards. Bird species that prey on queleas include the lanner falcon, tawny eagle and marabou stork. The diederik cuckoo is a brood parasite that probably lays eggs in nests of queleas. Some predators, such as snakes, raid nests and eat eggs and chicks. Nile crocodiles sometimes attack drinking queleas, and an individual in Ethiopia hit birds out of the vegetation on the bank into the water with its tail, subsequently eating them. Queleas drinking at a waterhole were grabbed from below by African helmeted turtles in Etosha. Among the invertebrates that kill and eat youngsters are the armoured bush cricket (Acanthoplus discoidalis) and the scorpion Cheloctonus jonesii. Internal parasites found in queleas include Haemoproteus and Plasmodium. ## Interactions with humans The red-billed quelea is caught and eaten in many parts of Africa. Around Lake Chad, three traditional methods are used to catch red-billed queleas. Trappers belonging to the Hadjerai tribe use triangular hand-held nets, which are both selective and efficient. Each team of six trappers caught about twenty thousand birds each night. An estimated five to ten million queleas are trapped near N'Djamena each year, representing a market value of approximately US\$37,500–75,000. Between 13 June and 21 August 1994 alone, 1.2 million queleas were caught. Birds were taken from roosts in the trees during the moonless period each night. The feathers were plucked and the carcasses fried the following morning, dried in the sun, and transported to the city to be sold on the market. The Sara people use standing fishing nets with a very fine mesh, while Masa and Musgum fishermen cast nets over groups of birds. The impact of hunting on the quelea population (about 200 million individuals in the Lake Chad Basin) is deemed insignificant. Woven traps made from star grass (Cynodon nlemfuensis) are used to catch hundreds of these birds daily in the Kondoa District, Tanzania. Guano is collected from under large roosts in Nigeria and used as a fertiliser. Tourists like to watch the large flocks of queleas, such as during visits of the Kruger National Park. The birds themselves eat pest insects such as migratory locusts, and the moth species Helicoverpa armigera and Spodoptera exempta. The animal's large distribution and population resulted in a conservation status listed as least concern on the IUCN Red List. ### Aviculture The red-billed quelea is sometimes kept and bred in captivity by hobbyists. It thrives if kept in large and high cages, with space to fly to minimise the risk of obesity. A sociable bird, the red-billed quelea tolerates mixed-species aviaries. Keeping many individuals mimics its natural occurrence in large flocks. This species withstands frosts, but requires shelter from rain and wind. 
Affixing hanging branches, such as hawthorn, in the cage facilitates nesting. Adults are typically given a diet of tropical seeds enriched with grass seeds, augmented during the breeding season by live insects such as mealworms and spiders, or by boiled shredded egg. Fine stone grit and calcium sources, such as shell grit and cuttlebone, provide nutrients as well. If provided with material such as fresh grass or coconut fibre, they can be bred.

### Pest management

Sometimes called "Africa's feathered locust", the red-billed quelea is considered a serious agricultural pest in Sub-Saharan Africa. The governments of Botswana, Ethiopia, Kenya, South Africa, Sudan, Tanzania, and Zimbabwe have regularly made attempts to lessen quelea populations. The most common method of killing members of problematic flocks was spraying the organophosphate avicide fenthion from the air onto breeding colonies and roosts. In Botswana and Zimbabwe, spraying was also executed from ground vehicles and manually. Kenya and South Africa regularly used fire-bombs. Attempts during the 1950s and '60s to eradicate populations, at least regionally, failed. Consequently, management is at present directed at removing those congregations that are likely to attack vulnerable fields. In eastern and southern Africa, the control of quelea is often coordinated by the Desert Locust Control Organization for Eastern Africa (DLCO-EA) and the International Red Locust Control Organization for Central and Southern Africa (IRLCO-CSA), which make their aircraft available for this purpose.
pageid: 157,626
title: Peregrine falcon
revid: 1,173,843,242
description: Widely distributed bird of prey
categories: [ "Birds described in 1771", "Birds of Asia", "Birds of Europe", "Birds of North Africa", "Birds of North America", "Birds of South America", "Birds of prey of Africa", "Birds of the Dominican Republic", "Cosmopolitan birds", "Diurnal raptors of Australia", "Falco (genus)", "Falconry", "Native birds of the Rocky Mountains", "Taxa named by Marmaduke Tunstall" ]
The peregrine falcon (Falco peregrinus), also known simply as the peregrine, and historically as the duck hawk in North America, is a cosmopolitan bird of prey (raptor) in the family Falconidae. A large, crow-sized falcon, it has a blue-grey back, barred white underparts, and a black head. The peregrine is renowned for its speed. It can reach over 320 km/h (200 mph) during its characteristic hunting stoop (high-speed dive), making it the fastest member of the animal kingdom. According to a National Geographic TV program, the highest measured speed of a peregrine falcon is 389 km/h (242 mph). As is typical for bird-eating (avivore) raptors, peregrine falcons are sexually dimorphic, with females being considerably larger than males. The peregrine's breeding range includes land regions from the Arctic tundra to the tropics. It can be found nearly everywhere on Earth, except extreme polar regions, very high mountains, and most tropical rainforests; the only major ice-free landmass from which it is entirely absent is New Zealand. This makes it the world's most widespread raptor and one of the most widely found bird species. In fact, the only land-based bird species found over a larger geographic area is not always naturally occurring, but one widely introduced by humans, the rock pigeon, which in turn now supports many peregrine populations as a prey species. The peregrine is a highly successful example of urban wildlife in much of its range, taking advantage of tall buildings as nest sites and an abundance of prey such as pigeons and ducks. Both the English and scientific names of this species mean "wandering falcon", referring to the migratory habits of many northern populations. Experts recognize 17 to 19 subspecies, which vary in appearance and range; disagreement exists over whether the distinctive Barbary falcon is represented by two subspecies of Falco peregrinus or is a separate species, F. pelegrinoides. The two species' divergence is relatively recent, during the time of the last ice age, therefore the genetic differential between them (and also the difference in their appearance) is relatively tiny. They are only about 0.6–0.8% genetically differentiated. Although its diet consists almost exclusively of medium-sized birds, the peregrine will sometimes hunt small mammals, small reptiles, or even insects. Reaching sexual maturity at one year, it mates for life and nests in a scrape, normally on cliff edges or, in recent times, on tall human-made structures. The peregrine falcon became an endangered species in many areas because of the widespread use of certain pesticides, especially DDT. Since the ban on DDT from the early 1970s, populations have recovered, supported by large-scale protection of nesting places and releases to the wild. The peregrine falcon is a well-respected falconry bird due to its strong hunting ability, high trainability, versatility, and availability via captive breeding. It is effective on most game bird species, from small to large. It has also been used as a religious, royal, or national symbol across multiple eras and areas of human civilization. ## Description The peregrine falcon has a body length of 34 to 58 cm (13–23 in) and a wingspan from 74 to 120 cm (29–47 in). The male and female have similar markings and plumage but, as with many birds of prey, the peregrine falcon displays marked sexual dimorphism in size, with the female measuring up to 30% larger than the male. 
Males weigh 330 to 1,000 g (12–35 oz) and the noticeably larger females weigh 700 to 1,500 g (25–53 oz). In most subspecies, males weigh less than 700 g (25 oz) and females weigh more than 800 g (28 oz), and cases of females weighing about 50% more than their male breeding mates are not uncommon. The standard linear measurements of peregrines are: the wing chord measures 26.5 to 39 cm (10.4–15.4 in), the tail measures 13 to 19 cm (5.1–7.5 in) and the tarsus measures 4.5 to 5.6 cm (1.8–2.2 in). The back and the long pointed wings of the adult are usually bluish black to slate grey with indistinct darker barring (see "Subspecies" below); the wingtips are black. The white to rusty underparts are barred with thin clean bands of dark brown or black. The tail, coloured like the back but with thin clean bars, is long, narrow, and rounded at the end with a black tip and a white band at the very end. The top of the head and a "moustache" along the cheeks are black, contrasting sharply with the pale sides of the neck and white throat. The cere is yellow, as are the feet, and the beak and claws are black. The upper beak is notched near the tip, an adaptation which enables falcons to kill prey by severing the spinal column at the neck. An immature bird is much browner, with streaked, rather than barred, underparts, and has a pale bluish cere and orbital ring. A study suggests that the black malar stripe reduces glare from solar radiation, allowing the birds to see better; photos from The Macaulay Library and iNaturalist showed that the malar stripe is thicker where there is more solar radiation, supporting this solar glare hypothesis.

## Taxonomy and systematics

Falco peregrinus was first described under its current binomial name by English ornithologist Marmaduke Tunstall in his 1771 work Ornithologia Britannica. The scientific name Falco peregrinus is a Medieval Latin phrase that was used by Albertus Magnus in 1225. The specific name reflects the fact that juvenile birds were captured while journeying to their breeding location rather than taken from the nest, as falcon nests were difficult to reach. The Latin term for falcon, falco, is related to falx, meaning "sickle", in reference to the silhouette of the falcon's long, pointed wings in flight. The peregrine falcon belongs to a genus whose lineage includes the hierofalcons and the prairie falcon (F. mexicanus). This lineage probably diverged from other falcons towards the end of the Late Miocene or in the Early Pliocene, about 5–8 million years ago (mya). As the peregrine-hierofalcon group includes both Old World and North American species, it is likely that the lineage originated in western Eurasia or Africa. Its relationship to other falcons is not clear, as the issue is complicated by widespread hybridization confounding mtDNA sequence analyses. One genetic lineage of the saker falcon (F. cherrug) is known to have originated from a male saker ancestor producing fertile young with a female peregrine ancestor, and the descendants further breeding with sakers. Today, peregrines are regularly paired in captivity with other species such as the lanner falcon (F. biarmicus) to produce the "perilanner", a somewhat popular bird in falconry as it combines the peregrine's hunting skill with the lanner's hardiness, or the gyrfalcon to produce large, strikingly coloured birds for the use of falconers.
The peregrine is still genetically close to the hierofalcons, though their lineages diverged in the Late Pliocene (maybe some 2.5–2 mya in the Gelasian).

### Subspecies

Numerous subspecies of Falco peregrinus have been described, with 19 accepted by the 1994 Handbook of the Birds of the World, which considers the Barbary falcon of the Canary Islands and coastal North Africa to be two subspecies (pelegrinoides and babylonicus) of Falco peregrinus, rather than a distinct species, F. pelegrinoides. The 19 subspecies are listed below.

- Falco peregrinus anatum, described by Bonaparte in 1838, is known as the American peregrine falcon or "duck hawk"; its scientific name means "duck peregrine falcon". At one time, it was partly included in leucogenys. It is mainly found in the Rocky Mountains. It was formerly common throughout North America between the tundra and northern Mexico, where current reintroduction efforts are being made to restore the population. Most mature anatum, except those that breed in more northern areas, winter in their breeding range. Most vagrants that reach western Europe seem to belong to the more northern and strongly migratory tundrius, only considered distinct since 1968. It is similar to the nominate subspecies but is slightly smaller; adults are somewhat paler and less patterned below, but juveniles are darker and more patterned below. Males weigh 500 to 700 g (1.1–1.5 lb), while females weigh 800 to 1,100 g (1.8–2.4 lb). It has become extinct in eastern North America and populations there are hybrids as a result of reintroductions of birds from elsewhere.
- Falco peregrinus babylonicus, described by P.L. Sclater in 1861, is found in eastern Iran along the Hindu Kush and the Tian Shan to the Mongolian Altai ranges. A few birds winter in northern and northwestern India, mainly in dry semi-desert habitats. It is paler than pelegrinoides and somewhat similar to a small, pale lanner falcon (Falco biarmicus). Males weigh 330 to 400 grams (12 to 14 oz), while females weigh 513 to 765 grams (18.1 to 27.0 oz).
- Falco peregrinus brookei, described by Sharpe in 1873, is also known as the Mediterranean peregrine falcon or the Maltese falcon. It includes caucasicus and most specimens of the proposed race punicus, though others may be pelegrinoides (Barbary falcons), or perhaps the rare hybrids between these two which might occur around Algeria. They occur from the Iberian Peninsula around the Mediterranean, except in arid regions, to the Caucasus. They are non-migratory. It is smaller than the nominate subspecies and the underside usually has a rusty hue. Males weigh around 445 g (0.981 lb), while females weigh up to 920 g (2.03 lb).
- Falco peregrinus calidus, described by John Latham in 1790, was formerly called leucogenys and includes caeruleiceps. It breeds in the Arctic tundra of Eurasia from Murmansk Oblast to roughly the Yana and Indigirka Rivers, Siberia. It is completely migratory and travels south in winter as far as South Asia and sub-Saharan Africa. It is often seen around wetland habitats. It is paler than the nominate subspecies, especially on the crown. Males weigh 588 to 740 g (1.296–1.631 lb), while females weigh 925 to 1,333 g (2.039–2.939 lb).
- Falco peregrinus cassini, described by Sharpe in 1873, is also known as the austral peregrine falcon. It includes kreyenborgi, the pallid falcon, a leucistic colour morph occurring in southernmost South America, which was long believed to be a distinct species.
Its range includes South America from Ecuador through Bolivia, northern Argentina and Chile to Tierra del Fuego and the Falkland Islands. It is non-migratory. It is similar to the nominate subspecies, but slightly smaller with a black ear region. The pallid falcon morph kreyenborgi is medium grey above, has little barring below and has a head pattern like the saker falcon (Falco cherrug), but the ear region is white.
- Falco peregrinus ernesti, described by Sharpe in 1894, is found from the Sunda Islands to the Philippines and south to eastern New Guinea and the nearby Bismarck Archipelago. Its geographical separation from nesiotes requires confirmation. It is non-migratory. It differs from the nominate subspecies in the very dark, dense barring on its underside and its black ear coverts.
- Falco peregrinus furuitii, described by Momiyama in 1927, is found on the Izu and Ogasawara Islands south of Honshū, Japan. It is non-migratory. It is very rare and may only remain on a single island. It is a dark form, resembling pealei in colour, but darker, especially on the tail.
- Falco peregrinus japonensis, described by Gmelin in 1788, includes kleinschmidti, pleskei, and harterti, and seems to refer to intergrades with calidus. It is found from northeast Siberia to Kamchatka (though it is possibly replaced by pealei on the coast there) and Japan. Northern populations are migratory, while those of Japan are resident. It is similar to the nominate subspecies, but the young are even darker than those of anatum.
- Falco peregrinus macropus, described by Swainson in 1837, is the Australian peregrine falcon. It is found in Australia in all regions except the southwest. It is non-migratory. It is similar to brookei in appearance, but is slightly smaller and the ear region is entirely black. The feet are proportionally large.
- Falco peregrinus madens, described by Ripley and Watson in 1963, is unusual in having some sexual dichromatism. If the Barbary falcon (see below) is considered a distinct species, it is sometimes placed therein. It is found in the Cape Verde Islands and is non-migratory; it is also endangered, with only six to eight pairs surviving. Males have a rufous wash on the crown, nape, ears and back; the underside is conspicuously washed pinkish-brown. Females are tinged rich brown overall, especially on the crown and nape.
- Falco peregrinus minor, first described by Bonaparte in 1850, was formerly often known as perconfusus. It is sparsely and patchily distributed throughout much of sub-Saharan Africa and widespread in Southern Africa. It apparently reaches north along the Atlantic coast as far as Morocco. It is non-migratory and dark-coloured. This is the smallest subspecies, with smaller males weighing as little as approximately 300 g (11 oz).
- Falco peregrinus nesiotes, described by Mayr in 1941, is found in Fiji and probably also Vanuatu and New Caledonia. It is non-migratory.
- Falco peregrinus pealei, described by Ridgway in 1873, is Peale's falcon and includes rudolfi. It is found in the Pacific Northwest of North America, northwards from Puget Sound along the British Columbia coast (including the Haida Gwaii), along the Gulf of Alaska and the Aleutian Islands to the far eastern Bering Sea coast of Russia, and may also occur on the Kuril Islands and the coasts of Kamchatka. It is non-migratory. It is the largest subspecies and it looks like an oversized and darker tundrius or like a strongly barred and large anatum. The bill is very wide. Juveniles occasionally have pale crowns.
Males weigh 700 to 1,000 g (1.5–2.2 lb), while females weigh 1,000 to 1,500 g (2.2–3.3 lb).
- Falco peregrinus pelegrinoides, first described by Temminck in 1829, is found in the Canary Islands through North Africa and the Near East to Mesopotamia. It is most similar to brookei, but is markedly paler above, with a rusty neck, and is a light buff with reduced barring below. It is smaller than the nominate subspecies; females weigh around 610 g (1.34 lb).
- Falco peregrinus peregrinator, described by Sundevall in 1837, is known as the Indian peregrine falcon, black shaheen, Indian shaheen or shaheen falcon. It was formerly sometimes known as Falco atriceps or Falco shaheen. Its range includes South Asia from across the Indian subcontinent to Sri Lanka and southeastern China. In India, the shaheen falcon is reported from all states except Uttar Pradesh, mainly from rocky and hilly regions. The shaheen falcon is also reported from the Andaman and Nicobar Islands in the Bay of Bengal. It has a clutch size of 3 to 4 eggs; the chicks fledge after about 48 days, with an average nesting success of 1.32 chicks per nest. In India, apart from nesting on cliffs, it has also been recorded as nesting on man-made structures such as buildings and cellphone transmission towers. A population estimate of 40 breeding pairs in Sri Lanka was made in 1996. It is non-migratory and is small and dark, with rufous underparts. In Sri Lanka this species is found to favour the higher hills, while the migrant calidus is more often seen along the coast.
- Falco peregrinus peregrinus, the nominate (first-named) subspecies, described by Tunstall in 1771, breeds over much of temperate Eurasia between the tundra in the north and the Pyrenees, Mediterranean region and Alpide belt in the south. It is mainly non-migratory in Europe, but migratory in Scandinavia and Asia. Males weigh 580 to 750 g (1.28–1.65 lb), while females weigh 925 to 1,300 g (2.039–2.866 lb). It includes brevirostris, germanicus, rhenanus and riphaeus.
- Falco peregrinus radama, described by Hartlaub in 1861, is found in Madagascar and the Comoros. It is non-migratory.
- Falco peregrinus submelanogenys, described by Mathews in 1912, is the Southwest Australian peregrine falcon. It is found in southwestern Australia and is non-migratory.
- Falco peregrinus tundrius, described by C.M. White in 1968, was at one time included in leucogenys. It is found in the Arctic tundra of North America to Greenland, and migrates to wintering grounds in Central and South America. Most vagrants that reach western Europe belong to this subspecies, which was previously considered synonymous with anatum. It is the New World equivalent to calidus. It is smaller and paler than anatum; most have a conspicuous white forehead and white in the ear region, but the crown and "moustache" are very dark, unlike in calidus. Juveniles are browner and less grey than in calidus and paler, sometimes almost sandy, than in anatum. Males weigh 500 to 700 g (1.1–1.5 lb), while females weigh 800 to 1,100 g (1.8–2.4 lb). Despite its current recognition as a valid subspecies, a population genetic study of both pre-decline (i.e., museum) and recovered contemporary populations failed to distinguish the anatum and tundrius subspecies genetically.

### Barbary falcon

The Barbary falcon is a subspecies of the peregrine falcon that inhabits parts of North Africa, from the Canary Islands to the Arabian Peninsula.
There is discussion concerning the taxonomic status of the bird, with some considering it a subspecies of the peregrine falcon and others considering it a full species with two subspecies (White et al. 2013). Compared to the other peregrine falcon subspecies, Barbary falcons sport a slimmer body and a distinct plumage color pattern. Although the numbers and range of these birds throughout the Canary Islands are generally increasing, they are considered endangered, with human interference through falconry and shooting threatening their well-being. Falconry can further complicate the speciation and genetics of these Canary Islands falcons, as the practice promotes genetic mixing between individuals from outside the islands and those originating from the islands. The population density of Barbary falcons on Tenerife, the biggest of the seven major Canary Islands, was found to be 1.27 pairs/100 km², with the mean distance between pairs being 5,869 ± 3,338 m. The falcons were observed only near large natural cliffs with a mean altitude of 697.6 m, showing an affinity for tall cliffs away from human settlements and activity. Barbary falcons have a red neck patch, but otherwise differ in appearance from the peregrine falcon proper merely according to Gloger's rule, relating pigmentation to environmental humidity. The Barbary falcon has a peculiar way of flying, beating only the outer part of its wings like fulmars sometimes do; this also occurs in the peregrine falcon, but less often and far less pronounced. The Barbary falcon's shoulder and pelvis bones are stout by comparison with the peregrine falcon and its feet are smaller. Barbary falcons breed at different times of year than neighboring peregrine falcon subspecies, but they are capable of interbreeding. There is a 0.6–0.7% genetic distance in the peregrine falcon-Barbary falcon ("peregrinoid") complex. ## Ecology and behaviour The peregrine falcon lives mostly along mountain ranges, river valleys, coastlines, and increasingly in cities. In mild-winter regions, it is usually a permanent resident, and some individuals, especially adult males, will remain on the breeding territory. Only populations that breed in Arctic climates typically migrate great distances during the northern winter. The peregrine falcon reaches faster speeds than any other animal on the planet when performing the stoop, which involves soaring to a great height and then diving steeply at speeds of over 320 km/h (200 mph), hitting one wing of its prey so as not to harm itself on impact. The air pressure from such a dive could possibly damage a bird's lungs, but small bony tubercles on a falcon's nostrils are theorized to guide the powerful airflow away from the nostrils, enabling the bird to breathe more easily while diving by reducing the change in air pressure. To protect their eyes, the falcons use their nictitating membranes (third eyelids) to spread tears and clear debris from their eyes while maintaining vision. The distinctive malar stripe or 'moustache', a dark area of feathers below the eyes, is thought to reduce solar glare and improve contrast sensitivity when targeting fast-moving prey in bright light conditions; the malar stripe has been found to be wider and more pronounced in regions of the world with greater solar radiation, supporting this solar glare hypothesis. Peregrine falcons have a flicker fusion frequency of 129 Hz (cycles per second), very fast for a bird of its size, and much faster than that of mammals. 
A study testing the flight physics of an "ideal falcon" found a theoretical speed limit of 400 km/h (250 mph) for low-altitude flight and 625 km/h (388 mph) for high-altitude flight. In 2005, Ken Franklin recorded a falcon stooping at a top speed of 389 km/h (242 mph). The life span of peregrine falcons in the wild is up to 19 years 9 months. Mortality in the first year is 59–70%, declining to 25–32% annually in adults. Apart from such anthropogenic threats as collision with human-made objects, the peregrine may be killed by larger hawks and owls. The peregrine falcon is host to a range of parasites and pathogens. It is a vector for Avipoxvirus, Newcastle disease virus, Falconid herpesvirus 1 (and possibly other Herpesviridae), and some mycoses and bacterial infections. Endoparasites include Plasmodium relictum (usually not causing malaria in the peregrine falcon), Strigeidae trematodes, Serratospiculum amaculata (nematode), and tapeworms. Known peregrine falcon ectoparasites are chewing lice, Ceratophyllus garei (a flea), and Hippoboscidae flies (Icosta nigra, Ornithoctona erythrocephala). In the Arctic, peregrine falcons chase predators of small rodents away from their nesting territory, and rough-legged buzzards (Buteo lagopus) may use these protected hot spots as a nesting territory. ### Feeding The peregrine falcon's diet varies greatly and is adapted to available prey in different regions. However, it feeds almost exclusively on medium-sized birds such as pigeons and doves, waterfowl, gamebirds, songbirds, parrots, seabirds, and waders. Worldwide, it is estimated that between 1,500 and 2,000 bird species, or roughly a fifth of the world's bird species, are predated somewhere by these falcons. The peregrine falcon preys on the most diverse range of bird species of any raptor in North America, with over 300 species taken, including nearly 100 shorebirds. Its prey can range from 3 g (0.11 oz) hummingbirds (Selasphorus and Archilochus ssp.) to the 3.1 kg (6.8 lb) sandhill crane, although most prey taken by peregrines weigh between 20 g (0.71 oz) (small passerines) and 1,100 g (2.4 lb) (ducks, geese, loons, gulls, capercaillies, ptarmigans and other grouse). Smaller hawks (such as sharp-shinned hawks) and owls are regularly predated, as well as smaller falcons such as the American kestrel, merlin and, rarely, other peregrines. In urban areas, where it tends to nest on tall buildings or bridges, it subsists mostly on a variety of pigeons. Among pigeons, the rock or feral pigeon comprises 80% or more of the dietary intake of peregrines. Other common city birds are also taken regularly, including mourning doves, common wood pigeons, common swifts, northern flickers, common starlings, American robins, common blackbirds, and corvids such as magpies, jays or carrion, house, and American crows. Coastal populations of the large subspecies pealei feed almost exclusively on seabirds. In the Brazilian mangrove swamp of Cubatão, a wintering falcon of the subspecies tundrius was observed successfully hunting a juvenile scarlet ibis. Among mammalian prey species, bats in the genera Eptesicus, Myotis, Pipistrellus and Tadarida are the most common prey, typically taken at night. Though peregrines generally do not prefer terrestrial mammalian prey, in Rankin Inlet, peregrines largely take northern collared lemmings (Dicrostonyx groenlandicus) along with a few Arctic ground squirrels (Urocitellus parryii). Other small mammals, including shrews, mice, rats, voles, and squirrels, are taken more seldom. 
Peregrines occasionally take rabbits, mainly young individuals, and juvenile hares. Additionally, remains of red fox kits and adult female American marten were found among prey remains. Insects and reptiles such as small snakes make up a small proportion of the diet, and salmonid fish have been taken by peregrines. The peregrine falcon hunts most often at dawn and dusk, when prey are most active, but also nocturnally in cities, particularly during migration periods when hunting at night may become prevalent. Nocturnal migrants taken by peregrines include species as diverse as yellow-billed cuckoo, black-necked grebe, Virginia rail, and common quail. The peregrine requires open space in order to hunt, and therefore often hunts over open water, marshes, valleys, fields, and tundra, searching for prey either from a high perch or from the air. Large congregations of migrants, especially species that gather in the open like shorebirds, can be quite attractive to hunting peregrines. Once prey is spotted, the falcon begins its stoop, folding back the tail and wings, with feet tucked. Prey is typically struck and captured in mid-air; the peregrine falcon strikes its prey with a clenched foot, stunning or killing it with the impact, then turns to catch it in mid-air. If its prey is too heavy to carry, a peregrine will drop it to the ground and eat it there. If they miss the initial strike, peregrines will chase their prey in a twisting flight. Although previously thought rare, several cases of peregrines contour-hunting, i.e. using natural contours to surprise and ambush prey on the ground, have been reported, as have rare cases of prey being pursued on foot. In addition, peregrines have been documented preying on chicks in the nests of birds such as kittiwakes. Prey is plucked before consumption. A recent study showed that the presence of peregrines benefits non-preferred species while causing a decline in their preferred prey. As of 2018, the fastest recorded falcon had been clocked at 242 mph (nearly 390 km/h). Researchers at the University of Groningen in the Netherlands and at Oxford University used 3D computer simulations in 2018 to show that the high speed allows peregrines to gain better maneuverability and precision in strikes. ### Reproduction The peregrine falcon is sexually mature at one to three years of age, but in larger populations it breeds at two to three years of age. A pair mates for life and returns to the same nesting spot annually. The courtship flight includes a mix of aerial acrobatics, precise spirals, and steep dives. The male passes prey it has caught to the female in mid-air. To make this possible, the female flies upside-down to receive the food from the male's talons. During the breeding season, the peregrine falcon is territorial; nesting pairs are usually more than 1 km (0.62 mi) apart, and often much farther, even in areas with large numbers of pairs. The distance between nests ensures a sufficient food supply for pairs and their chicks. Within a breeding territory, a pair may have several nesting ledges; the number used by a pair can vary from one or two up to seven in a 16-year period. The peregrine falcon nests in a scrape, normally on cliff edges. The female chooses a nest site, where she scrapes a shallow hollow in the loose soil, sand, gravel, or dead vegetation in which to lay eggs. No nest materials are added. Cliff nests are generally located under an overhang, on ledges with vegetation. South-facing sites are favoured. 
In some regions, as in parts of Australia and on the west coast of northern North America, large tree hollows are used for nesting. Before the demise of most European peregrines, a large population of peregrines in central and western Europe used the disused nests of other large birds. In remote, undisturbed areas such as the Arctic, steep slopes and even low rocks and mounds may be used as nest sites. In many parts of its range, peregrines now also nest regularly on tall buildings or bridges; these human-made structures used for breeding closely resemble the natural cliff ledges that the peregrine prefers for its nesting locations. The pair defends the chosen nest site against other peregrines, and often against ravens, herons, and gulls, and if ground-nesting, also such mammals as foxes, wolverines, felids, bears, wolves, and mountain lions. Both nests and (less frequently) adults are predated by larger-bodied raptorial birds like eagles, large owls, or gyrfalcons. The most serious predators of peregrine nests in North America and Europe are the great horned owl and the Eurasian eagle-owl. When reintroductions have been attempted for peregrines, the most serious impediments were these two species of owls routinely picking off nestlings, fledglings and adults by night. Peregrines defending their nests have managed to kill raptors as large as golden eagles and bald eagles (both of which they normally avoid as potential predators) that have come too close to the nest by ambushing them in a full stoop. In one instance, when a snowy owl killed a newly fledged peregrine, the larger owl was in turn killed by a stooping peregrine parent. The date of egg-laying varies according to locality, but is generally from February to March in the Northern Hemisphere, and from July to August in the Southern Hemisphere, although the Australian subspecies macropus may breed as late as November, and equatorial populations may nest anytime between June and December. If the eggs are lost early in the nesting season, the female usually lays another clutch, although this is extremely rare in the Arctic due to the short summer season. Generally three to four eggs, but sometimes as few as one or as many as five, are laid in the scrape. The eggs are white to buff with red or brown markings. They are incubated for 29 to 33 days, mainly by the female, with the male also helping with the incubation of the eggs during the day, but only the female incubating them at night. The average number of young found in nests is 2.5, and the average number that fledge is about 1.5, due to the occasional production of infertile eggs and various natural losses of nestlings. After hatching, the chicks (called "eyases") are covered with creamy-white down and have disproportionately large feet. The male (called the "tiercel") and the female (simply called the "falcon") both leave the nest to gather prey to feed the young. The hunting territory of the parents can extend to a radius of 19 to 24 km (12 to 15 mi) from the nest site. Chicks fledge 42 to 46 days after hatching, and remain dependent on their parents for up to two months. ## Relationship with humans ### Use in falconry The peregrine falcon is a highly admired falconry bird, and has been used in falconry for more than 3,000 years, beginning with nomads in central Asia. Its advantages in falconry include not only its athleticism and eagerness to hunt, but an equable disposition that leads to it being one of the easier falcons to train. 
The peregrine falcon has the additional advantage of a natural flight style of circling above the falconer ("waiting on") for game to be flushed, and then performing an effective and exciting high-speed diving stoop to take the quarry. The speed of the stoop not only allows the falcon to catch fast-flying birds, it also enhances the falcon's ability to execute maneuvers to catch highly agile prey, and allows the falcon to deliver a knockout blow with a fist-like clenched talon against game that may be much larger than itself. Additionally, the versatility of the species, with agility allowing capture of smaller birds and a strength and attacking style allowing capture of game much larger than themselves, combined with the wide size range of the many peregrine subspecies, means there is a subspecies suitable to almost any size and type of game bird. This size range, evolved to fit various environments and prey species, runs from the larger females of the largest subspecies to the smaller males of the smallest subspecies, approximately five to one (roughly 1,500 g to 300 g). The males of smaller and medium-sized subspecies, and the females of the smaller subspecies, excel in the taking of swift and agile small game birds such as dove, quail, and smaller ducks. The females of the larger subspecies are capable of taking large and powerful game birds such as the largest of duck species, pheasant, and grouse. Peregrine falcons handled by falconers are also occasionally used to scare away birds at airports to reduce the risk of bird-plane strikes, improving air-traffic safety. They were also used to intercept homing pigeons during World War II. Peregrine falcons have been successfully bred in captivity, both for falconry and for release into the wild. Until 2004, nearly all peregrines used for falconry in the US were captive-bred from the progeny of falcons taken before the US Endangered Species Act was enacted, and from the few infusions of wild genes available from Canada and special circumstances. Peregrine falcons were removed from the United States' endangered species list in 1999. The successful recovery program was aided by the effort and knowledge of falconers – in collaboration with The Peregrine Fund and state and federal agencies – through a technique called hacking. Finally, after years of close work with the US Fish and Wildlife Service, a limited take of wild peregrines was allowed in 2004, the first wild peregrines taken specifically for falconry in over 30 years. The development of captive breeding methods has led to peregrines being commercially available for falconry use, thus mostly eliminating the need to capture wild birds for support of falconry. The main reason for taking wild peregrines at this point is to maintain healthy genetic diversity in the breeding lines. Hybrids of peregrines and gyrfalcons are also available that can combine the best features of both species to create what many consider to be the ultimate falconry bird for the taking of larger game such as the sage-grouse. These hybrids combine the greater size, strength, and horizontal speed of the gyrfalcon with the natural propensity to stoop and the greater warm-weather tolerance of the peregrine. 
Pesticide biomagnification caused organochlorine to build up in the falcons' fat tissues, reducing the amount of calcium in their eggshells. With thinner shells, fewer falcon eggs survived until hatching. In addition, the PCB concentrations found in these falcons depend on the bird's age: high levels are found even in young birds only a few months old, with higher concentrations in more mature falcons and further increases in adult peregrine falcons. These pesticides also caused the falcons' prey to have thinner eggshells (one example being the black petrel). In several parts of the world, such as the eastern United States and Belgium, this species became extirpated (locally extinct) as a result. An alternate point of view is that populations in eastern North America had vanished due to hunting and egg collection. Following the ban of organochlorine pesticides, the reproductive success of peregrines in Scotland increased in terms of territory occupancy and breeding success, although spatial variation in recovery rates indicates that in some areas peregrines were also impacted by other factors such as persecution. ### Recovery efforts Peregrine falcon recovery teams breed the species in captivity. The chicks are usually fed through a chute or with a hand puppet mimicking a peregrine's head, so they do not see and imprint on the human trainers. Then, when they are old enough, the rearing box is opened, allowing the bird to train its wings. As the fledgling gets stronger, feeding is reduced, forcing the bird to learn to hunt. This procedure is called hacking back to the wild. To release a captive-bred falcon, the bird is placed in a special cage at the top of a tower or cliff ledge for several days, allowing it to acclimate itself to its future environment. Worldwide recovery efforts have been remarkably successful. The widespread restriction of DDT use eventually allowed released birds to breed successfully. The peregrine falcon was removed from the U.S. Endangered Species list on 25 August 1999. Some controversy has existed over the origins of captive breeding stock used by the Peregrine Fund in the recovery of peregrine falcons throughout the contiguous United States. Several peregrine subspecies were included in the breeding stock, including birds of Eurasian origin. Due to the extirpation of the eastern population of Falco peregrinus anatum, the near-extirpation of anatum in the Midwest and the limited gene pool within North American breeding stock, the inclusion of non-native subspecies was justified to optimize the genetic diversity found within the species as a whole. During the 1970s, peregrine falcons in Finland experienced a population bottleneck as a result of large declines associated with bio-accumulation of organochlorine pesticides. However, the genetic diversity of peregrines in Finland is similar to that of other populations, indicating that high dispersal rates have maintained the genetic diversity of this species. Since peregrine falcon eggs and chicks are still often targeted by illegal poachers, it is common practice not to publicize unprotected nest locations. ### Current status Populations of the peregrine falcon have bounced back in most parts of the world. In the United Kingdom, there has been a recovery of populations since the crash of the 1960s. This has been greatly assisted by conservation and protection work led by the Royal Society for the Protection of Birds. 
The RSPB has estimated that there are 1,402 breeding pairs in the UK. In Canada, where peregrines were identified as endangered in 1978 (in the Yukon territory of northern Canada that year, only a single breeding pair was identified), the Committee on the Status of Endangered Wildlife in Canada declared the species no longer at risk in December 2017. Peregrines now breed in many mountainous and coastal areas, especially in the west and north, and nest in some urban areas, capitalising on the urban feral pigeon populations for food. In Southampton, a nest prevented restoration of mobile telephony services for several months: Vodafone engineers despatched to repair a faulty transmitter mast discovered the nest in the mast, and were prevented by the Wildlife and Countryside Act – on pain of a possible prison sentence – from proceeding with repairs until the chicks fledged. In many parts of the world peregrine falcons have adapted to urban habitats, nesting on cathedrals, skyscraper window ledges, tower blocks, and the towers of suspension bridges. Many of these nesting birds are encouraged, sometimes gathering media attention, and are often monitored by cameras. ## Cultural significance Due to its striking hunting technique, the peregrine has often been associated with aggression and martial prowess. The Ancient Egyptian solar deity Ra was often represented as a man with the head of a peregrine falcon adorned with the solar disk, although most Egyptologists agree that the bird depicted is most likely a lanner falcon. Native Americans of the Mississippian culture (c. 800–1500) used the peregrine, along with several other birds of prey, in imagery as a symbol of "aerial (celestial) power" and buried men of high status in costumes associated with the ferocity of raptorial birds. In the late Middle Ages, the Western European nobility that used peregrines for hunting considered the bird associated with princes in formal hierarchies of birds of prey, just below the gyrfalcon, which was associated with kings. It was considered "a royal bird, more armed by its courage than its claws". Terminology used by peregrine breeders also drew on the Old French term gentil, "of noble birth; aristocratic", particularly with the peregrine. The peregrine falcon is the national animal of the United Arab Emirates. Since 1927, the peregrine falcon has been the official mascot of Bowling Green State University in Bowling Green, Ohio. The 2007 U.S. Idaho state quarter features a peregrine falcon. The peregrine falcon has been designated the official city bird of Chicago. The Peregrine, by J. A. Baker, is widely regarded as one of the best nature books in English written in the twentieth century. Admirers of the book include Robert Macfarlane; Mark Cocker, who regards the book as "one of the most outstanding books on nature in the twentieth century"; and Werner Herzog, who called it "the one book I would ask you to read if you want to make films", and said elsewhere "it has prose of the calibre that we have not seen since Joseph Conrad". In the book, Baker recounts, in diary form, his detailed observations of peregrines (and their interaction with other birds) near his home in Chelmsford, Essex, over a single winter from October to April. An episode of the hour-long TV series Starman in 1986 titled "Peregrine" was about an injured peregrine falcon and the endangered species program. It was filmed with the assistance of the University of California's peregrine falcon project in Santa Cruz. 
## See also - List of birds by flight speed - Perilanner, a hybrid of the peregrine falcon and the lanner falcon (Falco biarmicus) - Perlin, a hybrid of the peregrine falcon and the merlin (Falco columbarius) ## Explanatory notes
10,639,793
Dream Days at the Hotel Existence
1,096,426,378
null
[ "2007 albums", "ARIA Award-winning albums", "Albums produced by Rob Schnapf", "Powderfinger albums" ]
Dream Days at the Hotel Existence is the sixth studio album by Australian rock band Powderfinger, released by Universal Music on 2 June 2007 in Australia, 19 November 2007 in the United Kingdom, and 11 November 2008 in the United States on the Dew Process label. It was released in Australia with a limited edition bonus DVD, titled Powderfinger's First XI, featuring eleven music videos spanning the band's career, from the first single, "Tail" to "Bless My Soul", the band's latest single before the release of the album. A collector's edition, including a CD and DVD, was released on 18 April 2008. Powderfinger reunited in late 2006, after a three-year hiatus, to write songs for Dream Days at the Hotel Existence, which was recorded in Los Angeles, California, in early 2007 by producer Rob Schnapf. The first single from the album, "Lost and Running" was released on 12 May 2007, and reached number five on the ARIA singles chart. Three further singles were released; "I Don't Remember", "Nobody Sees", and "Who Really Cares (Featuring the Sound of Insanity)", though they failed to equal "Lost and Running"'s chart performance. The album received critical acclaim, with many reviewers commenting that the album was "consistent" and "distinctly Australian". The album encountered controversy relating to the song "Black Tears" with claims that it may have influenced the Palm Island death in custody trial. Powderfinger released an abridged version of the song as a result of these accusations. ## Background Bernard Fanning stated in television interviews in 2006 that Powderfinger was working on a new album to be released the following year. On Powderfinger's website, guitarist Ian Haug said the upcoming album was an "exciting new direction" for the band's music. After a month of recording, on 2 March 2007, Fanning made an announcement on Australian radio station Triple J that tracking was complete, mixing the album was to follow, and the approximate release date was June. Fanning also stated that several of the tracks on the album feature session pianist Benmont Tench. The title of the album was drawn from the book Brooklyn Follies by Paul Auster, which Fanning had read during the recording. He stated the concept of the title related to escapism, and that he felt it an appropriate sentiment to attach to the music of the album. ## Recording and production Following their hiatus, which commenced after the release of Fingerprints: The Best of Powderfinger, 1994-2000, the band reconvened in late 2006 to write songs for Dream Days at the Hotel Existence. The band sought a new sound on the album, causing the recording process to be different from prior albums; Melbourne's Sing Sing studios were not used and Nick DiDia was no longer the producer. Dream Days at the Hotel Existence was recorded at Sunset Sound Studio, Los Angeles, California, in early 2007 by producer Rob Schnapf, best known for his work with Beck and The Vines. Powderfinger had already written most of the album before departing to the United States. In particular, Powderfinger wrote songs in parts and brought them together; some songs were written in pairs or trios, while others were written in parts by different people, and then combined. According to the band, this brought a "diverse" and "fresh" approach to songwriting. The band used different methods in putting the album together as "it comes back to the sound the five of us can make together". 
Powderfinger guitarist Darren Middleton commented that as a rule they preferred not to put together an album that was just "plain". As the style of writing differed, the band identified the need for piano performances in many of their songs, enlisting veteran pianist Benmont Tench to play parts throughout. ## Artwork Dream Days at the Hotel Existence's cover art was designed by Aaron Hayward & David Homer of Debaser, a New South Wales-based design organisation. The recipient of the 2007 ARIA Award for "Best Cover Art", the album art features a photograph of a road leading into the Australian outback horizon. In the centre, placed in the sky in relation to the background, there is a window with a crimson curtain. Within this window is a hotel room, as per the name of the album, in which a headless man in a suit is seated at the end of the bed, watching the television. Above the window is the album title, and at the top of the cover is the band's name in a more stylised typeface than on previous album covers. Though the general design of the cover evokes a 1930s hotel, the typeface contrasts with it, being rather futuristic and science-fiction styled. This is the second futuristic style that the band has used for their name, the first appearing on Vulture Street. ## Album and single releases The album was released in Australia on 2 June 2007, and in the United Kingdom on 19 November of the same year. A "limited edition" version of the album included a DVD featuring a collection of Powderfinger music videos, titled Powderfinger's First XI. The music video for "Lost and Running" was also included, and was dubbed The Twelfth Man. A collector's edition, including a CD and DVD, was released on 18 April 2008. Several songs from the album were launched to Perth fans as free music downloads via PerthNow, a Perth-based newspaper. Fans were required to obtain a codeword from the newspaper, then submit it online to download the tracks. The first single from Dream Days at the Hotel Existence was "Lost and Running", and the video clip, which was directed by Damon Escott and Stephen Lance of Head Pictures, began showing in Australia on 21 April 2007. The single made its Australian radio debut on 16 April 2007, but had been available for several days beforehand on Powderfinger's MySpace web page. An exclusive early release of the song was played by Triple J on 13 April 2007. "Lost and Running" reached number five on the ARIA singles chart. The second single from the album was "I Don't Remember". The film clip for the song was created by Fifty Fifty Films, who had previously created music videos for the group, including "Passenger" and "Like a Dog". The song was aired on radio on 9 July 2007, the music video was released in July, and the CD single was released for sale on 4 August 2007. The video was shot at Samford State School in Powderfinger's home city of Brisbane and features many of the school's students. On 16 November 2007, it was announced that the third single from Dream Days at the Hotel Existence would be the album's sixth track, "Nobody Sees". A video was released on the same day as the announcement, and the single was scheduled for release as a digital single on 1 December 2007. In February 2008, Powderfinger announced the release of the album's fourth single, "Who Really Cares (Featuring the Sound of Insanity)". 
## Critical reception Sydney Morning Herald commentator Bernard Zuel described Dream Days at the Hotel Existence as Powderfinger's first dull album, noting that on numerous songs "It promises to become exciting but never quite gets there." He complained that most of the songs were uneventful or uninspiring, and that they do not "lift you as a listener." PerthNow's Jay Hanna disagreed, claiming the album was "rippling with emotions". He said the album contained some "incredible moments", praising "Head Up in the Clouds", and calling "Nobody Sees" "Powderfinger at their devastating best", while giving the album four stars. Cameron Adams of Herald Sun HiT stated that the album contained no new directions for the band, and was highly consistent. He noted that the album contained less of the "rough edges and attitude" of predecessor Vulture Street, likening it more to Odyssey Number Five. Sputnikmusic's James Bishop agreed, claiming the band should be concerned by the "lack of experimentation or ambition" on the album. He also stated that the album was consistent, noting that "there actually isn't a bad song present". The review, which gave the album three and a half stars, commented that it seemed the band were trying to move towards the bluegrass genre, and "edging their way into the adult-contemporary section" of a music store, something they had not shown on their previous works. AllMusic's Clayton Bolger drew comparisons to Internationalist in his review, which gave the album three and a half stars. He said the album contained "all the trademarks of classic Powderfinger", praising Fanning's vocals, Middleton and Haug's "twin-guitar attack", Collins' basslines and Coghill's "powerhouse drum work". While praising "I Don't Remember" as an excellent anthem, and "Surviving" for containing "a sonic blast of rock", he was critical of "Lost and Running", which he said felt "tired and sluggish", while "Ballad of a Dead Man" was described as "tedious". ## Track listing All songs were written and performed by Powderfinger, with additional performances by pianist Benmont Tench. 1. "Head Up in the Clouds" – 3:47 2. "I Don't Remember" – 3:41 3. "Lost and Running" – 3:42 4. "Wishing on the Same Moon" – 4:32 5. "Who Really Cares (Featuring the Sound of Insanity)" – 5:10 6. "Nobody Sees" – 4:14 7. "Surviving" – 3:45 8. "Long Way to Go" – 3:46 9. "Black Tears" – 2:30 10. "Ballad of a Dead Man" – 5:29 11. "Drifting Further Away" – 3:40 Bonus tracks - "Down by the Dam" – 4:29 [A] - "Glory Box" – 4:32 [B] ### Limited edition bonus DVD Released under the titles Powderfinger's First XI and The Twelfth Man, the bonus DVD features eleven music videos by Powderfinger spanning their entire recording career, and also includes the launch single for Dream Days at the Hotel Existence, "Lost and Running". Powderfinger's First XI 1. "Tail" – 4:24 2. "Living Type" – 3:25 3. "Pick You Up" – 3:30 4. "Passenger" – 4:39 5. "Good Day Ray" [C] – 1:50 6. "Don't Wanna Be Left Out" – 2:18 7. "My Kind of Scene" – 4:31 8. "Like a Dog" – 4:41 9. "On My Mind" [D] – 3:40 10. "Sunsets" (Acoustic version) – 3:57 11. "Bless My Soul" – 4:06 - The Twelfth Man: "Lost and Running" – 3:52 ## Commercial performance The album debuted in the ARIA Album Charts on 11 June 2007 at number one, becoming Powderfinger's fourth album to peak at the top spot. The album was certified platinum in its first week of sales, and its double platinum certification was announced later. 
A week after its release, the album achieved the highest first-week sales figures of any new release in 2007, with total sales of 40,847, thus making it the fastest-selling album of the year in Australia. In its first week of release, Dream Days at the Hotel Existence broke the Australian digital album sales record, with over 3,000 digital sales. ## Charts ### Weekly charts ### Year-end charts ## Certifications ## "Black Tears" controversy On 2 May 2007, "Black Tears", the ninth song on Dream Days at the Hotel Existence, sparked controversy after claims that its lyrics could invoke prejudice in the Palm Island death in custody trial. Lawyers for the accused, Senior Sergeant Chris Hurley, lodged a complaint with the Queensland Attorney-General relating to the lyrics of the song. According to Hurley's legal team, the initial lyrics dealt with the "death of a Palm Island man, Mulrunji Doomadgee", in stating "an island watch-house bed, a black man's lying dead". Bernard Fanning made a media statement in response to the complaint, stating that the band had never intended for the song to contain "even the slightest suggestion of any prejudice". He also said the band would still release the album on the planned date, but with an alternate version of "Black Tears". Fanning later said he was not angry about having to change his lyrics, but he lamented the lack of Australian musicians willing to challenge the status quo. ## Touring Tickets for a nationwide tour of launch shows for Dream Days at the Hotel Existence went on sale on 10 May 2007 on the band's website, with tickets to the general public being released a day later. Powderfinger also toured in New South Wales and northern Victoria. Australian pianist Lachlan Doley was enlisted to play piano and keyboard parts at these live performances. His performances were welcomed by critics and audiences, with AdelaideNow commenting that "local ring-in Lachlan Doley added shimmering keys to the band's richly textured sound". Powderfinger and Doley performed the single "Lost and Running" on popular Australian variety show Rove on 17 June 2007. The group performed at Splendour in the Grass on 4 August 2007, and then followed it up by performing at Triple J's AWOL Concert in Karratha, Western Australia, on 18 August 2007. Powderfinger announced the Across the Great Divide tour on 12 June 2007. The band were accompanied on the nationwide concert tour by Australian rock group Silverchair. The tour took in not only the capital cities, but fourteen Australian and New Zealand regional centres as well. According to Fanning, "the idea is to show both bands are behind the idea of reconciliation [of Indigenous Australians]." ## Personnel ### Powderfinger - Bernard Fanning – guitar and vocals - John Collins – bass guitar - Ian Haug – guitars - Darren Middleton – guitars and backing vocals - Jon Coghill – drums - Cody Anderson – backup drummer ### Additional musicians - Benmont Tench – piano and keyboards ### Production - Rob Schnapf – producer - Doug Boehm – engineer ## See also - Powderfinger albums - Full discography
53,017,272
Roosevelt dime
1,070,756,646
US ten-cent coin (1946 to present)
[ "Cultural depictions of Franklin D. Roosevelt", "Currencies introduced in 1946", "Monuments and memorials to Franklin D. Roosevelt in the United States", "Sculptures of presidents of the United States", "Ten-cent coins of the United States" ]
The Roosevelt dime is the current dime, or ten-cent piece, of the United States. Struck by the United States Mint continuously since 1946, it displays President Franklin D. Roosevelt on the obverse and was authorized soon after his death in 1945. Roosevelt had been stricken with polio, and was one of the moving forces of the March of Dimes. The ten-cent coin could be changed by the Mint without the need for congressional action, and officials moved quickly to replace the Mercury dime. Chief Engraver John R. Sinnock prepared models, but faced repeated criticism from the Commission of Fine Arts. He modified his design in response, and the coin went into circulation in January 1946. Since its introduction, the Roosevelt dime has been struck continuously in large numbers. The Mint transitioned from striking the coin in silver to base metal in 1965, and the design remains essentially unaltered from when Sinnock created it. Without rare dates or silver content, the dime is less widely sought by coin collectors than other modern U.S. coins. ## Inception and preparation President Franklin D. Roosevelt died on April 12, 1945, after leading the United States through much of the Great Depression and World War II. Roosevelt had suffered from polio since 1921 and had helped found and strongly supported the March of Dimes to fight that crippling disease, so the ten-cent piece was an obvious way of honoring a president popular for his war leadership. On May 3, Louisiana Representative James Hobson Morrison introduced a bill for a Roosevelt dime. On May 17, Treasury Secretary Henry Morgenthau Jr. announced that the Mercury dime (also known as the Winged Liberty dime) would be replaced by a new coin depicting Roosevelt, to go into circulation about the end of the year. Approximately 90 percent of the letters received by Stuart Mosher, editor of The Numismatist (the journal of the American Numismatic Association), were supportive of the change, but he himself was not, arguing that the Mercury design was beautiful and that the limited space on the dime would not do justice to Roosevelt; he advocated a commemorative silver dollar instead. Others objected that despite his merits, Roosevelt had not earned a place alongside Washington, Jefferson and Lincoln, the only presidents honored on the circulating coinage to that point. As the Mercury design, first coined in 1916, had been struck for at least 25 years, it could be changed under the law by the Bureau of the Mint. No congressional action was required, though the committees of each house with jurisdiction over the coinage were informed. Creating the new design was the responsibility of Chief Engraver John R. Sinnock, who had been in his position since 1925. Much of the work in preparing the new coin was done by Sinnock's assistant, later chief engraver Gilroy Roberts. In early October 1945, Sinnock submitted plaster models to Assistant Director of the Mint F. Leland Howard (then acting as director), who transmitted them to the Commission of Fine Arts. This commission reviews coin designs because it was tasked by a 1921 executive order by President Warren G. Harding with rendering advisory opinions on public artworks. The models initially submitted by Sinnock showed a bust of Roosevelt on the obverse and, on the reverse, a hand grasping a torch, and also clutching sprigs of olive and oak. Sinnock had prepared several other sketches for the reverse, including one flanking the torch with scrolls inscribed with the Four Freedoms. 
Other drafts showed representations of the goddess Liberty, and one commemorated the United Nations Conference of 1945, displaying the War Memorial Opera House where it took place. Numismatist David Lange described most of the alternative designs as "weak". The models were sent on October 12 by Howard to Gilmore Clarke, chairman of the commission, who consulted with its members and responded on the 22nd, rejecting them, stating that "the head of the late President Roosevelt, as portrayed by the models, is not good. It needs more dignity." Sinnock had submitted an alternative reverse design similar to the eventual coin, with the hand omitted and the sprigs placed on either side of the torch; Clarke preferred this. Sinnock attended a conference at the home of Lee Lawrie, sculptor member of the commission, with a view to resolving the differences, and thereafter submitted a new model for the obverse, addressing the concerns about Roosevelt's head. The Mint Director, Nellie Tayloe Ross, sent photographs to the commission, which rejected it and proposed a competition among five artists, including Adolph A. Weinman (designer of the Mercury dime and Walking Liberty half dollar) and James Earle Fraser (who had sculpted the Buffalo nickel). Ross declined, as the Mint was under great pressure to have the new coins ready for the March of Dimes campaign in January 1946. The new Treasury Secretary, Fred Vinson, was appealed to, but he also disliked the models and rejected them near the end of December. Sinnock swapped the positions of the date and the word LIBERTY, allowing an enlargement of the head. He made other changes as well; according to numismatic author Don Taxay, "Roosevelt had never looked better!" Lawrie and Vinson approved the models. On January 8, Ross telephoned the commission, informing them of this. With Sinnock ill (he died in 1947) and the March of Dimes campaign under way, Ross did not wait for a full meeting of the commission, but authorized the start of production. This caused some ill-feeling between the Mint and the commission, but she believed that she had fulfilled her obligations under the executive order. ## Design The obverse of the dime depicts President Roosevelt, with the inscriptions LIBERTY and IN GOD WE TRUST. Sinnock's initials, JS, are found by the cutoff of the bust, to the left of the date. The reverse shows a torch in the center, representing liberty, flanked by an olive sprig representing peace, and one of oak symbolizing strength and independence. The inscription E PLURIBUS UNUM (out of many, one) stretches across the field. The name of the country and the value of the coin are the legends that surround the reverse design, which is symbolic of the victorious end of World War II. Numismatist Mark Benvenuto suggested that the image of Roosevelt on the coin is more natural than other such presidential portraits, resembling that on an art medal. Walter Breen, in his comprehensive volume on U.S. coins, argued that "the new design was ... no improvement at all on Weinman's [Mercury dime] except for eliminating the fasces [on its reverse] and making the vegetation more recognizably an olive branch for peace." Art historian Cornelius Vermeule called the Roosevelt dime "a clean, satisfying and modestly stylish, no-nonsense coin that in total view comes forth with notes of grandeur". 
Some, at the time of design and since, have seen similarities between the dime and a plaque depicting Roosevelt sculpted by African-American sculptor Selma Burke, unveiled in September 1945, which is in the Recorder of Deeds Building in Washington; Burke was among those alleging her work was used by Sinnock to create the dime. She advocated for this position until her death in 1994, and persuaded a number of numismatists and politicians, including Roosevelt's son James. Numismatists who support her point to the fact that Sinnock took his depiction of the Liberty Bell, which appears on the 1926 Sesquicentennial half dollar and Franklin half dollar (1948–1963), from another designer without giving credit. However, Robert R. Van Ryzin, in his book on mysteries about U.S. coins, pointed out that Sinnock had sketched Roosevelt from life in 1933 for his first presidential medal (designed by Sinnock), and accounts from the time of issuance of the dime state that Sinnock used those, as well as photographs of the president, to prepare the dime. A 1956 obituary in The New York Times credits Marcel Sternberger with taking the photograph that Sinnock adapted for the dime. According to Van Ryzin, the passage of time has made it impossible to verify or invalidate Burke's assertion. ## Production The Roosevelt dime was first struck on January 19, 1946, at the Philadelphia Mint. It was released into circulation on January 30, which would have been President Roosevelt's 64th birthday. The planned release date had been February 5; it was moved up to coincide with the anniversary. With its debut, Sinnock became the first chief engraver to be credited with the design of a new circulating U.S. coin since those designed by Charles E. Barber were first issued in 1892. The release of the coins was a newsworthy event, and demand for the new design remained strong, although many of Roosevelt's opponents, particularly Republicans, were outraged. There were reports of the new dime being rejected in vending machines, but no changes to the coin were made. The dime's design has not changed much in its over seventy years of production, the most significant alterations being minor changes to Roosevelt's hair and the shifting of the mint mark from reverse to obverse in the 1960s. At the time the dimes were released, relations with the USSR were deteriorating, and Sinnock's initials, JS, were deemed by some to refer to Soviet dictator Joseph Stalin, placed there by a communist sympathizer. Once these rumors reached Congress, the Mint sent out press releases debunking this myth. Despite the Mint's denial, there were rumors into the 1950s that there had been a secret deal at the Yalta Conference to honor Stalin on a U.S. coin. The controversy was given fresh life in 1948 with the posthumous release of Sinnock's Franklin half dollar, which bears his initials JRS. Although usually more coins were struck at Philadelphia than at the other mints during the years the coin was struck in silver, only 12,450,181 were struck there in 1955, fewer than at the Denver Mint or at San Francisco. This was due to a sagging economy and a lackluster demand for coins that caused the Mint to announce in January that the San Francisco Mint would be shuttered at the end of the year. The 1955 dimes from the three facilities are the lowest mintages by date and mint mark among circulating coins in the series, but are not rare, as collectors stored them away in rolls of 50. 
With the Coinage Act of 1965, the Mint transitioned to striking clad coins, made from a sandwich of copper-nickel around a core of pure copper. There are no mint marks on coins dated from 1965 to 1967, as the Mint made efforts to discourage the hoarding that it blamed for the coin shortages that had preceded the 1965 act. The Mint modified the master hub only slightly when it began clad coinage, but starting in 1981, made minor changes that lowered the coin's relief considerably, leading to a flatter look to Roosevelt's profile. This was done so that coinage dies would last longer. Mint marks resumed in 1968 at Denver and for proof coins at San Francisco. Although the California facility beginning in 1965 occasionally struck dimes for commerce, those bore no mint marks and are indistinguishable from ones minted at Philadelphia. The only dimes to bear the "S" mint mark for San Francisco since 1968 have been proof coins, resuming a series coined from 1946 to 1964 without mint mark at Philadelphia. Starting in 1992, silver dimes with the pre-1965 composition were struck at San Francisco for inclusion in annual proof sets featuring silver coins. Beginning in 2019, these silver dimes are struck in .999 silver, rather than .900, which the Mint no longer uses. In 1980, the Philadelphia Mint began using a mint mark "P" on dimes. Dimes had been struck intermittently during the 1970s and 1980s at the West Point Mint, in Roosevelt's home state of New York, to meet demand, but none bore a "W" mint mark. This changed in 1996, when dimes were struck there for the 50th anniversary of the Roosevelt design. Just under a million and a half clad 1996-W dimes were minted; these were not released to circulation, but were included in the year's mint set for collectors. In 2015, silver dimes were struck at West Point for inclusion in a special set of coins for the March of Dimes, including a dime struck at Philadelphia and a silver dollar depicting Roosevelt and polio vaccine developer Dr. Jonas Salk. Mintages generally remained high, with a billion coins each struck at Philadelphia and at Denver in many of the clad years. In 2003, Indiana Representative Mark Souder proposed that former president Ronald Reagan, who was then dying of Alzheimer's disease, replace Roosevelt on the dime once he died, stating that Reagan was as iconic to conservatives as Roosevelt was to liberals. Reagan's wife Nancy expressed her opposition, stating that she was certain the former president would not have favored it either. After Ronald Reagan died in 2004, there was support for a design change, but Souder declined to pursue his proposal. The Circulating Collectible Coin Redesign Act of 2020 was signed by President Donald Trump on January 13, 2021. It provides for, among other things, special one-year designs for the circulating coinage in 2026, including the dime, for the United States Semiquincentennial (250th anniversary), with one of the designs to depict women. ## Collecting Due to the large numbers struck, few regular-issue Roosevelt dimes command a premium, and the series has received relatively little attention from collectors. Though silver issues remain legal tender and can be removed from circulation and collected via coin roll hunting, clad coins form the majority of the dimes in circulation. Prominent among these are the dimes struck at Philadelphia in 1982, erroneously minted and released without the mint mark "P"; these may sell for $50 to $75. 
As no official mint sets were issued in 1982 or 1983, even ordinary dimes of those years from Philadelphia or Denver in pristine condition command a significant premium (worn ones do not). Far more expensive are the dimes erroneously issued in proof condition in 1970, 1975 and 1983 that lack the "S" mint mark. One of only two known from 1975 sold at auction in 2011 for $349,600.
2,093,520
Resident Evil 5
1,166,511,919
2009 video game
[ "2000s horror video games", "2009 video games", "Action-adventure games", "Bioterrorism in fiction", "Capcom games", "Cooperative video games", "Fiction about parasites", "Games for Windows", "Japan Game Award winners", "Mercenary Technology games", "Multiplayer and single-player video games", "Nintendo Switch games", "PlayStation 3 games", "PlayStation 4 games", "PlayStation Move-compatible games", "Race-related controversies in video games", "Resident Evil games", "Split-screen multiplayer games", "Third-person shooters", "Video game sequels", "Video games developed in Japan", "Video games featuring black protagonists", "Video games featuring female protagonists", "Video games scored by Akihiko Narita", "Video games scored by Wataru Hokoyama", "Video games set in 2009", "Video games set in Africa", "Video games set in Europe", "Video games set in a fictional country", "Video games using Havok", "Windows games", "Xbox 360 games", "Xbox One games" ]
Resident Evil 5 is a 2009 third-person shooter video game developed and published by Capcom. It is a major installment in the Resident Evil series, and was announced in 2005—the same year its predecessor Resident Evil 4 was released. Resident Evil 5 was released for the PlayStation 3 and Xbox 360 consoles in March 2009 and for Windows in September 2009. It was re-released for PlayStation 4 and Xbox One in June 2016. The plot involves an investigation of a terrorist threat by Bioterrorism Security Assessment Alliance agents Chris Redfield and Sheva Alomar in Kijuju, a fictional region of West Africa. Chris learns that he must confront his past in the form of an old enemy, Albert Wesker, and his former partner, Jill Valentine. The gameplay of Resident Evil 5 is similar to that of the previous installment, though it is the first in the series designed for two-player cooperative gameplay. It has also been considered the first game in the main series to depart from the survival horror genre, with critics saying it bore more resemblance to an action game. Motion capture was used for the cutscenes, and it was the first video game to use a virtual camera system. Several staff members from the original Resident Evil worked on Resident Evil 5. The Windows version was developed by Mercenary Technology. Resident Evil 5 received a positive reception, despite some criticism for its control scheme. The game received some complaints of racism, though an investigation by the British Board of Film Classification found the complaints were unsubstantiated. As of December 2022, when including the original, special and remastered versions, the game had sold 13.5 million units. It is the best-selling game of the Resident Evil franchise, and the original version remained the best-selling individual Capcom release until March 2018, when it was outsold by Monster Hunter: World. A sequel, Resident Evil 6, was released in 2012. ## Plot In 2009, five years after the events of Resident Evil 4, Chris Redfield, now an agent of the Bioterrorism Security Assessment Alliance (BSAA), is dispatched to Kijuju in West Africa. He and his new partner Sheva Alomar are tasked with apprehending Ricardo Irving before he can sell a bio-organic weapon (BOW) on the black market. When they arrive, they discover that the locals have been infected by the parasites Las Plagas (those infected are called "Majini") and the BSAA Alpha Team have been killed. Chris and Sheva are rescued by BSAA's Delta Team, which includes Sheva's mentor Captain Josh Stone. In Stone's data Chris sees a photograph of Jill Valentine, his old partner, who has been presumed dead after a confrontation with Albert Wesker. Chris, Sheva and Delta Team close in on Irving, but he escapes with the aid of a hooded figure. Irving leaves behind documents that lead Chris and Sheva to marshy oilfields, where Irving's deal is to occur, but they discover that the documents are a diversion. When Chris and Sheva try to regroup with Delta Team, they find the team slaughtered by a BOW; Sheva cannot find Stone among the dead. Determined to learn if Jill is still alive, Chris does not report to headquarters. Continuing through the marsh, they find Stone and track down Irving's boat with his help. Irving injects himself with a variant of the Las Plagas parasite and mutates into a huge octopus-like beast. Chris and Sheva defeat him, and his dying words lead them to a nearby cave. 
The cave is the source of a flower used to create viruses previously used by the Umbrella Corporation, as well as a new strain named Uroboros. Chris and Sheva find evidence that Tricell, the company funding the BSAA, took over a former Umbrella underground laboratory and continued Umbrella's research. In the facility, they discover thousands of capsules holding human test subjects. Chris finds Jill's capsule, but it is empty. When they leave, they discover that Tricell CEO Excella Gionne has been plotting with Wesker to launch missiles with the Uroboros virus across the globe; it is eventually revealed that Wesker hopes to take a chosen few from the chaos of infection and rule them, creating a new breed of humanity. Chris and Sheva pursue Gionne but are stopped by Wesker and the hooded figure, who is revealed to be a brainwashed Jill. Gionne and Wesker escape to a Tricell oil tanker; Chris and Sheva fight Jill, subduing her and removing the mind-control device before she urges Chris to follow Wesker. Chris and Sheva board the tanker and encounter Gionne, who escapes after dropping a case of syringes; Sheva keeps several. When Chris and Sheva reach the main deck, Wesker announces over the ship's intercom that he has betrayed Gionne and infected her with Uroboros. She mutates into a giant monster, which Chris and Sheva defeat. Jill radios in, telling Chris and Sheva that Wesker must take precise, regular doses of a serum to maintain his strength and speed; a larger or smaller dose would poison him. Sheva realizes that Gionne's syringes are doses of the drug. Chris and Sheva follow Wesker to a bomber loaded with missiles containing the Uroboros virus, injecting him with the syringes Gionne dropped. Wesker tries to escape on the bomber; Chris and Sheva disable it, making him crash-land in a volcano. Furious, Wesker exposes himself to Uroboros and chases Chris and Sheva through the volcano. They fight him, and the weakened Wesker falls into the lava before Chris and Sheva are rescued by a helicopter, which is piloted by Jill and Stone. As a dying Wesker attempts to drag the helicopter into the volcano, Chris and Sheva fire rocket-propelled grenades at Wesker, killing him. In the game's final cutscene, Chris wonders if the world is worth fighting for. Looking at Sheva and Jill, he decides to live in a world without fear. ## Gameplay Resident Evil 5 is a third-person shooter with an over-the-shoulder perspective. Players can use several weapons including handguns, shotguns, automatic rifles, sniper rifles, and grenade launchers, as well as melee attacks. Players can make quick 180-degree turns to evade enemies. The game involves boss battles, many of which contain quick time events. As in its predecessor Resident Evil 4, players can upgrade weapons with money and treasure collected in-game and heal themselves with herbs, but cannot run and shoot at the same time. New features include infected enemies with guns and grenades, the ability to upgrade weapons at any time from the inventory screen without having to find a merchant, and the equipping of weapons and items in real-time during gameplay. Each player can store nine items. Unlike the previous games, the item size is irrelevant; a herb or a grenade launcher each occupy one space, and four items may be assigned to the D-pad. The game features puzzles, though fewer than previous titles. Resident Evil 5 is the first game in the Resident Evil series designed for two-player cooperative gameplay.
The player controls Chris, a former member of the fictional Special Tactics and Rescue Service (STARS) and member of the BSAA, and a second player can control Sheva, who is introduced in this game. If a person plays alone, Sheva is controlled by the game's artificial intelligence (AI). When the game has been completed once, there is an option to make Sheva the primary character. Two-player mode is available online or split screen with a second player using the same console. A second player joining a split screen game in progress will make the game reload the last checkpoint (the point at which the game was last saved); the second player joining an online game will have to wait until the first player reaches the next checkpoint, or restarts the previous one, to play. In split-screen mode, one player's viewpoint is presented in the top half of the screen, and the other in the bottom half, but each viewpoint is presented in widescreen format, rather than using the full width of the screen, resulting in unused space to the left and right of the two windows. If one player has critical health, only their partner can resuscitate them, and they will die if their partner cannot reach them. At certain points, players are deliberately separated. Players can trade items during gameplay, although weapons cannot be traded with online players. The game's storyline is linear, and interaction with other characters is mostly limited to cutscenes. A version of the Mercenaries minigame, which debuted in Resident Evil 3: Nemesis, is included in Resident Evil 5. This minigame places the player in an enclosed environment with a time limit. Customized weapons cannot be used and players must search for weapons, ammunition, and time bonuses while fighting a barrage of enemies, to score as many points as possible within the time limit. The minigame multiplayer mode was initially offline only; a release-day patch needed to be downloaded to access the online multiplayer modes. Mercenaries is unlocked when the game's story mode has been completed. ## Development Resident Evil 5 was developed by Capcom and produced by Jun Takeuchi, who directed Onimusha: Warlords and produced Lost Planet: Extreme Condition. Keiji Inafune, promotional producer for Resident Evil 2 and executive producer of the PlayStation 2 version of Resident Evil 4, supervised the project. Production began in 2005 and at its peak, over 100 people were working on the project. In February 2007, some members of Capcom's Clover Studio began working on Resident Evil 5 while others were working on Resident Evil: The Umbrella Chronicles, which debuted for the Wii. Yasuhiro Anpo, who worked as a programmer on the original Resident Evil, directed Resident Evil 5. He was one of several staff members who worked on the original game to be involved in Resident Evil 5's development. The game's scenario was written by Haruo Murata and Yoshiaki Hirabayashi, based on a story idea by concept director Kenichi Ueda. Takeuchi announced that the game would retain the gameplay model introduced in Resident Evil 4, with "thematic tastes" from both Resident Evil 4 and the original Resident Evil. While previous Resident Evil games are mainly set at night, the events of Resident Evil 5 occur almost entirely during the day. The decision for this was a combination of the game being set in Africa and advances in hardware improvements which allowed increasingly detailed graphics. 
On the subject of changes to Jill and Chris's appearance, production director Yasuhiro Anpo explained that designers tried "to preserve their image and imagined how they would have changed over the passage of time". Their new designs retained the character's signature colors; green for Chris and blue for Jill. Sheva was redesigned several times during production, though all versions tried to emphasize a combination of "feminine attraction and the strength of a fighting woman". The Majini were designed to be more violent than the "Ganado" enemies in Resident Evil 4. The decision for cooperative gameplay was made part-way through development, for a new experience in a Resident Evil game. Despite initial concern that a second player would dampen the game's tension and horror, it was later realized that this could actually increase such factors where one player had to be rescued. The decision to retain wide-screen proportions in two-player mode was made to avoid having the first player's screen directly on top of the second, which might be distracting, and the restriction on simultaneously moving and shooting was retained to increase player tension by not allowing them to maneuver freely. Takeuchi cited the film Black Hawk Down as an influence on the setting of Resident Evil 5 and his experience working on Lost Planet: Extreme Condition as an influence on its development. When questioned as to why the game was not being released on the Wii, which was the most popular gaming console at that time, Takeuchi responded that although that may have been a good decision "from a business perspective", the Wii was not the best choice in terms of power and visual quality, concluding that he was happy with the console choices they had made. Resident Evil 5 runs on version 1.4 of Capcom's MT Framework engine and scenes were recorded by motion capture. It was the first video game to use a virtual camera system, which allowed the developers to see character movements in real time as the motion-capture actors recorded. Actors Reuben Langdon, Karen Dyer and Ken Lally portrayed Chris Redfield, Sheva Alomar and Albert Wesker respectively. Dyer also voiced Sheva, while Chris's voice was performed by Roger Craig Smith. Dyer's background training in circus skills helped her win the role of Sheva, as Capcom were searching for someone who could handle the physical skills her motion capture required. She performed her own stunts, and worked in production on the game for over a year, sometimes working 14 hours a day. All of the human character motions were based on motion capture, while the non-human characters in the game were animated by hand. Kota Suzuki was the game's principal composer and additional music was contributed by Hideki Okugawa, Akihiko Narita and Seiko Kobuchi. The electronic score includes 15 minutes of orchestral music, recorded at the Newman Scoring Stage of 20th Century Fox Studios in Los Angeles with the 103-piece Hollywood Studio Symphony. Other orchestral music and arrangements were by Wataru Hokoyama, who conducted the orchestra. Capcom recorded in Los Angeles because they wanted a Hollywood-style soundtrack to increase the game's cinematic value and global interest. Resident Evil 5's soundtrack features an original theme song, titled "Pray", which was composed by Suzuki and sung by Oulimata Niang. ## Marketing and release Capcom announced Resident Evil 5 on July 20, 2005, and the company showed a brief trailer for the game at the Electronic Entertainment Expo (E3) in July 2007. 
The full E3 trailer became available on the Xbox Live Marketplace and the PlayStation Store that same month. A new trailer debuted on Spike TV's GameTrailers TV in May 2008, and on the GameTrailers website. A playable game demo was released in Japan on December 5, 2008, for the Xbox 360, in North America and Europe for the Xbox 360 on January 26, 2009, and on February 2 for the PlayStation 3. Worldwide downloads of the demo exceeded four million for the two consoles; over 1.8 million were downloaded between January 26 and January 29. In January 2009, D+PAD Magazine reported that Resident Evil 5 would be released with limited-edition Xbox 360 box art; pictures of the limited-edition box claimed that it would allow two to sixteen players to play offline via System Link. Although Capcom said that their "box art isn't lying", the company did not provide details. Capcom soon issued another statement that the box-art information was incorrect, and System Link could support only two players. Microsoft released a limited-edition, red Xbox 360 Elite console which was sold with the game. The package included an exclusive Resident Evil theme for the Xbox 360 Dashboard and a download voucher for Super Street Fighter II Turbo HD Remix from Xbox Live. Resident Evil 5 was released for PlayStation 3 and Xbox 360 in March 2009, alongside a dedicated Game Space on PlayStation Home. The space, Resident Evil 5 "Studio Lot" (Biohazard 5 "Film Studio" in Japan), had as its theme the in-game location of Kijuju. Its lounge offered Resident Evil 5-related items for sale, events and full game-launching support. Some areas of the space were available only to owners of Resident Evil 5. A Windows version was released in September 2009. This version, using Nvidia's 3D Vision technology through DirectX 10, includes more costumes and a new mode in the Mercenaries minigame. Resident Evil 5 was re-released on Shield Android TV in May 2016, and was re-released on PlayStation 4 and Xbox One the following month, with a physical disc copy following in America that July. It was also released for Nintendo Switch on October 29, 2019. ## Additional content Shortly before the release of Resident Evil 5, Capcom announced that a competitive multiplayer mode called Versus would be available for download in several weeks. Versus became available for download in Europe and North America on April 7, 2009, through the Xbox Live Marketplace and the PlayStation Store. Versus has two online game types: "Slayers", a point-based game challenging players to kill Majini, and "Survivors", where players hunt each other while dodging and attacking Majini. Both modes can be played by two-player teams. The Windows version of Resident Evil 5 originally did not support downloadable content (DLC). During Sony's press conference at the 2009 Tokyo Game Show Capcom announced that a special edition of the game, Biohazard 5: Alternative Edition, would be released in Japan for the PlayStation 3 in the spring of 2010. This edition supports the PlayStation Move accessory and includes a new scenario, "Lost in Nightmares", where Chris Redfield and Jill Valentine infiltrate one of Umbrella Corporation co-founder Oswell E. Spencer's estates in 2006. Another special edition of the game, Resident Evil 5: Gold Edition, was released for the Xbox 360 and PlayStation 3 in North America and Europe. 
Gold Edition includes "Lost in Nightmares" and another campaign-expansion episode, "Desperate Escape", where players control Josh Stone and Jill Valentine as they assist Chris and Sheva. The edition also includes the previously released Versus mode, four new costumes and an alternate Mercenaries mode with eight new playable characters, new items and maps. Like Alternative Edition, Gold Edition supports the PlayStation Move accessory via a patch released on September 14, 2010. The Xbox 360 version of Gold Edition came on a DVD with a token allowing free download of all DLC, while the PlayStation 3 version had all of the new content on a single Blu-ray disc. On November 5, 2012, Resident Evil 5: Gold Edition was placed on the PlayStation Network as a free download for PlayStation Plus users during that month. As part of the game's conversion to Steamworks, Gold Edition was released for Microsoft Windows on March 26, 2015. Owners of the Steam version or of a boxed retail Games for Windows – Live copy can acquire a free Steamworks copy of the base game and purchase the new Gold Edition content. The Steamworks version did not allow the use of Nvidia's 3D Vision technology or fan modifications, though Capcom later confirmed a way to work around these issues. In 2023, an update was released for the Windows version that removed Games for Windows – Live, thus restoring the split-screen co-op feature to the game. ## Reception Resident Evil 5 received generally favourable reviews, according to review aggregator Metacritic. Reviewers praised the game's visuals and content. Corey Cohen of Official Xbox Magazine complimented the game's fast pace, and called the graphics gorgeous. It was praised by Joe Juba and Matt Miller of Game Informer, who said that it had the best graphics of any game to date and that the music and voice acting helped bring the characters to life, and Brian Crecente of Kotaku said it was one of the most visually stunning games he had ever played. Adam Sessler of X-Play said the game's graphics were exceptional, and Edge praised the gameplay as exhilarating and frantic. For IGN, Ryan Geddes wrote that the game had a surprisingly high replay value, and GameZone's Louis Bedigian said the game was "worth playing through twice in one weekend". While still giving favorable reviews of the game, several reviewers considered it to be a departure from the survival horror genre, a decision they lamented. Chris Hudak of Game Revolution considered the game to be a "full-on action blockbuster", and Brian Crecente said that about halfway through the game it "dropp[ed] all pretense of being a survival horror title and unmask[ed] itself as an action shooter title". Kristan Reed of Eurogamer said the game "morphs what was a survival horror adventure into a survival horror shooter", and believed that this attempt to appeal to action gamers would upset some of the series' fans. Aspects of the game's control scheme were viewed negatively by critics. James Mielke of 1UP.com criticized several inconsistencies in the game, such as only being able to take cover from enemy fire in very specific areas. Mielke also criticized its controls, saying that aiming was too slow and noting the inability to strafe away from (or quickly jump back from) enemies. Despite the problems, he found it was "still a very fun game". Kristan Reed also had criticism of some controls, such as the speed at which 180-degree turns were performed and difficulty accessing inventory.
Joe Juba said that the inability to move and shoot at the same time seemed more "like a cheap and artificial way to increase difficulty than a technique to enhance tension." While praising some aspects of the AI control of Sheva, Ryan Geddes thought that it also had its annoyances, such as its tendency to recklessly expend ammunition and health supplies. Reception of the downloadable content was favorable. Steven Hopper of GameZone rated the "Lost in Nightmares" DLC eight out of ten, saying that despite the episode's brevity it had high replay value and the addition of new multiplayer elements made it a "worthy investment for fans of the original game." Samuel Claiborn of IGN rated the "Desperate Escape" DLC seven out of ten: "Despite Desperate Escape's well-crafted action sequences, I actually found myself missing the unique vibe of Lost in Nightmares. The dynamic between Jill and Josh isn't particularly thrilling, and the one-liners, banter and endearing kitsch are kept to a minimum." ### Allegations of racism Resident Evil 5's 2007 E3 trailer was criticized for depicting a white protagonist killing black enemies in a small African village. According to Newsweek editor N'Gai Croal, "There was a lot of imagery in that trailer that dovetailed with classic racist imagery", although he acknowledged that only the preview had been released. Takeuchi said the game's producers were completely surprised by the complaints. The second trailer for the game, released on May 31, 2008, revealed a more racially diverse group of enemies and the African BSAA agent Sheva, who assists the protagonist. Critics felt that Sheva's character was added to address the issue of racism, though Karen Dyer said the character had been in development before the first trailer was released. Takeuchi denied that complaints about racism had any effect in altering the design of Resident Evil 5. He acknowledged that different cultures may have had differing opinions about the trailer, though said he did not expect there to be further complaints once the game was released and people were "able to play the game and see what it is for themselves". In a Computer and Video Games interview, producer Masachika Kawata also addressed the issue: "We can't please everyone. We're in the entertainment business—we're not here to state our political opinion or anything like that. It's unfortunate that some people felt that way." In Eurogamer's February 2009 preview of Resident Evil 5, Dan Whitehead expressed concern about controversy the game might generate: "It plays so blatantly into the old clichés of the dangerous 'dark continent' and the primitive lust of its inhabitants that you'd swear the game was written in the 1920s". Whitehead said that these issues became more "outrageous and outdated" as the game progressed and that the addition of the "light-skinned" Sheva just made the overall issue worse. Hilary Goldstein from IGN believed that the game was not deliberately racist, and though he did not personally find it offensive, he felt that others would due to the subjective nature of offensiveness. Chris Hudak dismissed any allegations of racism as "stupid". Karen Dyer, who is of Jamaican descent, also dismissed the claims. She said that in over a year of working on the game's development she never encountered anything racially insensitive, and would not have continued working there if she had. 
Wesley Yin-Poole of VideoGamer.com said that despite the controversy the game was attracting due to alleged racism, no expert opinion had been sought. He asked Glenn Bowman, senior lecturer in social anthropology at the University of Kent, whether he thought the game was racist. Bowman considered the racism accusations "silly", saying that the game had an anti-colonial theme and those complaining about the game's racism might be expressing an "inverted racism which says that you can't have scary people who are black". It was reported that one cutscene in the game showed "black men" dragging off a screaming white woman; according to Yin-Poole, the allegation was incorrect and the single man dragging the woman was "not obviously black". The scene was submitted to the British Board of Film Classification for evaluation. BBFC head of communications Sue Clark said, "There is only one man pulling the blonde woman in from the balcony [and he] is not black either. As the whole game is set in Africa it is hardly surprising that some of the characters are black ... we do take racism very seriously, but in this case, there is no issue around racism." Academic journals and conferences, however, have continued to comment on the theme of race within the game. In 2011, André Brock from Games and Culture said that the game drew from well-established racial and gender stereotypes, arguing that the African people were depicted only as savage, even before transitioning into zombies. Writing for the Digital Games Research Association in 2011, Geyser and Tshabalala noted that racial stereotyping had never been intended by Capcom, though they compared the game's depiction of Africa to that of the 1899 novel Heart of Darkness. Post-colonial Africa, they opined, was portrayed as being unable to take care of itself, and at the mercy of Western influences. Writing for The Philosophy of Computer Games Conference in 2015, Harrer and Pichlmair considered Resident Evil 5 to be "yet another moment in the history of commodity racism, which from the late 19th century onwards allowed popular depictions of racial stereotypes to enter the most intimate spaces of European homes". The authors state that Africa is presented from a Western gaze: "what is presented as 'authentic' blackness conforms to the projected fantasy of predominantly white gaming audience". In 2016, Paul Martin from Games and Culture said that the theme of the game could be described as "dark continent", stating it drew on imagery of European colonialism and depictions of "Blackness" reminiscent of 19th-century European theories on race. ### Sales The PlayStation 3 version of Resident Evil 5 was the top-selling game in Japan in the two weeks following its release, with 319,590 units sold. In March 2009, it became the fastest-selling game of the franchise in the United Kingdom, and the biggest Xbox 360 and PlayStation 3 game release in the country. By December 2022, Resident Evil 5 had sold 8.6 million units worldwide on PlayStation 3 and Xbox 360 with its original release. The Gold Edition had sold an additional 2.3 million units on PlayStation 3 and Xbox 360. The PlayStation 4 and Xbox One versions sold another 2.6 million units combined, bringing the total sales to 13.5 million units. The original release of Resident Evil 5 was Capcom's best-selling individual edition of a game until March 2018, when Monster Hunter: World's sales reached 7.5 million units, compared to 7.3 million for Resident Evil 5 at the time.
As of June 2020, when taking into consideration the sales of all versions and re-releases of titles, Resident Evil 5 was the third-best-selling Capcom game overall, behind Monster Hunter: World (16.1 million) and Street Fighter II (14.05 million), and it remains the best-selling title in the Resident Evil franchise, outselling its closest rivals Resident Evil 4 and Resident Evil 6 by 1.6 million and 2 million units, respectively. ### Awards Resident Evil 5 won the "Award of Excellence" at the 2009 Japan Game Awards. It was nominated for both Best Action/Adventure Game and Best Console Game at the 2008 Game Critics Awards, Best Action Game at the 2009 IGN Game of the Year Awards, and Best Sound Editing in Computer Entertainment at the 2010 Golden Reel Awards. It received five nominations at the 2010 Game Audio Network Guild Awards: Audio of the Year, Best Cinematic/Cut-Scene Audio, Best Dialogue, Best Original Vocal Song – Pop (for the theme song "Pray") and Best Use of Multi-Channel Surround in a Game. Karen Dyer's portrayal of Sheva Alomar was nominated for Outstanding Achievement in Character Performance at the 13th Annual Interactive Achievement Awards, while the game itself garnered a nomination for Outstanding Achievement in Art Direction.
151,608
Ian Chappell
1,173,103,605
Australian cricketer
[ "1943 births", "Australia One Day International cricketers", "Australia Test cricket captains", "Australia Test cricketers", "Australian Cricket Hall of Fame inductees", "Australian baseball players", "Australian cricket commentators", "Australian cricketers", "Australian expatriate sportspeople in England", "Australian republicans", "Chappell family", "Cricketers at the 1975 Cricket World Cup", "Cricketers from Adelaide", "Lancashire cricketers", "Living people", "People educated at Prince Alfred College", "South Australia cricketers", "Sport Australia Hall of Fame inductees", "Sportsmen from South Australia", "Wisden Cricketers of the Year", "World Series Cricket players" ]
Ian Michael Chappell (born 26 September 1943) is a former cricketer who played for South Australia and Australia. He captained Australia between 1971 and 1975 before taking a central role in the breakaway World Series Cricket organisation. Born into a cricketing family—his grandfather and brother also captained Australia—Chappell made a hesitant start to international cricket, playing as a right-hand middle-order batsman and spin bowler. He found his niche when promoted to bat at number three. Known as "Chappelli", he earned a reputation as one of the greatest captains the game has seen. Chappell's blunt verbal manner led to a series of confrontations with opposition players and cricket administrators; the issue of sledging first arose during his tenure as captain, and he was a driving force behind the professionalisation of Australian cricket in the 1970s. John Arlott called him "a cricketer of effect rather than the graces". An animated presence at the batting crease, he constantly adjusted his equipment and clothing, and restlessly tapped his bat on the ground as the bowler ran in. Basing his game on a sound defence learned during many hours of childhood lessons, Chappell employed the drive and square cut to full effect. He had an idiosyncratic method of playing back and across to a ball of full length and driving wide of mid-on, but his trademark shot was the hook, famously saying "three bouncers an over should be worth 12 runs to me". A specialist slip fielder, he was the fourth player to take one hundred Test catches. Since his retirement in 1980, he has pursued a high-profile career as a sports journalist and cricket commentator, predominantly with Channel Nine. He remains a key figure in Australian cricket: in 2006, Shane Warne called Chappell the biggest influence on his career. Chappell was inducted into the Sport Australia Hall of Fame in 1986, the FICA Cricket Hall of Fame in 2000 and the Australian Cricket Hall of Fame in 2003. On 9 July 2009, Ian Chappell was inducted into the ICC Cricket Hall of Fame. ## Family and early career The first of three sons (Ian, Greg and Trevor) born in Unley, near Adelaide, to Martin and Jeanne, Chappell was steeped in the game from an early age. His father was a noted Adelaide grade cricketer who put a bat in his hands as soon as he could walk, and his maternal grandfather was famous all-round sportsman Vic Richardson, who captained Australia at the end of a nineteen-Test career. Chappell was given weekly batting lessons from the age of five, as were younger brothers Greg and Trevor, who both also went on to play for Australia. Chappell grew up in the beachside suburb of Glenelg and attended the local St Leonard's Primary School, where he played his first competitive match at the age of seven. He was later selected for the South Australian state schoolboys team. He then enrolled at Prince Alfred College, a private secondary school noted for producing many Test cricketers, including the Australian captains Joe Darling and Clem Hill. His other sporting pursuits included Australian football and baseball: Chappell's performances for South Australia in the Claxton Shield won him All-Australian selection in 1964 and 1966 as a catcher. He credits Vic Richardson, who had represented both SA and Australia in baseball during the 1920s, with his love of the sport. At the age of 18, his form in grade cricket for Glenelg led to his first-class debut for South Australia (SA) against Tasmania in early 1962.
The aggressive style of Sobers and of South Australia captain Les Favell heavily influenced Chappell during his formative years in senior cricket. In 1962–63, Chappell made his initial first-class century against a New South Wales team led by Australian captain Richie Benaud, who was bemused by the young batsman's habit of gritting his teeth as he faced up; to Benaud, it looked as if he was grinning. Chappell spent the northern summer of 1963 as a professional in England's Lancashire League with Ramsbottom and played a single first-class match for Lancashire against Cambridge University. ## International career In 1963–64, Chappell batted at number three for SA for the first time, in a match against Queensland at Brisbane, and scored 205 not out. He was the youngest member of the SA team that won the Sheffield Shield that season. A century against Victoria early the following season resulted in Chappell's selection for a one-off Test against Pakistan at Melbourne in December 1964. He made 11 and took four catches, but was dropped until the Fourth Test in the 1965–66 Ashes series. Chappell supplemented his aggressive batting with brilliant fielding in the slips, and he showed promise as a leg-spinner. At this point, the selectors and captain Bob Simpson considered him an all-rounder: he batted at number seven and bowled 26 (eight-ball) overs for the match. ### Hesitant start He retained his place for the following Test and for the tour of South Africa in summer 1966–67. Playing in a side defeated 1–3, Chappell struggled to make an impression. His highest score in ten Test innings was 49, while his five wickets cost 59 runs each. On the advice of Simpson, he ceased playing the hook shot as it was often leading to his dismissal. In the first Test of 1967–68 against India, he failed twice batting in the middle order. Heading into the second Test at Melbourne, Chappell's place was in jeopardy, but he rode his luck to score 151 – his innings contained five chances that the Indians failed to take. However, in the remainder of the series, he managed only 46 runs in four innings, so his selection for the 1968 tour of England was based as much on potential as form. In England, Chappell rewarded the faith of the selectors by scoring the most first-class runs on the tour (1,261 runs, including 202 not out against Warwickshire), leading the Australian Test aggregates with 348 runs (at 43.50). His top score was 81 in the fourth Test at Leeds. Wisden lauded his play off the back foot and judged him the most difficult Australian batsman to dismiss. ### Promotion to number three A string of big scores and a record number of catches during the 1968–69 season earned Chappell the Australian Cricketer of the Year award. Against the touring West Indies, Chappell hit 188 not out, 123, 117, 180 and 165 before the New Year. Two of these centuries came in the Test series, when Chappell's average for 548 runs was 68.50. Chappell was elevated to number three in the batting order and became a less-frequent bowler; he was also appointed vice-captain of the team. Following up with a successful tour of India in late 1969, Chappell demonstrated his fluency against spin bowling by compiling Test innings of 138 at Delhi and 99 at Kolkata. His ability against both fast and slow bowling earned high praise, including from his captain Bill Lawry. When the Australians arrived in South Africa in early 1970, following their victory over India, Lawry told the local media that Chappell was the best all-round batsman in the world. 
His appraisal looked misguided when Chappell managed just 92 runs (at 11.5 average), with a top score of 34, as Australia lost 0–4. On this tour, Chappell clashed with cricket administrators over pay and conditions for the first time. The South African authorities requested that an extra Test be added to the fixture and the Australian Board of Control consented. Incensed that the players were not consulted about the change, Chappell led a group of his teammates in a demand for more money to play the proposed game. Eventually the match was cancelled after Chappell and his supporters refused to back down. ### Captaincy Chappell became South Australian captain when the long-serving Les Favell retired at the start of the 1970–71 season. His younger brother Greg made his debut in the second Test of the summer against Ray Illingworth's England. Facing an English attack led by the hostile fast bowling of John Snow, Chappell scored a half-century in each of the first two Tests, but failed to capitalise on good starts while Greg Chappell scored 108 in his initial innings. Rain caused the abandonment of the third Test without a ball being bowled. Temporarily promoted to open the batting, Chappell failed in the fourth Test as Australia lost. In the fifth Test at Melbourne, he returned to number three and started nervously. Dropped on 0 and 14, Chappell found form and went on to post his maiden Ashes century (111 from 212 balls), which he followed with scores of 28 and 104 in the sixth Test. The washed-out Test resulted in a late change to the schedule and an unprecedented seventh Test was played at Sydney in February 1971. Trailing 0–1 in the series, Australia could retain The Ashes by winning this game. Australia's performances were hampered by playing slow, defensive cricket. In a radical attempt to breathe some aggression into the team, the selectors sacked captain Bill Lawry and appointed Chappell in his stead. Dismayed by the manner of Lawry's dismissal, Chappell responded with an attacking performance as captain: he won the toss, put England in and dismissed them for 184, and Australia led by 80 runs on the first innings, but, set 223 to win, they folded for 160 and lost The Ashes after holding them for 12 years. Chappell gained some consolation at the end of a dramatic summer when he led SA to the Sheffield Shield, the team's first win for seven years. Chappell's battles against the short-pitched bowling of Snow during the season compelled him to reappraise his game. Following a conversation with Sir Donald Bradman, he decided to reinstate the hook shot and spent the winter months practising the stroke by hitting baseballs thrown by his brother Greg. ### A team in his own image > Ian Chappell fashioned an Australian team in his own image between 1971 and 1975: aggressive, resourceful and insouciant. Australia lost an unofficial Test series to a Rest of the World team led by Garry Sobers that toured in 1971–72 as a replacement for the politically unacceptable South Africans. Chappell was the outstanding batsman of the series, with four centuries included in his 634 runs, at an average of 79.25. He took the team to England in 1972 and was unlucky not to regain The Ashes in a rubber that ended 2–2. The series began disastrously for Chappell when he was out hooking from the first ball he faced in the opening Test at Manchester. He fell the same way in the second innings and Australia lost the match.
However, the team regrouped and had the better of the remaining matches, apart from the fourth Test at Leeds, played on a controversial pitch that the Australians believed was "doctored" to suit the England team. Greg Chappell emerged as a prolific batsman during the series, batting one place below his brother in the order. The siblings shared several crucial partnerships, most notably 201 at the Oval in the last Test when they became the first brothers to score centuries in the same Test innings. Australia won the game, an effort that Chappell later cited as the turning point in the team's performances. In 1972–73, Australia had resounding victories against Pakistan (at home) and the West Indies (away). Chappell's leadership qualities stood out in a number of tight situations. He hit his highest Test score of 196 (from 243 balls) in the first Test against Pakistan at Adelaide. Pakistan "appeared probable winners of the last two Tests on the second last day of each game", yet Chappell's team managed to win on both occasions. On indifferent pitches in the Caribbean, Chappell was the highest-scoring batsman of the Test series with 542 runs (at 77.4 average). He hit 209 in a tour match against Barbados, two Test centuries and a "glorious" 97 on a poor pitch at Trinidad in the third Test, batting with an injured ankle. This set up a dramatic last day when the West Indies needed just 66 runs to win with six wickets in hand at lunch. The home team collapsed against an inspired Australian bowling attack supported by Chappell's aggressive field-placements. ### The ugly Australians Australia played six Tests against New Zealand on both sides of the Tasman in 1973–74. Chappell led his team to a 2–0 victory in the three Tests played in Australia. During the third Test at Adelaide, he equalled the world record of six catches in a Test match by a fielder, which was beaten by his brother Greg the following season. In the drawn first Test at Wellington, the Chappells became the first brothers to each score a century in both innings of a Test match. The Australians lost to the Kiwis for the first time ever in the second Test at Christchurch, when Chappell was involved in a verbal confrontation with the leading New Zealand batsman, Glenn Turner. The Australians then played an ill-tempered tour match at Dunedin that didn't enhance the reputation of Chappell or his team, before winning the final Test at Auckland. On this tour, the behaviour of the team was questioned with some journalists labelling them "ugly Australians". In 1976, Chappell wrote about his attitude to the opposition: > ... although we didn't deliberately set out to be a 'bunch of bastards' when we walked on to the field, I'd much prefer any team I captained to be described like that than as 'a nice bunch of blokes on the field.' As captain of Australia my philosophy was simple: between 11.00am and 6.00pm there was no time to be a nice guy. I believed that on the field players should concentrate on giving their best to the team, to themselves and to winning; in other words, playing hard and fairly within the rules. To my mind, doing all that left no time for being a nice guy. The increasing prevalence of verbal confrontation on the field (later known as sledging) concerned cricket administrators and became a regular topic for the media. Its instigation is sometimes attributed to Chappell. 
By his own admission, he was a frequent user of profanity who was often at "boiling point" on the field, but claims that the various incidents he was involved in were not a premeditated tactic. Rather, they were a case of him losing his temper with an opponent. ### The Ashes regained and the first World Cup The highlight of Chappell's career was Australia's 4–1 win over England in 1974–75 that reclaimed The Ashes. Strengthened by the new fast bowling partnership of Dennis Lillee and Jeff Thomson, the Australians played aggressive cricket and received criticism for the amount of short-pitched bowling they employed. Chappell scored 90 on an "unreliable" pitch on the first day of the opening Test at Brisbane. He finished the six Tests with 387 runs at 35.18 average, and took 11 catches in the slips. The Test matches attracted big crowds and record gate takings, enabling Chappell to negotiate a bonus for the players from the Australian Cricket Board (ACB). Although this more than doubled the players' pay, their remuneration amounted to only 4.5% of the revenue generated by the series. Within months, Chappell was back in England leading Australia in the inaugural World Cup. His dislike of the defensive nature of limited-over cricket led to the Australians placing a full slips cordon for the new ball and employing Test-match style tactics in the tournament. Despite the apparent unsuitability of this approach, Chappell guided the team to the final where they lost a memorable match to the West Indies. The workload of the captaincy was telling on Chappell and the four-Test Ashes series that followed the World Cup dampened his appetite for the game. After winning the only completed match of the series, the first Test at Birmingham, Australia's retention of the Ashes was anticlimactic: the third Test at Leeds was abandoned due to vandalism of the pitch during the night before the last day's play. In the last Test at the Oval, Chappell scored 192 from 367 balls to set up an apparent victory. However, England managed to bat for almost 15 hours to grind out a draw and Chappell announced his resignation from the captaincy on the final day of the match. In 30 Tests as captain, he scored 2,550 runs at an average of 50, with seven centuries. ### First retirement Remaining available for Test cricket, he played in the 1975–76 series against the West Indies under the captaincy of his brother Greg. Australia avenged their loss in the World Cup final by winning 5–1, claiming the unofficial title of best team in the world. During the season, Chappell incurred censure for his behaviour in a Sheffield Shield match and was warned not to continue wearing a pair of adidas boots with the three stripes clearly visible. This breached the prevailing protocol of cricketers wearing all white. His highest innings of the summer was 156 during Australia's only loss, at Perth in the second Test. Wisden nominated him as the most influential player of the series for his 449 runs at an average of 44.90. Throughout the course of the series, Chappell passed two significant milestones when he became the fourth Australian to make 5,000 runs in Test cricket and the first player to hold one hundred Test catches for Australia. The summer ended in controversy and triumph in the domestic competition. During a dispute with the SACA over team selection, he threatened a "strike" action by the SA team. 
After the matter was resolved, Chappell led the side to the Sheffield Shield title for the second time in his career and shared the inaugural Sheffield Shield player of the season award with his brother Greg. He retired from first-class cricket at the end of the season, aged only 32. ## World Series Cricket and aftermath In 1976, Chappell toured South Africa with Richie Benaud's International Wanderers team, released his autobiography Chappelli and was named as one of five Wisden Cricketers of the Year. He was hired to spend the summer of 1976–77 as a guest professional in the Melbourne district competition where he was paid more than he had been as Australian captain. During the season, he was involved in a famous altercation with a young English all-rounder who was in Victoria on a cricketing scholarship, Ian Botham. Both men have put forward vastly different versions as to what happened during the physical confrontation in a Melbourne pub. The animosity between them continues and Channel Nine used it as a marketing ploy when Botham temporarily partnered Chappell as a television commentator during the 1998–99 season. Botham again revived the feud in his 2007 autobiography with another version of the incident. ### Rebel skipper Throughout his career, Chappell found the ACB obdurate in his attempts to make a living from the game. In 1969 and 1970, they refused his applications to play professionally in England. As Australian captain, he made several unsuccessful representations at ACB meetings in an effort to secure a more realistic financial deal for the Australian players. In consultations with the then-president of the ACTU, Bob Hawke, he explored the possibility of unionising the players. Approached to lead an Australian team in World Series Cricket (WSC), a breakaway professional competition organised by Kerry Packer for Channel Nine, Chappell signed a three-year contract worth A\$75,000 in 1976. His participation was, "fundamental to the credibility of the enterprise". Chappell devised the list of Australian players to be signed, and was involved in the organisation and marketing of WSC. His central role was the result of, "years of personal disaffection with cricket officialdom", in particular Don Bradman. Recently, Chappell wrote: > While captaining Australia, I was approached on three separate occasions before WSC to play 'professional' cricket, and each time I advised the entrepreneurs to meet the appropriate cricket board because they controlled the grounds. On each occasion, the administrators sent the entrepreneurs packing and it quickly became clear they weren't interested in a better deal for the players. > > That's why I say the players didn't stab the ACB in the back. The administrators had numerous opportunities to reach a compromise but displayed little interest in the welfare of the players. It wasn't really surprising then that more than 50 players from around the world signed lucrative WSC contracts and a revolution was born. About half of the WSC players were from Australia and this high ratio can, in part, be attributed to Bradman's tight-fisted approach to the ACB's money. In WSC's debut season of 1977–78, Chappell hit the first Supertest century and finished fifth in overall averages. The prevalence of short-pitched fast bowling and a serious injury to Australian David Hookes led to the innovation of batting helmets; Chappell was one of the many batsmen to use one. 
Following their 1975–76 tour of Australia, the West Indies adopted a four-man fast bowling attack, while the World XI contained fast bowlers of the calibre of Imran Khan, Mike Procter, Garth Le Roux, Clive Rice and Sarfraz Nawaz. The constant diet of pace bowling undermined the confidence of some batsmen during WSC. Chappell's form fell away during the second season and he scored only 181 runs at 25.85 in four Supertests. During the last six days of the season, the WSC Australians lost the finals of both the limited-overs competition (to the West Indies XI) and the Supertest series (to the World XI), thus forfeiting the winner-takes-all prize money. After the latter match, Chappell vented his frustrations on World XI captain Tony Greig by refusing to shake his hand and criticising Greig's inconsequential contribution to his team's victory. The final act of the competition was a series between the WSC Australians and the WSC West Indies played in the Caribbean in the spring of 1979. After the Australians suffered a heavy defeat in the first Supertest at Jamaica, Chappell rallied his team to draw the five-match series one-all. His best efforts were scores of 61 and 86 at Barbados. ### Return to Tests Convinced to return to official cricket when WSC ended, Chappell resumed as captain of SA in 1979–80, a decision he later regretted. It was a season too far for the increasingly irascible Chappell. Reported by an umpire for swearing in a match against Tasmania, he received a three-week suspension. In his first match after the ban, he was again reported for his conduct in a game against the touring English team. Given a suspended ban by the ACB, he was then selected for Australia's last three Tests of the season. His Test career finished with scores of 75 and 26 not out at the MCG against England in February 1980. In his final first-class match, SA needed to beat Victoria to win the Sheffield Shield. Although Chappell scored 112, SA lost the match and the shield. Ironically, the umpires voted him the competition's player of the season for a second time. ### ODI record Chappell's aggressive approach suited limited-overs cricket: he scored his runs at a strike-rate of 77 runs per hundred balls. The timing of his career limited him to 16 ODI matches, but he appeared in a number of historic fixtures such as the first ODI (at the MCG in 1971), the first World Cup final (at Lord's in 1975) and the first day/night match (during WSC, at VFL Park in 1978). He passed fifty in half of his innings with a top score of 86 at Christchurch in 1973–74. In his final season of international cricket, he scored 63 not out (from 65 balls) against the West Indies at the SCG to win the player of the match award; five days later he hit an unbeaten 60 from 50 balls in his penultimate ODI appearance, against England. As captain, he recorded six wins and five losses from 11 matches. He is also credited with hitting the first six in ODI cricket, during the first ODI ever played. ### Legacy The title of the ABC's documentary The Chappell Era, broadcast in 2002, encapsulated Chappell's significance to Australian cricket. Subtitled Cricket in the '70s, it chronicled the rise of the Australian cricket team under Chappell, the fight for better pay for the players, and the professionalisation of the game through WSC. During the program, Chappell reiterated his criticisms of cricket's administration at the time.
In Wisden, Richie Benaud wrote, "Chappell will be remembered as much for his bid to improve the players' lot as he will for his run-getting and captaincy". During the WSC period, he founded a players' association with a loan provided by Kerry Packer. Despite Chappell's continued support for the organisation after his retirement, apathy and a lack of recognition from the ACB led to its demise in 1988. Revived in 1997 as the Australian Cricketers' Association (ACA), it is now an important organisation within the structure of Australian cricket. In 2005, Chappell became a member of the ACA executive. Chappell was inducted into the Sport Australia Hall of Fame in 1986, the FICA Cricket Hall of Fame in 2000 and the Australian Cricket Hall of Fame in 2003. Two new grandstands at the Adelaide Oval were named the Chappell Stands; at the dedication ceremony in 2003, the SACA president Ian McLachlan called the Chappells "the most famous cricketing family in South Australia". In 2004, the Chappell family was again honoured with the creation of the Chappell–Hadlee Trophy, an annual series of ODI matches played between Australia and New Zealand. Chappell is the leading advocate for greater formal recognition of the first Australian sporting team to travel overseas, the Australian Aboriginal cricket team in England in 1868. ## Feud with Ian Botham Chappell has an infamous feud with Ian Botham, which started over an incident in 1977 and continues to this day. The pair were assigned to commentate together in 1998 and did not exchange a word. In 2023, the pair were brought together for a TV special, which only seemed to make matters worse and reignite the feud. ## Media career Following the path of his grandfather Vic Richardson, who was a radio commentator for many years, Chappell entered the media in 1973 by writing magazine articles and a column for The Age. He did television commentary for the 0–10 Network and the BBC before playing WSC. During the 1980s, Chappell spent eight years co-hosting Wide World of Sports, an innovative magazine-style program broadcast by Channel Nine on Saturday afternoons, with Mike Gibson, and co-hosted a sister show, Sports Sunday, for five years. Early in his stint on the former program, he swore without realising that he was live to air. A similar incident occurred during a live telecast of the 1993 Ashes series. Chappell began working as a commentator for Channel Nine's cricket coverage in the 1980–81 season, a position he retained until the network lost the Australian home cricket rights to Channel 7 in April 2018. Chappell became a radio commentator for Macquarie Sports Radio in 2018. He later moved to ABC Radio before retiring in August 2022. ### Leadership critiques #### Greg Chappell The greatest controversy of his first season as a commentator was the Underarm Incident, which involved his two younger brothers in an ODI played between Australia and New Zealand at the Melbourne Cricket Ground. Chappell showed no fraternal bias and was vehement in his criticism of his brother Greg's tactic. He wrote in a newspaper column on the matter: "Fair dinkum, Greg, how much pride do you sacrifice to win \$35,000?" #### Kim Hughes He supported the claims of Rod Marsh to the Australian captaincy over the incumbent, Kim Hughes, in the early 1980s. The constant campaign against Hughes destabilised his authority. Compounding the situation, the ACB compelled Hughes to be interviewed by Chappell on a regular basis. He also criticised Hughes's batting. 
"Hughes needs to score the runs when they are needed most. He is not doing this and his inconsistency is rubbing off on others... there is not a lot of thought in his batting". On the morning of the second Test against the West Indies in 1984–85, Chappell asked Hughes, "Three months ago, you claimed Australia possessed no Test-worthy legspinner. So what is Bob Holland doing in the team?" Hughes resigned as captain after that match. Following Hughes's resignation, Australian cricket went into turmoil and Chappell received a share of the blame for the outcome. #### Allan Border & Bob Simpson Chappell had a direct influence on Hughes's successor, Allan Border. Early in his captaincy tenure, Border was struggling with the burdens of the position, so the ACB appointed Bob Simpson as team coach to assist. There was animosity between Chappell and Simpson prior to this, and Chappell continued to deride the need for a coach. Simpson responded by writing that the peer influence of older players helping younger players fell away during the era when the Chappell brothers led the team, and he was redressing the problem. Chappell believed that the Border-Simpson leadership was too defensive and that Simpson usurped too much of Border's control of the team; Border heeded Chappell's assessment and adopted a more aggressive on-field approach later in his career and became known as "Captain Grumpy" to his teammates. Mark Taylor, who captained the team after Border, moved to dilute Simpson's authority. Chappell remains a long-standing critic of the use of coaches by national teams. #### Steve Waugh Ian Chappell was often critical of Steve Waugh as captain, believing him to be a selfish player and an unimaginative captain. When Waugh was appointed captain in 1999, Chappell said: > I think he's been a selfish cricketer . . . I've always felt that the things you do as a player leading up to getting the captaincy do have an effect [on] how players perceive you. I've had the feeling that a selfish player when he becomes captain . . . gets a little less out of his players than someone who is not selfish. Chappell felt Shane Warne should have been picked as captain instead and his criticism of Waugh's captaincy did not abate during Waugh's stint in that role, despite his success. Waugh later wrote of Chappell: > Ian Chappell ... always sweated on my blunders and reported them with an 'I told you so' mentality ... To say Chappell's criticism irked me would be an understatement, though I knew that, like anyone, he was entitled to an opinion. I don't mind the fact he criticised me — in fact, I would much rather someone make a judgement than not, but I have always felt that a critic must be either constructive or base his comments on fact ... It was something I had to live with, and when I realised he was never going to cut me much slack, I decided anything he said that was positive would be a bonus and the rest just cast aside. Chappell rated Ricky Ponting a better captain than Waugh. ### Player critiques Chappell has been a vocal critic of a number of Australian players, most recently Ed Cowan and George Bailey. ### Books and writings Chappell's first book was an account of the 1972 Ashes tour, Tigers Among the Lions, followed by a series of books of cricketing humour and anecdotes published in the early 1980s. The more analytical The Cutting Edge, an appraisal of modern cricket, appeared in 1992.
Ashley Mallett's biography, Chappelli Speaks Out (published in the UK as Hitting Out – the Ian Chappell Story), was written in collaboration with Chappell and released in 2005. It caused controversy due to Chappell's assessment of Steve Waugh, whom he described as "selfish" and who, as a captain, "ran out of ideas very quickly". In 2006, Chappell released an anthology of his cricket writings entitled A Golden Age. He is a regular contributor to ESPNcricinfo. ## Personal life After leaving school, Chappell spent two years as a clerk in a sharebroker's office, which he left to play league cricket in England. He then worked as a promotions representative for Nestlé and, later, the cigarette manufacturer WD & HO Wills. After eight years with Wills, Chappell capitalised on his fame as Australian captain by forming his own company specialising in advertising, promotion and journalism, which has remained his profession. He has been married twice and has a daughter, Amanda, with his first wife, Kay. Chappell now lives in Sydney with his second wife, Barbara-Ann. In recent years, Chappell has been a high-profile activist for better treatment of asylum seekers by the Australian government, in particular in opposition to its policy of mandatory detention. A founding member of the Australian Republican Movement, he supports Australia cutting ties with the United Kingdom and becoming a republic. In July 2019, Chappell announced that he had been undergoing radiotherapy for skin cancer.
66,364,100
Siege of Guînes (1352)
1,154,365,953
Siege during the Hundred Years' War
[ "1350s in France", "1352 in England", "Battles in Hauts-de-France", "Conflicts in 1352", "Edward III of England", "Hundred Years' War, 1337–1360", "Military history of the Pas-de-Calais", "Sieges involving England", "Sieges involving France", "Sieges of the Hundred Years' War" ]
The siege of Guînes took place from May to July 1352 when a French army under Geoffrey de Charny unsuccessfully attempted to recapture the French castle at Guînes which had been seized by the English the previous January. The siege was part of the Hundred Years' War and took place during the uneasy and ill-kept Truce of Calais. The strongly fortified castle had been taken by the English during a period of nominal truce and the English king, Edward III, decided to keep it. Charny led 4,500 men and retook the town but was unable to blockade the castle. After two months of fierce fighting, a large English night attack on the French camp inflicted a heavy defeat and the French withdrew. Guînes was incorporated into the Pale of Calais. The castle was besieged by the French in 1436 and 1514, but was relieved each time, before falling to the French in 1558. ## Background Since the Norman Conquest of 1066 English monarchs had held titles and lands within France, the possession of which made them vassals of the kings of France. Following a series of disagreements between Philip VI of France (r. 1328–1350) and Edward III of England (r. 1327–1377), on 24 May 1337 Philip's Great Council in Paris agreed that the lands held by Edward in France should be taken back into Philip's hands on the grounds that Edward was in breach of his obligations as a vassal. This marked the start of the Hundred Years' War, which was to last 116 years. After nine years of inconclusive but expensive warfare, Edward landed with an army in northern Normandy in July 1346. He then undertook the Crécy campaign, marching to the gates of Paris and then north across France. The English turned to fight Philip's much larger army at the Battle of Crécy, where the French were defeated with heavy loss. Edward needed a port where his army could regroup and be resupplied from the sea. The Channel port of Calais suited this purpose; it was also highly defensible and would provide a secure entrepôt into France for English armies. Calais could be easily resupplied by sea and defended by land. Edward's army laid siege to the port in September 1346. With French finances and morale at a low ebb after Crécy, Philip failed to relieve the town, and the starving defenders surrendered on 3 August 1347. By 28 September the Truce of Calais, intended to bring a temporary halt to the fighting, had been agreed. It was to run for nine months to 7 July 1348, but was extended repeatedly. The truce did not stop ongoing naval clashes between the two countries, nor small-scale fighting in Gascony and Brittany. In July 1348, a member of the King's Council, Geoffrey de Charny, was put in charge of all French forces in the north east. Despite the truce being in effect, Charny hatched a plan to retake Calais by subterfuge, and bribed Amerigo of Pavia, an Italian officer of the city garrison, to open a gate for a force led by Charny himself. The English king became aware of the plot, crossed the Channel and led his household knights and the Calais garrison in a surprise counter-attack. When the French approached on New Year's Day 1350, they were routed by this smaller force, with significant losses and all their leaders captured or killed; Charny was among the captured. In late 1350 Raoul, Count of Eu, the Grand Constable of France, returned after more than four years in English captivity. He was on parole from Edward personally, pending the handover of his ransom. This was an extremely large amount, rumoured to have been 80,000 écus, more than Raoul could afford.
It had been agreed that he would instead hand over the town of Guînes, 6 miles (9.7 km) from Calais, which was in his possession. This was a common method of settling ransoms. Guînes had an extremely strong keep, and was the leading fortification in the French defensive ring around Calais. English possession would go a long way to securing Calais against further surprise assaults. Guînes was of little financial value to Raoul, and it was clear that Edward was prepared to accept it in lieu of a full ransom payment only because of its strategic position. Angered by this attempt to weaken the French defensive ring around Calais, the new French king, John II, had Raoul executed for treason, preventing the transaction from taking place. This interference by the crown in a nobleman's personal affairs, especially one of such high status, caused uproar in France. ## English attack In early January 1352 a band of freelancing English soldiers, led by John of Doncaster, seized the castle of Guînes by a midnight escalade. The fortifications at Guînes were often used as quarters for English prisoners and according to some contemporary accounts Doncaster had been employed as forced labour there after being taken captive earlier in the war and had used the opportunity to examine the town's defences. After gaining his freedom he had remained in France as a member of the garrison of Calais, as he had been exiled from England for violent crimes. One of these sources suggests that Doncaster learnt the details of Guînes' defences through an affair with a French washerwoman. The French garrison of Guînes was not expecting an attack and Doncaster's party crossed the moat, scaled the walls, killed the sentries, stormed the keep, released the English prisoners there, and took over the whole castle. The French were furious: the acting commander, Hugues de Belconroy, was drawn and quartered for dereliction of duty, at the behest of Charny, who had returned to France after being ransomed from English captivity. French envoys rushed to London to deliver a strong protest to Edward on 15 January. This put Edward in a difficult position. The English had been strengthening the defences of Calais with the construction of fortified towers or bastions at bottlenecks on the roads through the marshes to the town. These could not compete with the strength of the defences at Guînes, which would greatly improve the security of the English enclave around Calais. However, retaining it would be a flagrant breach of the truce then in force. Edward would suffer a loss of honour and possibly provoke a resumption of open warfare, for which he was unprepared. He therefore ordered the English occupants to hand Guînes back. By coincidence, the English Parliament was scheduled to meet, with its opening session on 17 January. Several members of the King's Council made fiery, warmongering speeches and the parliament was persuaded to approve three years of war taxes. Reassured that he had adequate financial backing, Edward changed his mind. By the end of January the Captain of Calais had fresh orders: to take over the garrisoning of Guînes in the King's name. Doncaster was pardoned and rewarded. Determined to strike back, the French took desperate measures to raise money, and set about raising an army.
Geoffrey de Charny was again put in charge of all French forces in the north east. He assembled an army of 4,500 men, including 1,500 men-at-arms and a large number of Italian crossbowmen. By May the 115 men of the English garrison, commanded by Thomas Hogshaw, were under siege. The French reoccupied the town, but found it difficult to approach the castle. The marshy ground and many small waterways made it difficult to approach from most directions, while facilitating waterborne supply and reinforcement for the garrison. Charny decided that the only practicable approach was via the main entrance facing the town, which was defended by a strong barbican. He had a convent a short distance away converted into a fortress, surrounded by a stout palisade, and positioned catapults and cannons there. By the end of May the English authorities, concerned by these preparations, raised a force of more than 6,000 men, which was gradually shipped to Calais. From there they harassed the French in what the modern historian Jonathan Sumption describes as "savage and continual fighting" throughout June and early July. In mid-July a large contingent of troops arrived from England, and, reinforced by much of the Calais garrison, they were successful in approaching Guînes undetected and launching a night attack on the French camp. Many Frenchmen were killed and a large part of the palisade around the convent was destroyed. Shortly afterwards Charny abandoned the siege, leaving a garrison to hold the convent. The French captured and slighted a newly built English tower at Fretun, 3 miles (4.8 km) south west of Calais, then retreated to Saint-Omer, where their army disbanded. During the rest of the year the English expanded their enclave around Calais, building and strengthening fortifications on all the access routes through the marshes around Calais, forming what became the Pale of Calais. The potential offensive threat posed by Calais caused the French to garrison 60 fortified positions in an arc around the town, at ruinous expense. ## Aftermath The war also went badly for the French on other fronts, and, with the encouragement of the new pope, Innocent VI, a peace treaty was negotiated at Guînes beginning in early 1353. On 6 April 1354 a draft was agreed. This Treaty of Guînes would have ended the war, very much in England's favour. French and English ambassadors travelled to Avignon that winter to ratify the treaty in the presence of the Pope. This did not occur: the French king was persuaded that another round of warfare might leave him in a better negotiating position, and he withdrew his representatives. Charny was killed in 1356 at the Battle of Poitiers, when the French royal army was defeated by a smaller Anglo-Gascon force commanded by Edward's son, the Black Prince, and John was captured. In 1360, the Treaty of Brétigny ended the war, with vast areas of France being ceded to England, including Guînes and its county, which became part of the Pale of Calais. The castle was besieged by the French in 1436 and 1514, but was relieved each time. Guînes remained in English hands until it was recaptured by the French in 1558.
68,832
Norman Conquest
1,173,824,533
11th-century invasion and conquest of England by Normans
[ "1060s conflicts", "1066 in England", "11th century in England", "Duchy of Normandy", "England in the High Middle Ages", "Military history of England", "Military history of Normandy", "Norman conquest of England", "Succession to the British crown", "William the Conqueror" ]
The Norman Conquest (or the Conquest) was the 11th-century invasion and occupation of England by an army made up of thousands of Norman, Breton, Flemish, and French troops, all led by the Duke of Normandy, later styled William the Conqueror. William's claim to the English throne derived from his familial relationship with the childless Anglo-Saxon king Edward the Confessor, who may have encouraged William's hopes for the throne. Edward died in January 1066 and was succeeded by his brother-in-law Harold Godwinson. The Norwegian king Harald Hardrada invaded northern England in September 1066 and was victorious at the Battle of Fulford on 20 September, but Godwinson's army defeated and killed Hardrada at the Battle of Stamford Bridge on 25 September. Three days later, on 28 September, William's invasion force of thousands of men and hundreds of ships landed at Pevensey in Sussex in southern England. Harold marched south to oppose him, leaving a significant portion of his army in the north. Harold's army confronted William's invaders on 14 October at the Battle of Hastings. William's force defeated Harold, who was killed in the engagement, and William became king. Although William's main rivals were gone, he still faced rebellions over the following years and was not secure on the English throne until after 1072. The lands of the resisting English elite were confiscated; some of the elite fled into exile. To control his new kingdom, William granted lands to his followers and built castles commanding military strong points throughout the land. The Domesday Book, a manuscript record of the "Great Survey" of much of England and parts of Wales, was completed by 1086. Other effects of the conquest included changes to the court and government, the introduction of the Norman language as the language of the elites, and changes in the composition of the upper classes, as William enfeoffed lands to be held directly from the king. More gradual changes affected the agricultural classes and village life: the main change appears to have been the formal elimination of slavery, which may or may not have been linked to the invasion. There was little alteration in the structure of government, as the new Norman administrators took over many of the forms of Anglo-Saxon government. ## Origins In 911, the Carolingian French ruler Charles the Simple allowed a group of Vikings under their leader Rollo to settle in Normandy as part of the Treaty of Saint-Clair-sur-Epte. In exchange for the land, the Norsemen under Rollo were expected to provide protection along the coast against further Viking invaders. Their settlement proved successful, and the Vikings in the region became known as the "Northmen", from which "Normandy" and "Normans" are derived. The Normans quickly adopted the indigenous culture as they became assimilated by the French, renouncing paganism and converting to Christianity. They adopted the langue d'oïl of their new home and added features from their own Norse language, transforming it into the Norman language. They intermarried with the local population and used the territory granted to them as a base to extend the frontiers of the duchy westward, annexing territory including the Bessin, the Cotentin Peninsula and Avranches. In 1002, English king Æthelred the Unready married Emma of Normandy, the sister of Richard II, Duke of Normandy. Their son Edward the Confessor, who spent many years in exile in Normandy, succeeded to the English throne in 1042.
This led to the establishment of a powerful Norman interest in English politics, as Edward drew heavily on his former hosts for support, bringing in Norman courtiers, soldiers, and clerics and appointing them to positions of power, particularly in the Church. Childless and embroiled in conflict with the formidable Godwin, Earl of Wessex, and his sons, Edward may also have encouraged Duke William of Normandy's ambitions for the English throne. When King Edward died at the beginning of 1066, the lack of a clear heir led to a disputed succession in which several contenders laid claim to the throne of England. Edward's immediate successor was the Earl of Wessex, Harold Godwinson, the richest and most powerful of the English aristocrats. Harold was elected king by the Witenagemot of England and crowned by the Archbishop of York, Ealdred, although Norman propaganda claimed the ceremony was performed by Stigand, the uncanonically elected Archbishop of Canterbury. Harold was immediately challenged by two powerful neighbouring rulers. Duke William claimed that he had been promised the throne by King Edward and that Harold had sworn agreement to this; King Harald III of Norway, commonly known as Harald Hardrada, also contested the succession. His claim to the throne was based on an agreement between his predecessor, Magnus the Good, and the earlier English king, Harthacnut, whereby if either died without an heir, the other would inherit both England and Norway. William and Harald at once set about assembling troops and ships to invade England. ## Tostig's raids and the Norwegian invasion In early 1066, Harold's exiled brother, Tostig Godwinson, raided southeastern England with a fleet he had recruited in Flanders, later joined by other ships from Orkney. Threatened by Harold's fleet, Tostig moved north and raided in East Anglia and Lincolnshire, but he was driven back to his ships by the brothers Edwin, Earl of Mercia, and Morcar, Earl of Northumbria. Deserted by most of his followers, Tostig withdrew to Scotland, where he spent the summer recruiting fresh forces. King Harold spent the summer on the south coast with a large army and fleet waiting for William to invade, but the bulk of his forces were militia who needed to harvest their crops, so on 8 September Harold dismissed them. Hardrada invaded northern England in early September, leading a fleet of more than 300 ships carrying perhaps 15,000 men. Harald's army was further augmented by the forces of Tostig, who threw his support behind the Norwegian king's bid for the throne. Advancing on York, the Norwegians defeated a northern English army under Edwin and Morcar on 20 September at the Battle of Fulford. The two earls had rushed to engage the Norwegian forces before Harold could arrive from the south. Although Harold Godwinson had married Edwin and Morcar's sister Ealdgyth, the two earls may have distrusted Harold and feared that the king would replace Morcar with Tostig. The end result was that their forces were devastated and unable to participate in the rest of the campaigns of 1066, although the two earls survived the battle. Hardrada moved on to York, which surrendered to him. After taking hostages from the leading men of the city, on 24 September the Norwegians moved east to the tiny village of Stamford Bridge. King Harold probably learned of the Norwegian invasion in mid-September and rushed north, gathering forces as he went. 
The royal forces probably took nine days to cover the distance from London to York, averaging almost 25 miles (40 kilometres) per day. At dawn on 25 September Harold's forces reached York, where he learned the location of the Norwegians. The English then marched on the invaders and took them by surprise, defeating them in the Battle of Stamford Bridge. Harald of Norway and Tostig were killed, and the Norwegians suffered such horrific losses that only 24 of the original 300 ships were required to carry away the survivors. The English victory was costly, however, as Harold's army was left in a battered and weakened state, and far from the English Channel. ## Norman invasion ### Norman preparations and forces William assembled a large invasion fleet and an army gathered from Normandy and all over France, including large contingents from Brittany and Flanders. He mustered his forces at Saint-Valery-sur-Somme and was ready to cross the Channel by about 12 August. The exact numbers and composition of William's force are unknown. A contemporary document claims that William had 726 ships, but this may be an inflated figure. Figures given by contemporary writers are highly exaggerated, varying from 14,000 to 150,000 men. Modern historians have offered a range of estimates for the size of William's forces: 7000–8000 men, 1000–2000 of them cavalry; 10,000–12,000 men; 10,000 men, 3000 of them cavalry; or 7500 men. The army would have consisted of a mix of cavalry, infantry, and archers or crossbowmen, with about equal numbers of cavalry and archers and the foot soldiers equal in number to the other two types combined. Although later lists of companions of William the Conqueror are extant, most are padded with extra names; only about 35 individuals can be reliably claimed to have been with William at Hastings. William of Poitiers states that William obtained Pope Alexander II's consent for the invasion, signified by a papal banner, along with diplomatic support from other European rulers. Although Alexander did give papal approval to the conquest after it succeeded, no other source claims papal support before the invasion. William's army assembled during the summer while an invasion fleet in Normandy was constructed. Although the army and fleet were ready by early August, adverse winds kept the ships in Normandy until late September. There were probably other reasons for William's delay, including intelligence reports from England revealing that Harold's forces were deployed along the coast. William would have preferred to delay the invasion until he could make an unopposed landing. ### Landing and Harold's march south The Normans crossed to England a few days after Harold's victory over the Norwegians at Stamford Bridge on 25 September, following the dispersal of Harold's naval force. They landed at Pevensey in Sussex on 28 September and erected a wooden castle at Hastings, from which they raided the surrounding area. This ensured supplies for the army, and as Harold and his family held many of the lands in the area, it weakened William's opponent and made him more likely to attack to put an end to the raiding. Harold, after defeating his brother Tostig and Harald Hardrada in the north, left much of his force there, including Morcar and Edwin, and marched the rest of his army south to deal with the threatened Norman invasion. It is unclear when Harold learned of William's landing, but it was probably while he was travelling south. 
Harold stopped in London for about a week before reaching Hastings, so it is likely that he spent about a week on his march south from York, averaging about 27 miles (43 kilometres) per day for the nearly 200 miles (320 kilometres) to London. Although Harold attempted to surprise the Normans, William's scouts reported the English arrival to the duke. The exact events preceding the battle remain obscure, with contradictory accounts in the sources, but all agree that William led his army from his castle and advanced towards the enemy. Harold had taken up a defensive position at the top of Senlac Hill (present-day Battle, East Sussex), about 6 miles (10 kilometres) from William's castle at Hastings. Contemporary sources do not give reliable data on the size and composition of Harold's army, although two Norman sources give figures of 1.2 million or 400,000 men. Recent historians have suggested figures of between 5000 and 13,000 for Harold's army at Hastings, but most agree on a range of between 7000 and 8000 English troops. These men would have comprised a mix of the fyrd (militia mainly composed of foot soldiers) and the housecarls, or nobleman's personal troops, who usually also fought on foot. The main difference between the two types was in their armour; the housecarls had better protective armour than that of the fyrd. The English army does not appear to have had many archers, although some were present. The identities of few of the Englishmen at Hastings are known; the most important were Harold's brothers Gyrth and Leofwine. About 18 other named individuals can reasonably be assumed to have fought with Harold at Hastings, including two other relatives. ### Hastings The battle began at about 9 am on 14 October 1066 and lasted all day, but while a broad outline is known, the exact events are obscured by contradictory accounts in the sources. Although the numbers on each side were probably about equal, William had both cavalry and infantry, including many archers, while Harold had only foot soldiers and few archers. The English soldiers formed up as a shield wall along the ridge, and were at first so effective that William's army was thrown back with heavy casualties. Some of William's Breton troops panicked and fled, and some of the English troops appear to have pursued the fleeing Bretons. Norman cavalry then attacked and killed the pursuing troops. While the Bretons were fleeing, rumours swept the Norman forces that the duke had been killed, but William rallied his troops. Twice more the Normans made feigned withdrawals, tempting the English into pursuit, and allowing the Norman cavalry to attack them repeatedly. The available sources are more confused about events in the afternoon, but it appears that the decisive event was the death of Harold, about which different stories are told. William of Jumièges claimed that Harold was killed by the duke. The Bayeux Tapestry has been claimed to show Harold's death by an arrow to the eye, but this may be a later reworking of the tapestry to conform to 12th-century stories that Harold had died from an arrow wound to the head. Other sources stated that no one knew how Harold died because the press of battle was so tight around the king that the soldiers could not see who struck the fatal blow. William of Poitiers gives no details at all about Harold's death. ### Aftermath of Hastings The day after the battle, Harold's body was identified, either by his armour or marks on his body.
The bodies of the English dead, who included some of Harold's brothers and his housecarls, were left on the battlefield, although some were removed by relatives later. Gytha, Harold's mother, offered the victorious duke the weight of her son's body in gold for its custody, but her offer was refused. William ordered that Harold's body be thrown into the sea, but whether that took place is unclear. Another story relates that Harold was buried at the top of a cliff. Waltham Abbey, which had been founded by Harold, later claimed that his body had been buried there secretly. Later legends claimed that Harold did not die at Hastings, but escaped and became a hermit at Chester. After his victory at Hastings, William expected to receive the submission of the surviving English leaders, but instead Edgar the Ætheling was proclaimed king by the Witenagemot, with the support of Earls Edwin and Morcar, Stigand, the Archbishop of Canterbury, and Ealdred, the Archbishop of York. William therefore advanced, marching around the coast of Kent to London. He defeated an English force that attacked him at Southwark, but being unable to storm London Bridge he sought to reach the capital by a more circuitous route. William moved up the Thames valley to cross the river at Wallingford, Berkshire; while there he received the submission of Stigand. He then travelled north-east along the Chilterns, before advancing towards London from the north-west, fighting further engagements against forces from the city. Having failed to muster an effective military response, Edgar's leading supporters lost their nerve, and the English leaders surrendered to William at Berkhamsted, Hertfordshire. William was acclaimed King of England and crowned by Ealdred on 25 December 1066, in Westminster Abbey. The new king attempted to conciliate the remaining English nobility by confirming Morcar, Edwin and Waltheof, the Earl of Northumbria, in their lands as well as giving some land to Edgar the Ætheling. William remained in England until March 1067, when he returned to Normandy with English prisoners, including Stigand, Morcar, Edwin, Edgar the Ætheling, and Waltheof. ## English resistance ### First rebellions Despite the submission of the English nobles, resistance continued for several years. William left control of England in the hands of his half-brother Odo and one of his closest supporters, William fitzOsbern. In 1067 rebels in Kent launched an unsuccessful attack on Dover Castle in combination with Eustace II of Boulogne. The Shropshire landowner Eadric the Wild, in alliance with the Welsh rulers of Gwynedd and Powys, raised a revolt in western Mercia, fighting Norman forces based in Hereford. These events forced William to return to England at the end of 1067. In 1068 William besieged rebels in Exeter, including Harold's mother Gytha, and after suffering heavy losses managed to negotiate the town's surrender. In May, William's wife Matilda was crowned queen at Westminster, an important symbol of William's growing international stature. Later in the year Edwin and Morcar raised a revolt in Mercia with Welsh assistance, while Gospatric, the newly appointed Earl of Northumbria, led a rising in Northumbria, which had not yet been occupied by the Normans. These rebellions rapidly collapsed as William moved against them, building castles and installing garrisons as he had already done in the south. 
Edwin and Morcar again submitted, while Gospatric fled to Scotland, as did Edgar the Ætheling and his family, who may have been involved in these revolts. Meanwhile, Harold's sons, who had taken refuge in Ireland, raided Somerset, Devon and Cornwall from the sea. ### Revolts of 1069 Early in 1069 the newly installed Norman Earl of Northumbria, Robert de Comines, and several hundred soldiers accompanying him were massacred at Durham; the Northumbrian rebellion was joined by Edgar, Gospatric, Siward Barn and other rebels who had taken refuge in Scotland. The castellan of York, Robert fitzRichard, was defeated and killed, and the rebels besieged the Norman castle at York. William hurried north with an army, defeated the rebels outside York and pursued them into the city, massacring the inhabitants and bringing the revolt to an end. He built a second castle at York, strengthened Norman forces in Northumbria and then returned south. A subsequent local uprising was crushed by the garrison of York. Harold's sons launched a second raid from Ireland and were defeated at the Battle of Northam in Devon by Norman forces under Count Brian, a son of Eudes, Count of Penthièvre. In August or September 1069 a large fleet sent by Sweyn II of Denmark arrived off the coast of England, sparking a new wave of rebellions across the country. After abortive raids in the south, the Danes joined forces with a new Northumbrian uprising, which was also joined by Edgar, Gospatric and the other exiles from Scotland as well as Waltheof. The combined Danish and English forces defeated the Norman garrison at York, seized the castles and took control of Northumbria, although a raid into Lincolnshire led by Edgar was defeated by the Norman garrison of Lincoln. At the same time resistance flared up again in western Mercia, where the forces of Eadric the Wild, together with his Welsh allies and further rebel forces from Cheshire and Shropshire, attacked the castle at Shrewsbury. In the southwest, rebels from Devon and Cornwall attacked the Norman garrison at Exeter but were repulsed by the defenders and scattered by a Norman relief force under Count Brian. Other rebels from Dorset, Somerset and neighbouring areas besieged Montacute Castle but were defeated by a Norman army gathered from London, Winchester and Salisbury under Geoffrey of Coutances. Meanwhile, William attacked the Danes, who had moored for the winter south of the Humber in Lincolnshire, and drove them back to the north bank. Leaving Robert of Mortain in charge of Lincolnshire, he turned west and defeated the Mercian rebels in battle at Stafford. When the Danes attempted to return to Lincolnshire, the Norman forces there again drove them back across the Humber. William advanced into Northumbria, defeating an attempt to block his crossing of the swollen River Aire at Pontefract. The Danes fled at his approach, and he occupied York. He bought off the Danes, who agreed to leave England in the spring, and during the winter of 1069–70 his forces systematically devastated Northumbria in the Harrying of the North, subduing all resistance. As a symbol of his renewed authority over the north, William ceremonially wore his crown at York on Christmas Day 1069. In early 1070, having secured the submission of Waltheof and Gospatric, and driven Edgar and his remaining supporters back to Scotland, William returned to Mercia, where he based himself at Chester and crushed all remaining resistance in the area before returning to the south. 
Papal legates arrived and at Easter re-crowned William, which would have symbolically reasserted his right to the kingdom. William also oversaw a purge of prelates from the Church, most notably Stigand, who was deposed from Canterbury. The papal legates also imposed penances on William and those of his supporters who had taken part in Hastings and the subsequent campaigns. As well as Canterbury, the see of York had become vacant following the death of Ealdred in September 1069. Both sees were filled by men loyal to William: Lanfranc, abbot of William's foundation at Caen, received Canterbury while Thomas of Bayeux, one of William's chaplains, was installed at York. Some other bishoprics and abbeys also received new bishops and abbots and William confiscated some of the wealth of the English monasteries, which had served as repositories for the assets of the native nobles. ### Danish troubles In 1070 Sweyn II of Denmark arrived to take personal command of his fleet and renounced the earlier agreement to withdraw, sending troops into the Fens to join forces with English rebels led by Hereward the Wake, at that time based on the Isle of Ely. Sweyn soon accepted a further payment of Danegeld from William, and returned home. After the departure of the Danes the Fenland rebels remained at large, protected by the marshes, and early in 1071 there was a final outbreak of rebel activity in the area. Edwin and Morcar again turned against William, and although Edwin was quickly betrayed and killed, Morcar reached Ely, where he and Hereward were joined by exiled rebels who had sailed from Scotland. William arrived with an army and a fleet to finish off this last pocket of resistance. After some costly failures, the Normans managed to construct a pontoon to reach the Isle of Ely, defeated the rebels at the bridgehead and stormed the island, marking the effective end of English resistance. Morcar was imprisoned for the rest of his life; Hereward was pardoned and had his lands returned to him. ### Last resistance William faced difficulties in his continental possessions in 1071, but in 1072 he returned to England and marched north to confront King Malcolm III of Scotland. This campaign, which included a land army supported by a fleet, resulted in the Treaty of Abernethy in which Malcolm expelled Edgar the Ætheling from Scotland and agreed to some degree of subordination to William. The exact status of this subordination was unclear – the treaty merely stated that Malcolm became William's man. Whether this meant only for Cumbria and Lothian or for the whole Scottish kingdom was left ambiguous. In 1075, during William's absence, Ralph de Gael, the Earl of Norfolk, and Roger de Breteuil the Earl of Hereford, conspired to overthrow him in the Revolt of the Earls. The exact reason for the rebellion is unclear, but it was launched at the wedding of Ralph to a relative of Roger's, held at Exning. Another earl, Waltheof, despite being one of William's favourites, was also involved, and some Breton lords were ready to offer support. Ralph also requested Danish aid. William remained in Normandy while his men in England subdued the revolt. Roger was unable to leave his stronghold in Herefordshire because of efforts by Wulfstan, the Bishop of Worcester, and Æthelwig, the Abbot of Evesham. Ralph was bottled up in Norwich Castle by the combined efforts of Odo of Bayeux, Geoffrey of Coutances, Richard fitzGilbert, and William de Warenne. Norwich was besieged and surrendered, and Ralph went into exile. 
Meanwhile, the Danish king's brother, Cnut, had finally arrived in England with a fleet of 200 ships, but he was too late as Norwich had already surrendered. The Danes then raided along the coast before returning home. William did not return to England until later in 1075, to deal with the Danish threat and the aftermath of the rebellion, celebrating Christmas at Winchester. Roger and Waltheof were kept in prison, where Waltheof was executed in May 1076. By that time William had returned to the continent, where Ralph was continuing the rebellion from Brittany. ## Control of England Once England had been conquered, the Normans faced many challenges in maintaining control. They were few in number compared to the native English population; including those from other parts of France, historians estimate the number of Norman landholders at around 8000. William's followers expected and received lands and titles in return for their service in the invasion, but William claimed ultimate possession of the land in England over which his armies had given him de facto control, and asserted the right to dispose of it as he saw fit. Henceforth, all land was "held" directly from the king in feudal tenure in return for military service. A Norman lord typically had properties scattered piecemeal throughout England and Normandy, and not in a single geographic block. To find the lands to compensate his Norman followers, William initially confiscated the estates of all the English lords who had fought and died with Harold and redistributed part of their lands. These confiscations led to revolts, which resulted in more confiscations, a cycle that continued for five years after the Battle of Hastings. To put down and prevent further rebellions the Normans constructed castles and fortifications in unprecedented numbers, initially mostly on the motte-and-bailey pattern. Historian Robert Liddiard remarks that "to glance at the urban landscape of Norwich, Durham or Lincoln is to be forcibly reminded of the impact of the Norman invasion". William and his barons also exercised tighter control over inheritance of property by widows and daughters, often forcing marriages to Normans. A measure of William's success in taking control is that, from 1072 until the Capetian conquest of Normandy in 1204, William and his successors were largely absentee rulers. For example, after 1072, William spent more than 75 per cent of his time in France rather than England. While he needed to be personally present in Normandy to defend the realm from foreign invasion and put down internal revolts, he set up royal administrative structures that enabled him to rule England from a distance. ## Consequences ### Elite replacement A direct consequence of the invasion was the almost total elimination of the old English aristocracy and the loss of English control over the Catholic Church in England. William systematically dispossessed English landowners and conferred their property on his continental followers. The Domesday Book of 1086 meticulously documents the impact of this colossal programme of expropriation, revealing that by that time only about 5 per cent of land in England south of the Tees was left in English hands. Even this tiny residue was further diminished in the decades that followed, the elimination of native landholding being most complete in southern parts of the country. Natives were also removed from high governmental and ecclesiastical offices. 
After 1075 all earldoms were held by Normans, and Englishmen were only occasionally appointed as sheriffs. Likewise in the Church, senior English office-holders were either expelled from their positions or kept in place for their lifetimes and replaced by foreigners when they died. By 1096 no bishopric was held by any Englishman, and English abbots became uncommon, especially in the larger monasteries. ### English emigration Following the conquest, many Anglo-Saxons, including groups of nobles, fled the country for Scotland, Ireland, or Scandinavia. Members of King Harold Godwinson's family sought refuge in Ireland and used their bases in that country for unsuccessful invasions of England. The largest single exodus occurred in the 1070s, when a group of Anglo-Saxons in a fleet of 235 ships sailed for the Byzantine Empire. The empire became a popular destination for many English nobles and soldiers, as the Byzantines were in need of mercenaries. The English became the predominant element in the elite Varangian Guard, until then a largely Scandinavian unit, from which the emperor's bodyguard was drawn. Some of the English migrants were settled in Byzantine frontier regions on the Black Sea coast and established towns with names such as New London and New York. ### Governmental systems Before the Normans arrived, Anglo-Saxon governmental systems were more sophisticated than their counterparts in Normandy. All of England was divided into administrative units called shires, with subdivisions; the royal court was the centre of government, and a justice system based on local and regional tribunals existed to secure the rights of free men. Shires were run by officials known as shire reeves or sheriffs. Most medieval governments were always on the move, holding court wherever the weather and food or other matters were best at the moment; England had a permanent treasury at Winchester before William's conquest. One major reason for the strength of the English monarchy was the wealth of the kingdom, built on the English system of taxation that included a land tax, or the geld. English coinage was also superior to most of the other currencies in use in northwestern Europe, and the ability to mint coins was a royal monopoly. The English kings had also developed the system of issuing writs to their officials, in addition to the normal medieval practice of issuing charters. Writs were either instructions to an official or group of officials, or notifications of royal actions such as appointments to office or a grant of some sort. This sophisticated medieval form of government was handed over to the Normans and was the foundation of further developments. They kept the framework of government but made changes in the personnel, although at first the new king attempted to keep some natives in office. By the end of William's reign, most of the officials of government and the royal household were Normans. The language of official documents also changed, from Old English to Latin. The forest laws were introduced, leading to the setting aside of large sections of England as royal forest. The Domesday survey was an administrative catalogue of the landholdings of the kingdom, and was unique to medieval Europe. It was divided into sections based on the shires, and listed all the landholdings of each tenant-in-chief of the king as well as who had held the land before the conquest. 
### Language One of the most obvious effects of the conquest was the introduction of Anglo-Norman, a northern dialect of Old French with limited Nordic influences, as the language of the ruling classes in England, displacing Old English. Norman French words entered the English language, and a further sign of the shift was the usage of names common in France instead of Anglo-Saxon names. Male names such as William, Robert, and Richard soon became common; female names changed more slowly. The Norman invasion had little impact on placenames, which had changed significantly after earlier Scandinavian invasions. It is not known precisely how much English the Norman invaders learned, nor how much the knowledge of Norman French spread among the lower classes, but the demands of trade and basic communication probably meant that at least some of the Normans and native English were bilingual. Nevertheless, William the Conqueror never developed a working knowledge of English and for centuries afterwards English was not well understood by the nobility. ### Immigration and intermarriage An estimated 8000 Normans and other continentals settled in England as a result of the conquest, although exact figures cannot be established. Some of these new residents intermarried with the native English, but the extent of this practice in the years immediately after Hastings is unclear. Several marriages are attested between Norman men and English women during the years before 1100, but such marriages were uncommon. Most Normans continued to contract marriages with other Normans or other continental families rather than with the English. Within a century of the invasion, intermarriage between the native English and the Norman immigrants had become common. By the early 1160s, Ailred of Rievaulx was writing that intermarriage was common in all levels of society. ### Society The impact of the conquest on the lower levels of English society is difficult to assess. The major change was the elimination of slavery in England, which had disappeared by the middle of the 12th century. There were about 28,000 slaves listed in the Domesday Book in 1086, fewer than had been enumerated for 1066. In some places, such as Essex, the decline in slaves was 20 per cent for the 20 years. The main reasons for the decline in slaveholding appear to have been the disapproval of the Church and the cost of supporting slaves who, unlike serfs, had to be maintained entirely by their owners. The practice of slavery was not outlawed, and the Leges Henrici Primi from the reign of King Henry I continue to mention slaveholding as legal. Many of the free peasants of Anglo-Saxon society appear to have lost status and become indistinguishable from the non-free serfs. Whether this change was due entirely to the conquest is unclear, but the invasion and its after-effects probably accelerated a process already underway. The spread of towns and increase in nucleated settlements in the countryside, rather than scattered farms, was probably accelerated by the coming of the Normans to England. The lifestyle of the peasantry probably did not greatly change in the decades after 1066. Although earlier historians argued that women became less free and lost rights with the conquest, current scholarship has mostly rejected this view. Little is known about women other than those in the landholding class, so no conclusions can be drawn about peasant women's status after 1066. 
Noblewomen appear to have continued to influence political life mainly through their kinship relationships. Both before and after 1066 aristocratic women could own land, and some women continued to have the ability to dispose of their property as they wished. ## Historiography Debate over the conquest started almost immediately. The Anglo-Saxon Chronicle, when discussing the death of William the Conqueror, denounced him and the conquest in verse, but the king's obituary notice from William of Poitiers, a Frenchman, was full of praise. Historians since then have argued over the facts of the matter and how to interpret them, with little agreement. The theory or myth of the "Norman yoke" arose in the 17th century, the idea that Anglo-Saxon society had been freer and more equal than the society that emerged after the conquest. This theory owes more to the period in which it was developed than to historical facts, but it continues to be used to the present day in both political and popular thought. In the 20th and 21st centuries, historians have focused less on the rightness or wrongness of the conquest itself, instead concentrating on the effects of the invasion. Some, such as Richard Southern, have seen the conquest as a critical turning point in history. Southern stated that "no country in Europe, between the rise of the barbarian kingdoms and the 20th century, has undergone so radical a change in so short a time as England experienced after 1066". Other historians, such as H. G. Richardson and G. O. Sayles, believe that the transformation was less radical. In more general terms, Singman has called the conquest "the last echo of the national migrations that characterized the early Middle Ages". The debate over the impact of the conquest depends on how change after 1066 is measured. If Anglo-Saxon England was already evolving before the invasion, with the introduction of feudalism, castles or other changes in society, then the conquest, while important, did not represent radical reform. But the change was dramatic if measured by the elimination of the English nobility or the loss of Old English as a literary language. Nationalistic arguments have been made on both sides of the debate, with the Normans cast as either the persecutors of the English or the rescuers of the country from a decadent Anglo-Saxon nobility. ## See also - Ermenfrid Penitential - Anglo-Norman invasion of Ireland - Norman invasion of Wales - Norman conquest of southern Italy
18,965,090
SS Minnesotan
1,149,452,293
1912 cargo ship
[ "1912 ships", "Cargo ships of the United States Navy", "Merchant ships of Italy", "Ships built in Sparrows Point, Maryland", "Transport ships of the United States Army", "Unique transports of the United States Navy", "World War I auxiliary ships of the United States", "World War I merchant ships of the United States", "World War II auxiliary ships of the United States", "World War II merchant ships of the United States" ]
SS Minnesotan was a cargo ship built in 1912 for the American-Hawaiian Steamship Company. During World War I she was known as USAT Minnesotan in service for the United States Army and USS Minnesotan (ID-4545) in service for the United States Navy. She ended her career as the SS Maria Luisa R. under Italian ownership. She was built by the Maryland Steel Company as one of eight sister ships for American-Hawaiian, and was employed in inter-coastal service via the Isthmus of Tehuantepec and the Panama Canal after it opened. In World War I, USAT Minnesotan carried cargo and animals to France under charter to the U.S. Army from September 1917. When she was transferred to the U.S. Navy in August 1918, USS Minnesotan continued to undertake the same duties; after the Armistice she was converted to a troop transport and returned over 8,000 American troops from France. Returned to American-Hawaiian in 1919, Minnesotan resumed inter-coastal cargo service, and, at least twice, carried racing yachts from the U.S. East Coast to California. During World War II, Minnesotan was requisitioned by the War Shipping Administration and initially sailed between New York and Caribbean ports. In the latter half of 1943, Minnesotan sailed between Indian Ocean ports. The following year, the cargo ship sailed between New York and ports in the United Kingdom, before eventually returning to the Caribbean. In July 1949, American-Hawaiian sold Minnesotan to Italian owners who renamed her Maria Luisa R.; she was scrapped in 1952 at Bari. ## Design and construction In September 1911, the American-Hawaiian Steamship Company placed an order with the Maryland Steel Company of Sparrows Point, Maryland, for four new cargo ships—Minnesotan, Dakotan, Montanan, and Pennsylvanian. The contract cost of the ships was set at the construction cost plus an 8% profit for Maryland Steel, with a maximum cost of \$640,000 per ship. The construction was financed by Maryland Steel with a credit plan that called for a 5% down payment in cash with nine monthly installments for the balance. Provisions of the deal allowed that some of the nine installments could be converted into longer-term notes or mortgages. The final cost of Minnesotan, including financing costs, was \$65.65 per deadweight ton, which totaled just under \$668,000. Minnesotan (Maryland Steel yard no. 124) was the first ship built under the original contract. She was launched on 8 June 1912, and delivered to American-Hawaiian in September. Minnesotan was 6,617 gross register tons (GRT), and was 428 feet 9 inches (130.68 m) in length and 53 feet 7 inches (16.33 m) abeam. She had a deadweight tonnage of , and her cargo holds had a storage capacity of 490,838 cubic feet (13,899.0 m<sup>3</sup>). Minnesotan had a speed of 15 knots (28 km/h) and was powered by a single quadruple-expansion steam engine, fed by oil-fired boilers, that drove a single screw propeller. ## Early career When Minnesotan began sailing for American-Hawaiian, the company shipped cargo from East Coast ports via the Tehuantepec Route to West Coast ports and Hawaii, and vice versa. Shipments on the Tehuantepec Route arrived at Mexican ports—Salina Cruz, Oaxaca, for eastbound cargo, and Coatzacoalcos, Veracruz, for westbound cargo—and traversed the Isthmus of Tehuantepec on the Tehuantepec National Railway. Eastbound shipments were primarily sugar and pineapple from Hawaii, while westbound cargoes were more general in nature. Minnesotan sailed in this service on the east side of North America.
After the United States occupation of Veracruz on 21 April 1914 (which found six American-Hawaiian ships in Mexican ports), the Huerta-led Mexican government closed the Tehuantepec National Railway to American shipping. This loss of access, coupled with the fact that the Panama Canal was not yet open, caused American-Hawaiian to return in late April to its historic route of sailing around South America via the Straits of Magellan. With the opening of the Panama Canal on 15 August 1914, American-Hawaiian ships switched to taking that route. In October 1915, landslides closed the Panama Canal and all American-Hawaiian ships, including Minnesotan, returned to the Straits of Magellan route again. Minnesotan's exact movements from this time through early 1917 are unclear. She may have been in the half of the American-Hawaiian fleet that was chartered for transatlantic service. She may also have been in the group of American-Hawaiian ships chartered for service to South America, delivering coal, gasoline, and steel in exchange for coffee, nitrates, cocoa, rubber, and manganese ore. ## World War I On 11 September 1917, some five months after the United States declared war on Germany, the United States Army chartered Minnesotan for transporting animals to Europe in support of the American Expeditionary Force. Although there is no information about the specific conversion of Minnesotan, for other ships this typically meant that passenger accommodations had to be ripped out and replaced with ramps and stalls for the horses and mules carried. On 23 August 1918, Minnesotan was transferred to the United States Navy at Norfolk, Virginia. She was commissioned into the Naval Overseas Transportation Service (NOTS) the same day. Minnesotan was refitted and rearmed and made a brief roundtrip to New York. After taking on a general cargo, Minnesotan sailed on 4 September to join a convoy from New York. After passing Gibraltar on 21 September, the cargo ship sailed on to Marseille and unloaded. Departing there on 21 October, she sailed for Newport News via Gibraltar, arriving back in the United States on 7 November. Minnesotan next took on a load of 798 horses and sailed on 30 November for Bordeaux, where she arrived on 13 December. Stopping at Saint-Nazaire the following day, Minnesotan departed for Norfolk on 21 December. After making port at Norfolk on 3 January 1919, the cargo ship sailed for New York, where she was inspected and found to be suitable for use as a troop transport. She was transferred to the Cruiser and Transport Force on 7 January and fitted with bunks and living facilities over the next three months. Sailing from New York on 30 March, Minnesotan began the first of her four voyages returning American servicemen from France. On 16 April at Saint-Nazaire, Minnesotan began her first homeward journey with troops, embarking several companies of the 111th Infantry Regiment of the U.S. 28th Infantry Division. George W. Cooper, historian of the 2nd Battalion of the 111th Infantry, reported that even though the fighting had been over for some five months, the fear of striking floating mines necessitated that the men wear life jackets for the first three days at sea. Minnesotan landed her 1,765 troops in New York on 28 April. On her next journey, Minnesotan loaded some 2,000 men of the 304th Ammunition Train and the U.S. 24th Infantry Division, for what turned out to be a rough passage with widespread seasickness.
The men on board were greatly relieved when land was spotted, and the ship docked at Charleston, South Carolina, on 29 May. Details of Minnesotan's third journey are not available, but her final journey began by sailing from Brest on 23 July with elements of the U.S. 4th Infantry Division and ended upon arrival at Philadelphia on 3 August. In total, she carried 8,038 troops in four voyages from France. By 15 August, Minnesotan had entered dry dock at the Philadelphia Navy Yard to prepare for decommissioning, which took place six days later. She was then returned to American-Hawaiian. Leslie White, later a noted American anthropologist, was a crewman aboard USS Minnesotan. ## Interwar years Minnesotan resumed cargo service with American-Hawaiian after her return from World War I service. Though the company had abandoned its original Hawaiian sugar routes by this time, Minnesotan continued inter-coastal service through the Panama Canal. Hints at cargoes she carried during this time can be gleaned from contemporary news reports from the Los Angeles Times. In March 1928, for example, the newspaper reported that Minnesotan sailed from Los Angeles with a \$2,500,000 cargo that included raw silk and 1,000 long tons (1,000 t) of copper bullion. The 1,000 bales of silk, picked up in Seattle, were worth \$1,000,000 on their own, while the load of copper was reportedly the largest water shipment of Arizona copper to that time. Canned goods, grape juice, and locally grown cotton completed the load. The Los Angeles Times also reported that Minnesotan delivered a then-record 3,000-long-ton (3,000 t) cargo from the East Coast to Los Angeles in October 1930. Minnesotan also carried some less-traditional cargo. In February 1928, she delivered one R-class and four six-meter (twenty-foot) sloops to Los Angeles. The five racing yachts, all from East Coast yacht clubs, arrived to sail in the national championships of six-meter and R-class sloops held 10–18 March. Minnesotan delivered two other six-meter sloops for new owners in November 1938. Minnesotan did have one mishap during the interwar period. On 3 May 1936, The New York Times reported that the day before, a receding tide had stranded Minnesotan about a half-mile (800 m) off Monomoy Point, Massachusetts. Any damage the freighter sustained must have been minor; the cargo ship sailed from New York for San Francisco two weeks later. ### Labor difficulties Minnesotan played a part in several labor difficulties in the interwar years. In March 1935, the crew of Minnesotan called a wildcat strike that delayed the ship's sailing from Los Angeles by a day, but ended the strike after they were ordered back to work by their union. In October 1935, the deckhands and firemen of Minnesotan and fellow American-Hawaiian ships Nevadan and Golden Tide walked out—this time with the sanction of their union, the Sailors' Union of the Pacific (SUP)—after American-Hawaiian had suspended a member of the International Seamen's Union. In that same month, Minnesotan's deck engineer, Otto Blaczinsky, was murdered while the ship was in Los Angeles Harbor. The Industrial Association of San Francisco, an organization of anti-union businessmen and employers, believed that Blaczinsky was killed because he opposed union policies, and offered a \$1,000 reward for information leading to the arrest and conviction of Blaczinsky's killer.
Threats of another Pacific coast strike in late 1936 caused west coast shippers to squeeze as much cargo as possible into Minnesotan and other ships; when Minnesotan arrived at Boston in October, The Christian Science Monitor reported that the ship had arrived "literally laden to her Plimsoll line". In September 1941, Minnesotan played a peripheral part in a larger protest by union sailors over war bonuses for sailing in the West Indies. The SUP struck on Minnesotan and fellow American-Hawaiian ship Oklahoman on 18 September in sympathy with the Seafarers International Organization, which had called a strike on eleven ships a week before. Both of the American-Hawaiian ships were idled while docked in New York. President Franklin D. Roosevelt called on the unions to end the strike three separate times during his press conference on 24 September. Roosevelt's admonition was heeded and both unions ended their strike after the National Mediation Board agreed to address the wartime bonus dispute. ## World War II By January 1941, Minnesotan, though still operated by American-Hawaiian, was engaged in defense work for the U.S. government, sailing to ports in South Africa. After the United States entered World War II, Minnesotan was requisitioned by the War Shipping Administration and frequently sailed in convoys. Though complete records of her sailings are unavailable, partial records indicate some of the ports Minnesotan visited during the conflict and some of the cargo she carried. From July 1942 to April 1943, Minnesotan sailed between New York and Caribbean ports, calling at Trinidad, Key West, Hampton Roads, Guantánamo Bay, and Cristóbal. In June 1943, Minnesotan called at Bombay. She sailed in the Indian Ocean between Calcutta, Colombo, and Bandar Abbas through August. On her last recorded sailing in the Indian Ocean, Minnesotan carried steel rails between Colombo and Calcutta. Minnesotan was back in New York by early December, and sailed to Florida and back by the end of the month. On 29 December, Minnesotan, loaded with a general cargo that included machinery and explosives, sailed as part of convoy HX 273 from New York for Liverpool. Minnesotan developed an undisclosed problem and returned to St. John's, Newfoundland, where she arrived on 13 January 1944. Thirteen days later, she sailed from St. John's to join convoy HX 276 for Liverpool, where she arrived with the convoy on 7 February. After calling at Methil and Loch Ewe, Minnesotan returned to New York in mid March. Minnesotan sailed on another roundtrip to Liverpool in May, but was back in New York by early June. Her last recorded World War II sailings were from New York to Key West, Guantánamo Bay, and Cristóbal, where she arrived in late July 1944. ## Later career After the war's end, American-Hawaiian continued operating Minnesotan for several more years, but in mid-July 1949, the company announced the sale of Minnesotan to Italian owners in a move approved by the United States Maritime Commission several days later. The sale of Minnesotan was protested by the Congress of Industrial Organizations which urged the United States Congress to intervene and to help retain American Merchant Marine jobs. Nevertheless, Maria Luisa R., the new name of the former Minnesotan, remained with the Italian buyers, S.A.R.G.A. SpA of Genoa, until she was scrapped in 1952 at Bari.
3,698,250
False potto
1,144,422,118
Lorisoid primate of uncertain taxonomic status found in Africa
[ "Controversial mammal taxa", "Lorises and galagos", "Mammals described in 1996", "Primates of Africa" ]
The false potto (Pseudopotto martini) is a lorisoid primate of uncertain taxonomic status found in Africa. Anthropologist Jeffrey H. Schwartz named it in 1996 as the only species of the genus Pseudopotto on the basis of two specimens (consisting only of skeletal material) that had previously been identified as a potto (Perodicticus). The precise provenances of the two specimens are uncertain, but at least one may have come from Cameroon. Schwartz thought the false potto could even represent a separate family, but other researchers have argued that the supposed distinguishing features of the animal do not actually distinguish it from the potto; specifically, the false potto shares several features with the West African potto (Perodicticus potto). The false potto generally resembles a small potto, but according to Schwartz it differs in having a longer tail, shorter spines on its neck and chest vertebrae, a smaller, less complex spine on the second neck vertebra, an entepicondylar foramen (an opening in the humerus, or upper arm bone), a lacrimal fossa (a depression in the skull) that is located inside the eye socket, a smaller upper third premolar and molar, and higher-crowned cheekteeth, among other traits. However, many of these traits are variable among pottos; for example, one researcher found entepicondylar foramina in almost half of the specimens in his sample of pottos. ## Taxonomy In a series of potto (Perodicticus potto) skeletons in the collections of the Anthropological Institute and Museum of the University of Zurich at Irchel, anthropologist Jeffrey H. Schwartz recognized two specimens with traits he believed distinct from all pottos, and in 1996 he used these two specimens to describe a new genus and species of primate, Pseudopotto martini. The generic name, Pseudopotto, combines the element pseudo- (Greek for "false") with "potto", referring to superficial similarities between the new form and the potto. The specific name, martini, honors primatologist Robert D. Martin. The exact provenance of the two specimens is unknown, and one is represented by a complete skeleton (but no skin) and the other by a skull only. Schwartz placed both specimens in a single species, but noted that further study might indicate that the two represent distinct species. He thought the relationships of the new form were unknown and difficult to assess and did not assign it to any family, but provisionally placed it closest to the family Lorisidae, together with the potto, the angwantibos, and the lorises. The discovery, published in the Anthropological Papers of the American Museum of Natural History, was featured in Scientific American and Science; the Science account noted that Schwartz thought Pseudopotto may represent a new family of primates. In 1998, the journal African Primates published three papers by primatologists on the false potto. Colin Groves affirmed that it was probably distinct from the potto and Simon Bearder cited it as an example of unrecognized taxonomic diversity in lorisids, but Esteban Sarmiento compared the new taxon to specimens of the potto and found that the alleged distinctive traits of the false potto in fact fell within the range of variation of the potto, and that the false potto was probably not even a species distinct from Perodicticus potto. In 2000, primatologist B.S. 
Leon agreed that the false potto was not distinct from the subspecies Perodicticus potto potto, but noted that various forms of potto were distinct enough from each other that there may indeed be more than one species of potto. Opinions since then have been divided: a 2003 compilation of African primate diversity concluded that there was insufficient evidence that the false potto is a distinct species, the primate chapter of the 2005 third edition of Mammal Species of the World, written by Groves, listed Pseudopotto as a genus but noted that it was "controversial"; and Schwartz continued to recognize the false potto as a genus in 2005. Also in 2005, primatologist David Stump reviewed some of the distinguishing features of Pseudopotto in the context of studying variation among pottos, and found that some but not all of the false potto's traits were found in some pottos, mainly western populations (subspecies potto). ## Description One of the specimens, AMZ 6698, is an adult female that lived in Zürich Zoo. It is represented by a virtually complete skeleton, but the skin was not preserved. According to Schwartz, the skeleton shows signs of osteoporosis and periodontitis (common in zoo animals), but not of other pathologies or abnormalities. The right teeth were removed before Schwartz studied the specimen. Schwartz selected this specimen as the holotype. The other specimen, AMZ-AS 1730, is a subadult male collected in the wild, of which only the skull, including the mandible (lower jaw), was preserved. The dentition includes both permanent and deciduous teeth. Specimens of Pseudopotto are at least superficially similar to pottos, but according to Schwartz, they differ in a number of traits. Among lorisids, Schwartz saw similarities between the false potto and true pottos as well as angwantibos and slow lorises (Nycticebus). The false potto is comparable in size to the smallest pottos, but falls within their range of metrical variation; small size is also seen in western pottos. The tail, according to Schwartz, is longer than in the potto. He does not provide measurements of the tail of AMZ 6698 and notes that at least one vertebra is missing, but Sarmiento counted 11 caudal vertebrae in an illustration of AMZ 6698 and Groves counted at least 15. However, Sarmiento found that the number of caudal vertebrae ranges from 5 to 17, with an average of 11, in pottos. Relatively long tails are also common in the western form of the potto, though according to Stump the tail of Pseudopotto is longer than any seen in pottos. The false potto allegedly has shorter spines on its cervical (neck) and first and second thoracic (chest) vertebrae, but Leon notes that this feature is also seen in western pottos. Schwartz writes that the false potto differs from pottos and angwantibos in lacking a bifid (two-tipped) spine on the second cervical vertebra, but Sarmiento found this feature in 3 out of 11 potto specimens he examined. The ulnar styloid process (a projection on the ulna, one of the bones of the forearm, where it meets the wrist) is not as hooked as in other lorisids, according to Schwartz, which Groves suggests may indicate that the wrist is more mobile. Another alleged diagnostic feature is the presence of an entepicondylar foramen (an opening near the distal, or far, end of the bone) on the humerus (upper arm bone); however, Sarmiento found this feature in 4 out of 11 specimens, and on one side of a fifth, and Stump noted that the foramen occurred in specimens from across the potto's range. 
The lacrimal fossa, a depression in the skull, is located on the upper surface of the skull in most lorisids, but Schwartz found that it was further to the back, inside the orbit (eye socket), in the false potto and the slow loris. Sarmiento found this feature in 3 out of 11 pottos examined. The coronoid process of the mandible is said to be more hooked in the false potto than in the potto and slow loris. Other distinguishing features of the false potto are in the dentition. Sarmiento notes, however, that captive specimens may develop abnormalities in the teeth and that some dental characters Schwartz uses are quite variable, sometimes even from one side of the same individual to another. The third upper molar (M3) is more reduced in the false potto than in any other prosimian, according to Schwartz, but Leon notes that western pottos also have a relatively small M3. The third upper premolar (P3) is also reduced, resembling the condition in the fork-marked lemurs (Phaner). Stump writes that small P3s are also common in western pottos, although the false potto's P3 is shaped differently. Groves notes that P1 is quite long, another point of similarity with the fork-marked lemurs. The lower premolars are compressed laterally in Pseudopotto, the cusps on the cheekteeth are higher, and the cristid obliqua (a crest connected to the protoconid cusp) is at a relatively buccal position (in the direction of the cheeks). In AMZ 6698, skull length is 59.30 mm (2.335 in) and length of the right humerus is 57.65 mm (2.27 in). ## Distribution and status According to records in the Anthropological Institute and Museum, AMZ 6698, the holotype, is from "Equatorial Africa", and AMZ-AS 1730 is from the "Cameroons". According to mammalogist Ronald Nowak, these designations imply that the latter came either from modern Cameroon or far eastern Nigeria (British Cameroons) and the former from Cameroon or a neighboring state. In 1999, Simon Bearder claimed, citing a personal communication by C. Wild, that Pseudopotto had been seen in the wild, and in 2001 ornithologist Christopher Bowden noted the occurrence of Pseudopotto on Mount Kupe in Cameroon, also citing C. Wild. However, the IUCN Red List notes that while sightings of the false potto at 820 to 940 m (2,690 to 3,080 ft) on Mount Kupe had been reported, surveys had failed to confirm its occurrence there, though pottos, some with long tails, had been found. The false potto was formerly included under the potto in the Red List; it is now treated as a synonym of the Central African potto (Perodicticus edwardsi), the evidence that it is a distinct species being considered insufficient. The American Society of Mammalogists instead synonymizes it with the West African potto (P. potto).
58,885,239
Daisy Bacon
1,170,313,161
American magazine editor (1898–1986)
[ "1898 births", "1986 deaths", "20th-century American women writers", "American editors", "American women editors", "People from Erie County, Pennsylvania" ]
Daisy Sarah Bacon (May 23, 1898 – March 1, 1986) was an American pulp fiction magazine editor and writer, best known as the editor of Love Story Magazine from 1928 to 1947. She moved to New York in about 1917, and worked at several jobs before she was hired in 1926 by Street & Smith, a major pulp magazine publisher, to assist with "Friends in Need", an advice column in Love Story Magazine. Two years later she was promoted to editor of the magazine, and stayed in that role for nearly twenty years. Love Story was one of the most successful pulp magazines, and Bacon was frequently interviewed about her role and her opinions of modern romance. Some interviews commented on the contrast between her personal life as a single woman, and the romance in the stories she edited; she did not reveal in these interviews that she had a long affair with a married man, Henry Miller, whose wife was the writer Alice Duer Miller. Street & Smith gave Bacon other magazines to edit: Ainslee's in the mid-1930s and Pocket Love in the late 1930s; neither lasted until 1940. In 1940 she took over as editor of Romantic Range, which featured love stories set in the American West, and the following year she was also given the editorship of Detective Story. Romantic Range and Love Story ceased publication in 1947, but in 1948 she became the editor of both The Shadow and Doc Savage, two of Street & Smith's hero pulps. However, Street & Smith shut down all their pulps the following April, and she was let go. In 1954 she published a book, Love Story Writer, about writing romance stories. She wrote a romance novel of her own in the 1930s but could not get it published, and in the 1950s also worked on a novel set in the publishing industry. She struggled with depression and alcoholism for much of her life, and attempted suicide at least once. After she died, a scholarship fund was established in her name. ## Early life and early career Daisy Sarah Bacon was born on May 23, 1898, in Union City, Pennsylvania. Her father, Elmer Ellsworth Bacon, divorced his first wife in 1895 to marry Daisy's mother, Jessie Holbrook. Elmer died of Bright's disease on January 1, 1900, and Jessie moved to her family's farm in Barcelona, New York, on Lake Erie on the outskirts of Westfield. Daisy was taught to read and write at age three by her maternal grandmother, Sarah Ann Holbrook. One of Daisy's great-uncles, Dr. Almon C. Bacon, was the founder of Bacone College in Oklahoma. Jessie remarried in 1906, to George Ford. George and Jessie had one child, Esther Joa Ford, born October 1, 1906; George died on January 24, 1907, leaving Jessie Ford alone to raise the two half-sisters. In 1909 Jessie left the farm and moved into Westfield, where Daisy attended the local high school, Westfield Academy. While she was in high school the Westfield Republican published an essay she wrote about the Louisiana Purchase for a competition. She graduated from high school in 1917 as valedictorian, and was awarded a \$100 scholarship to Barnard College, though she never enrolled there. Shortly after Daisy's graduation from high school, the family moved to Manhattan, living in a hotel at first. Bacon worked at several different jobs when she first moved to Manhattan. She was briefly a photographer's model, before taking a job at the Harry Livingston Auction Company, which sold unclaimed luggage left at hotels by guests. She collected and recorded the auction payments. 
Bacon also wrote and submitted articles and fiction to the magazines of the day, but she was not immediately successful in selling her work. In the early 1920s Bacon sold two articles to the Saturday Evening Post: one about her work at the auction company, and a ghost-written account of the life of a chambermaid in a New York hotel, titled "On the Fourteenth Floor". Years later Bacon's half-sister Esther recalled living at the Astor Hotel and becoming friends with Arturo Toscanini's wife, Carla, and it is possible that Jessie took a job as a chambermaid at the Astor, with room and board, during the family's first years in the city. ## Street & Smith Bacon continued writing, without further success, later recalling that she "worked like slaves to get into Liberty and never made it". In March 1926 she was hired by Street & Smith, one of the major pulp magazine publishers, as a reader for "Friend in Need", an advice column that ran in Love Story Magazine. The magazine had been launched as a monthly in 1921, and was successful enough to switch to weekly publication in September 1922. The advice column received about 75 to 150 letters a day, mostly from women, and between ten and twenty were printed each week. Another Street & Smith employee, Alice Tabor, also worked on the column. Street & Smith insisted that no matter what the letter-writer's problem might be, divorce could never be recommended as a solution. The letters covered every kind of romantic and matrimonial problem, and Bacon's biographer, Laurie Powers, suggests that the letters "gave Daisy a priceless education in the magazine's audience ... [and] became the foundation of Daisy's uncanny ability to know what her readers wanted to read in a romance". While working on "Friend in Need", Bacon also wrote fiction for Love Story, beginning with "The Remembered Fragance". ### Editor of Love Story Magazine In March 1928 Ruth Abeling, the editor of Love Story, was fired, and Bacon was made editor in her stead. She hired her sister Esther as her assistant; to ensure that their relationship appeared professional at the office, the two of them switched to using their last names, Bacon and Ford, for each other, both in and out of the office. Bacon found she had to adapt her usual soft-spoken and rather genteel speech to be successful in some of her working relationships at Street & Smith: "well-bred tones did not spell authority to them", she later recalled, but "after I learned to talk to them in language which I had heard my grandfather's stable boys use, everything was fine". There were so many stories submitted to the magazine that other staff had to be employed to reduce the volume of submissions that reached Bacon's desk to a manageable quantity; even so Bacon found herself reading about a million words of manuscripts each week. The fiction in the magazine when Bacon began working at Street & Smith was Victorian in tone, and Bacon was scornful of the characterization: "The heroines were usually paid companions, governesses, or employed in some such genteel occupation and were always so sweet that it made you want to choke them!" She wanted the heroines of her stories to more closely resemble her readership, working as secretaries or beauticians. However, she also understood the role of glamor in the magazine: readers liked to read about chorus girls and models struggling to succeed, and about women in unusual roles such as pilots. 
A common stereotype in romance fiction was a poor girl with rich relatives who cruelly mistreated her; Bacon argued that "anyone who thinks that only those people who do not have to work for a living have the capacity for making other people's lives miserable has just never spent an hour inside of the average factory, hotel, school, or department store or around almost any office." Circulation was strong at the time Bacon became editor, at perhaps 400,000, a very high figure for a pulp magazine. The magazine continued to prosper in her hands, and the circulation may have reached 600,000. In her first year as editor, Bacon was forced to write the ending to a serial by Ruby Ayres. The serial had already begun when Bacon took over as editor, even though Ayres had not sent the last installment by the time the first one appeared in print. When the final installment did not arrive in time for publication, Bacon delayed the problem for a week by splitting the most recent episode into two, but by the time the deadline for the next issue came, the rest of the manuscript still had not arrived. The story was about an unpleasant woman whose husband was falling in love with his secretary. In Bacon's version the wife falls from a high window and dies, leaving him free to marry his secretary. Bacon was criticized for the ending, but Ayre's own version of the installment, which finally arrived, had the wife die in the same way. Bacon became friends with some of her writers, inviting them to her apartment and buying them lunch. Douglas Hilliker, an artist who drew interior illustrations and later painted magazine cover art, lodged with Bacon and her mother and half-sister for a while in 1930, along with his wife and daughter. Bacon was friends with Maysie Grieg, who was already a successful writer when Bacon met her, and with Gertrude Schalk, an African-American writer who sold her first story to Bacon in 1930. Bacon and Schalk planned an unusual book together: a collection of Schalk's stories that would have included the initial version submitted to Bacon, the correspondence over the changes Bacon requested, and the final version printed in the magazine. The project had to be abandoned when Schalk's manuscripts were accidentally destroyed and Bacon's correspondence files were lost in an office move. In mid-1934, Street & Smith decided to resurrect Ainslee's Magazine, which had been merged into Far West Illustrated in 1926, as another love story magazine. It was titled Ainslee's, and given to Bacon to edit from its first issue, dated December 1934. It was in bedsheet format, with slightly more risqué plots, and an occasional mention of nudity. Powers comments on the language used: "More explicit kissing scenes used word such as 'sensuous' and 'intimate' [and] the word 'damned' showed up several times." In 1935 Bacon submitted a manuscript of a romance novel to William Morrow. It was rejected, and she seems not to have submitted it elsewhere. ### Late 1930s Bacon became aware that there was a glass ceiling in effect for women at Street & Smith, and that there was a limit to how high she could progress in the company. In late 1936 an article of hers titled "Women Among Men" appeared in an early issue of The New York Woman; it was published anonymously, presumably because she was concerned about how Street & Smith's management would react to it. 
Among other complaints she described how her business ideas were treated differently from those of the men in the office: "Over a period of years I've had a great many ideas about promoting new business but they have never been taken up. And not only that but several have been accepted later when they were put forward by men. Two of the best plans I ever had ... have been recently accepted and put into operation by men who are ... almost totally without business experience." Bacon's anonymity did not last; interest in the article was enough to generate gossip about who the author might be, and in December she was identified as the author in Walter Winchell's newspaper column. She later recorded that she showed up at the office that day unaware of Winchell's piece, and was "blissfully unaware that everyone in the office was lying in wait for me". In 1937 Street & Smith gave Bacon Pocket Love to edit. This was a new magazine, probably started in an attempt to acquire part of the increasing market for paperbacks and digest-sized magazines. It only lasted for four issues. Ainslee's had been retitled Smart Stories, and under Bacon's editorship it lasted until 1938. The death of George Campbell Smith, Jr., in April 1937 led a year later to the arrival of Allen L. Grammer, from Curtis Publishing, to manage the company. Grammer brought several of his staff with him, and quickly began making sweeping changes to improve the efficiency of Street & Smith's business. ### 1940s and the end of the pulps In July 1940, Grammer made Bacon editor of Romantic Range, a Western romance pulp, and the following year Grammer also gave her Detective Story to edit. The Special Services Division planned to distribute copies of Detective Story to men in the armed services overseas, and as the editor, Bacon temporarily became their part-time employee. Bacon was frequently interviewed, in newspapers and at least once on the radio, while working at Street & Smith. On romance in mid-twentieth century America, she commented in 1941 that "It is better for girls to acquire careers first, husbands afterward," and "financial independence for the wife is an ideal basis for marriage. To be singled out by a girl with a good job is the highest form of flattery for a man. She does not need his support. Therefore she loves him for himself." A 1942 profile said "In her pages, she offers to the average woman—not a flight from actual life—but a heightened reality." The same profile quoted Love Story's circulation as between two and three million readers a month. Another estimate from 1941 gives 350,000 as the weekly circulation, but even this lower figure was still much higher than most pulp magazines. Another theme in her interviews was that she was the editor of a magazine about romance, but was herself unmarried: a 1941 article was titled "Editor Sells Romance to Lonely Wives, but Has No Love Herself", and another interview the same year was titled "Cobbler's Child". Street & Smith published annual anthologies of stories from their magazines, and in the early 1940s Bacon and Ford were given responsibility for producing these. All-Fiction Stories drew its contents from all Street & Smith's fiction magazines, and there were also specialized titles, though these did not necessarily appear each year. These included Detective Story Annual, All-Fiction Detective Stories, and on two occasions an annual drawn from Love Story. 
In late 1946 Grammer decided to cease publication of both Romantic Range and Love Story; the last issue of each appeared in January and February 1947, respectively. This left Bacon in charge only of Detective Story, but in June 1948 Street & Smith fired William de Grouchy, the editor of The Shadow, and Doc Savage, and gave them to Bacon to edit as well. These were both hero pulps, meaning that they carried a lead novel in every issue about the same character, whose name gave each magazine its title. The Shadow was a crime fighter, and Doc Savage was a scientific genius; the Shadow's novels were mysteries, and Doc Savage's varied between adventure, mystery, and science fiction. Bacon converted both magazines from digest size to their original larger pulp format, and later claimed that this had immediately led to a 25 percent increase in circulation for The Shadow. She told Walter Gibson, who wrote the lead novels for The Shadow, not to change his approach to the fiction, but asked Lester Dent, the lead writer for Doc Savage, to return to the adventure format mixed with science fiction elements that had characterized the early issues of the magazine. Dent was unwilling but produced three short novels along the lines she requested. Bacon rejected one of Dent's manuscripts, and as there was no time for Dent to write a replacement novel, an issue of Doc Savage had to be skipped. Bacon also rejected multiple plot suggestions from Dent, and pulp historian Will Murray describes the relationship between Bacon and Dent as "the most difficult writer/editor relationship Dent ever enjoyed". According to Dent's wife, Dent "believed that a woman had no place editing an adventure magazine". Bacon only had a short time to work on the magazines, however: in April 1949 Street & Smith announced that they were ceasing publication of all their pulp fiction magazines, with the exception of Astounding Science Fiction. Bacon was let go and Ford was given the task of managing the production of the last issues of each magazine, that summer. ## Personal life On July 10 of either 1922 or 1923, Bacon met Henry Wise Miller, the husband of Alice Duer Miller. The Millers were part of the Algonquin Round Table social group, but it was Alice who was the successful writer; Henry became a stockbroker, funded by his wife's money. The Millers lived somewhat separate lives, deliberately spending part of each year away from each other, and Powers comments that it is possible it was an open marriage. Bacon and Henry Miller soon began a relationship. She also began to suffer from depression during the mid-1920s. In 1929 Bacon and Miller spent two weeks together in England and France, just before the Wall Street crash in October. A rift between the two at the end of the year was quickly healed, and since Alice Miller was often away in Hollywood or overseas, Bacon spent many weekends with Henry at Botts, Henry's house near Kinnelon in New Jersey. In 1931 Bacon rented a house in Morris Plains, New Jersey, with plans to write a novel there. She often took Esther and her mother with her; the other two would frequently be left to themselves as Miller would come to pick her up and take her to Botts. It is not known if Alice Miller was aware of her husband's infidelity, but she may have been. Powers suggests that her long poem Forsaking All Others (1931) is a veiled reference to her own marriage: the protagonist has an affair with a younger woman, but refuses to leave his wife for her. 
Bacon's occasional problems with depression surfaced at times when she was at Botts, and Powers suggests that this might have been because the place reminded her of Alice's existence. Bacon's mother died in 1936, and Bacon's journals from that time start to record that she was aware she was drinking too much. The habit probably started during Prohibition, and in early 1937 Ford's journals began to include a symbol on some days that almost certainly meant Bacon had been drunk that day. The notation appeared every few weeks, sometimes for several consecutive days. Bacon's relationship with Miller was beginning to show signs of fraying by the late 1930s, and she also felt under pressure because of the change in management at Street & Smith in 1938. In late May 1938 Bacon attempted suicide. A doctor visited the house, and three days later Bacon was admitted to Doctor's Hospital in Manhattan, staying there for ten days. Alice Miller died in August 1942. Bacon and Henry Miller's relationship was over by that time, though they still saw each other. Bacon was probably seeing other men by 1942: photographs of her from that time include two of her with another man. Henry remarried in 1947, to Audrey Frazier, a college professor. ## Retirement Bacon was initially happy in retirement; she bought a house in Port Washington, on Long Island, and planned to write a novel, "a scandalous tell-all" about publishing, to be titled Love Story Diary. By 1951 her depression was causing her problems again, and although she was no longer drinking she was still concerned enough about alcoholism to clip articles about it for her journal. In May a long gap in her journal probably indicates another suicide attempt. She recovered physically, but by June she was "tearing out chunks of her hair", according to Powers. She gave up work on the novel after this for a while, eventually returning to it in 1952, and then decided instead to write a non-fiction book about how to write romance stories, though she did not completely give up work on the novel. The result was Love Story Writer, which was published in late 1954. In 1963 Bacon started an imprint, Gemini Books, to reprint it, this time under the title Love Story Editor. She never finished working on Love Story Diary; it was never published, and the manuscript was lost. Ford's husband died in 1962. Ford moved in with Bacon, and the two women lived together for the rest of Bacon's life. By 1981 Bacon was bedbound upstairs, and as Ford could not climb the stairs, the two women did not see each other for the last five years of Bacon's life. Bacon died on March 25, 1986, and was buried in Port Washington; Ford died three years later. Bacon and Ford planned a scholarship fund for journalism students from Port Washington High School, which was eventually established as the Daisy Bacon Scholarship Fund in 1991. In 2016 the Baxter Estates Village Hall in Port Washington held an exhibit about Bacon, including her desk, photographs, manuscripts, and typewriter. ## Magazines edited Bacon edited the following magazines while at Street & Smith: - Love Story Magazine. Early 1928 to February 1947. - Ainslee's, also titled Ainslee's Smart Love Stories and Smart Love Stories during its run. December 1934 to August 1938. - Pocket Love Magazine. May to November 1937. - Romantic Range. Mid-1940 to January 1947. - Detective Story Magazine. Mid-1942 to Summer 1949. - Doc Savage. Winter 1949 to Summer 1949. - The Shadow. Fall 1948 to Summer 1949.
20,646,635
Tidus
1,171,076,273
Final Fantasy character
[ "Characters designed by Tetsuya Nomura", "Fictional aquatics sportspeople", "Fictional bodyguards in video games", "Fictional male sportspeople", "Fictional patricides", "Fictional swordfighters in video games", "Final Fantasy X", "Final Fantasy characters", "Male characters in video games", "Science fantasy video game characters", "Square Enix protagonists", "Teenage characters in video games", "Video game characters introduced in 2001" ]
Tidus (Japanese: ティーダ, Hepburn: Tīda) is a character in Square Enix's Final Fantasy series and the main protagonist of the 2001 role-playing video game Final Fantasy X. Tidus is a 17-year-old from the city of Zanarkand who is transported to the world of Spira following an attack by the creature Sin. Shortly after his arrival he meets Yuna, a summoner, and joins her guardians on a pilgrimage to kill Sin, which he later learns is actually his missing father, Jecht. He has appeared in other video games, including the Final Fantasy X sequel X-2, the Kingdom Hearts series, and several Square Enix crossover games. Tidus was designed by Tetsuya Nomura with a cheerful appearance, in contrast to previous Final Fantasy protagonists. Scenario writer Kazushige Nojima wanted to expand the relationship between player and character with monologues describing the game's setting. While the narrative initially focused on the romance between Tidus and Yuna, Square placed a major emphasis on his troubled relationship with Jecht in order to give the story a greater impact on the setting. Tidus is voiced primarily by Masakazu Morita in Japanese and James Arnold Taylor in English. Both actors enjoyed voicing the character, and Morita also performed his motion capture. He has been generally well received by video-game critics. Tidus' cheerful personality and heroic traits make him an appealing protagonist, contrasting with previous male characters in the franchise. His character development and romantic relationship with Yuna are considered among the best in video games, although reviewers and fans were divided on Taylor's voicing. Tidus has been popular with fans, often ranking as one of the best Final Fantasy characters in polls. Action figures and Tidus-related jewelry have been produced, and he is a popular cosplay character. ## Creation and development Before the development of Final Fantasy X, game scenario writer Kazushige Nojima was concerned about the relationship between the player and the main character in a Final Fantasy title and wanted to try to make the story easier to follow. Since the player and the main character find themselves in a new world, Nojima wanted Tidus' understanding of that world to track the player's progress in the game. Nojima felt that Tidus was the easiest character to write in the first half of Final Fantasy X, because character and player learn about the storyline together. Nojima created a brief description of Tidus for character designer Tetsuya Nomura, and Nomura created a sketch for input from Nojima and other staff members. Nomura was asked to design Tidus differently from the game's theme so he would stand out. Movie director Hiroshi Kuwabara noted the difficulty the developers had in making Tidus and the other main characters realistic. The staff wanted to use an undead person as a playable character, and Tidus was meant to be that character. During the game's development, however, Nojima saw a film with a similar idea for its protagonist. The role of an undead person was then given to a secondary character, Auron. Director Yoshinori Kitase said that in the development of Final Fantasy X, one of the staff's main objectives was to focus on the romance between Tidus and Yuna. Nojima said that he cried during the game's ending, when Tidus and Yuna are separated and Tidus vanishes. Nomura mentioned the contrast between the lead male and female protagonists established by their names; Tidus' name is based on the Okinawan word for "sun", and Yuna's name means "night" in Okinawan.
The contrast is also indicated by the items required to empower their celestial weapons: the sun sigil and crest for Tidus, and the moon sigil and crest for Yuna. Because a player can change Tidus' name, the character is not referred to by name in audible dialogue, but a character in Dream Zanarkand uses Tidus' name in a dialogue box. The only other in-game appearance of his name is "Tidu" in Spiran script on the nameplate of an Auroch locker in the Luca stadium. Before Final Fantasy X's release, Tidus was known to the media as Tida. In early 2001, PlayOnline changed the character's name to "Tidus". Because his name is never spoken in FFX, its intended pronunciation has been debated. Interviews with James Arnold Taylor and spoken dialogue in the English versions of Dissidia Final Fantasy, Dissidia 012 Final Fantasy, and Kingdom Hearts (with cameo appearances by the character) indicate that it is pronounced /ˈtiːdəs/ (TEE-dəs); in the English version of Kingdom Hearts II, Tidus' name is pronounced /ˈtaɪdəs/ (TY-dəs). According to Taylor, it was pronounced TEE-dəs during the localization of FFX because the narrator of an early English trailer pronounced it that way. For the sequel, Final Fantasy X-2, producer Kitase thought that the greatest fan expectation was for the reunion of Tidus and Yuna after their separation in the first game. The game generated rumors about Tidus' connection with the villain, Shuyin, who was physically similar and had the same voice actors. Square responded that such a storyline, given Tidus' nature, would be too complicated. For the remastering of Final Fantasy X and X-2, producer Kitase's motivation was to have people too young to have played the games experience them; his son was only old enough to know the characters of Tidus and Yuna from Dissidia Final Fantasy and its prequel. ### Design Designer Nomura said that he wanted Tidus' clothing and accessories to suggest a relationship with the sea. Tidus' clothing has a distinctive blue motif; his blitzball team logo, based on a fish hook, is an amalgam of the letters "J" and "T" (the first letters of Tidus' name and that of his father, Jecht). Tidus' design was specifically made to stand out within the world of Spira. Because of the improved technology compared with previous Final Fantasy games, Nomura also wanted to make Tidus' face more realistic and his build more noticeable, especially when compared with earlier Final Fantasy characters, who had a scrawny look. Square specifically asked for Tidus' design to have an Asian feel. Tidus and a fellow party member both have blue as their key color, with Tidus' shade matching the ocean. Artist Yusuke Naora also worked on Tidus' design and his relation to the sea, which he found hard to draw and transform into CGI. The developers had difficulty with Tidus and Yuna's kissing scene, since they were unaccustomed to animating romantic scenes. According to Visual Works director Kazuyuki Ikumori, this was due to the use of 3D models, and it was revised several times due to a negative response from female staff members. Tidus was initially conceived as a rude plumber who was part of a delinquent gang, but Kitase said he would be a weak protagonist and he was made an athlete instead. ### Personality According to Nomura, he wanted to give Tidus a cheerful persona and appearance after designing serious, moody main characters for Final Fantasy VII and VIII. He wanted to continue the recent trend of sky-related names, and Kazushige Nojima chose a name based on tiida (Okinawan for "sun").
Nojima called Tidus' personality "lively" and compared him to Final Fantasy VIII's Laguna Loire and Zell Dincht, two other cheerful characters. The chemistry between Tidus and Yuna was written so that the latter sacrifices everything she has to defeat Sin while the former tags along despite lacking knowledge about the world. Though Tidus is initially portrayed as ignorant of the world of Spira, his growth across the story was specifically written to make his character arc more notable, especially because in the end he saves the world. One of the most important scenes between Tidus and Yuna occurs when the latter is kidnapped by the Al Bhed; upon saving her, Tidus loudly declares that he and the guardians will protect her from everybody, so she should not be worried. This leads to Yuna silently mouthing her thanks to Tidus, who responds in kind; the pair's lines in this exchange are conveyed only through subtitles. While Nojima wrote the character, the scenes in which the protagonist's relationship with Yuna becomes intimate were written by Daisuke Watanabe, as Nojima had problems writing the duo in a proper romantic relationship. In retrospect, Nojima believes the scene was well executed, as the comfort the couple find in each other helped Yuna finish her character arc. Tidus' relationship with his father was based on "stories throughout the ages, such as the ancient Greek legends" and would reveal the key to the weakness of Sin, the game's main antagonist. Kitase noted that, in contrast to previous orphaned characters in the franchise, Tidus' character arc included coming to accept Jecht as the father seeks redemption for his abuse of Tidus as a child. Kitase felt that the voice acting and facial expressions were crucial to Tidus at this stage. Motomu Toriyama said that when Final Fantasy X was released, he saw the story from Tidus' point of view: "about parent, child and family". Although FFX was originally centered on the relationship between Tidus and Yuna, Jecht's character and his feud with his son were added later in the making of the game to shift the focus toward how the father and son, rather than the romantic couple, have the bigger impact on Spira's history. Kitase found the story between Tidus and Jecht to be more moving than the story between Tidus and Yuna. ### Portrayals Masakazu Morita voiced Tidus in Japanese. He called the character a career highlight, comparable to his voicing of Bleach manga protagonist Ichigo Kurosaki. Morita also enjoyed performing Tidus' motion capture, which gave him a greater understanding of the character's personality; when he recorded Tidus' dialogue for the game, he moved his body. Morita said that Tidus was his favorite, calling him "the most outstanding, most special character to me". As it was his first work as an actor, he has fond memories of voicing Tidus and interacting with other Final Fantasy X staff members. Morita said that there was no difficulty in working as Tidus, since the character's personality was similar to his own, and he did not need to study the character. However, he was concerned that if fans did not enjoy Tidus it would impact his career. When announcing the Japanese actor, Square said that Morita was chosen because he also did the motion capture for Zell (which would make fans remember previous games). Across FFX there are also flashback scenes which depict a seven-year-old Tidus; for these scenes Tidus is instead voiced by Yūto Nakamura. For the fighting game Dissidia Final Fantasy, Morita returned to voice Tidus.
He was concerned about being able to perform the character's lines as he had in the original Final Fantasy series, since it had been nearly a decade since he voiced Tidus. By that time, he was also more accustomed to acting as Ichigo and as Keiji Maeda from Capcom's Sengoku Basara hack-and-slash games, and those characters had a different vocal tone than Tidus'. When Morita returned to voice Tidus, he tried to make it match his original performance. When the game director complimented Morita for keeping the character's tone, Morita was relieved and joked that he felt younger. James Arnold Taylor was Tidus' English-language voice. Taylor was offered the role by voice director Jack Fletcher (who believed that he would fit the character), and translator Alexander O. Smith explained Tidus to him. In contrast to Morita, Taylor made the character friendlier and less serious with the staff's approval. After recording Final Fantasy X, Taylor said that he would enjoy voicing Tidus again; the character was "like an old friend to me now. I know so much more about him now than I did when we first started, knowing hardly anything about him. I would really hate it if anybody else voiced him". Recording the game took Taylor three-and-a-half months, and he enjoyed the experience. According to Taylor, it would be unrealistic for Tidus to hide emotion. He said that although there were things he would change about his performance (such as the scene where Tidus and Yuna begin laughing together), he was grateful for the warm fan reception of his work. Smith felt that the forced-laugh scene was adapted well from the original Japanese scene, because of how "stilted and out of place" it was in the original version. Smith was confused by Morita and Mayuko Aoki's performance, but after discussing it with Nojima he found it well done in both languages and called it "awkward" and "funny". When Final Fantasy X was re-released in 2013, Taylor said that he was proud to be Tidus' voice. For Dissidia NT, Taylor commented that while Tidus' role of being led once again into battle would seem new to players, people would still find him appealing. Taylor's voicing drew several negative responses; IGN, for example, said that the character "has a tendency to speak a little too high and fast when he gets excited". On the other hand, PSXextreme liked Taylor's work in voicing Tidus. In one scene, Yuna tells Tidus to laugh (to cheer him up) and Tidus forces a laugh. Although fans criticized the laughter as too forced, Taylor stated that it was an intentionally "awkward, goofy, dumb laugh". Kikunosuke Onoe V portrayed Tidus in the 2023 Final Fantasy X kabuki adaptation. In promoting the play, Morita appeared in commercials, asking fans to go see it. Morita expressed his excitement over Final Fantasy X being retold and over his character being portrayed by Onoe as a result of his popularity.
Tidus joins them in the hope of finding his way home. When he meets Auron, Tidus learns that Jecht and Auron made the same pilgrimage ten years before to protect the summoner Braska (Yuna's father) and defeated Sin (who was reborn as Jecht). As the journey continues, Tidus, losing hope that he will return home, begins a romantic relationship with Yuna and swears not to let her die after the guardians tell him that the battle with Sin will kill her. When the party approaches Zanarkand, Tidus learns that he and Zanarkand are the dreams of dead people known as fayth. "Dream" Zanarkand was created when Sin was born during the war between Zanarkand and Bevelle, in which the original Zanarkand was destroyed. If Sin is permanently defeated, the summoning of Dream Zanarkand and its people (including Tidus) will cease. In the real Zanarkand, the group decides to find a way to destroy Sin that does not require the sacrifice of a guardian or a summoner. They attack Sin, entering its shell. They eventually find Jecht (whom they must defeat to eliminate Sin), and Tidus makes peace with his father in the aftermath. After defeating the spirit of Yu Yevon (who is responsible for Sin's rebirth), the fayth are allowed to leave and the summoning of Dream Zanarkand ends. As he vanishes, Tidus says goodbye to his friends and joins the spirits of Auron, Jecht and Braska in the afterlife. Tidus makes few appearances in the plot of the 2003 sequel, Final Fantasy X-2, although meeting him is the player's objective. Two years after the events of FFX, Yuna sees a sphere with a young man resembling Tidus trapped in a prison. She joins the Gullwings, a sphere-hunting group, and travels around Spira in the hope of finding more clues that Tidus is alive. The individual in the sphere is later revealed as Shuyin. Depending on the player's development during the game, the fayth will appear to Yuna at the end and tell her that they can make Tidus return to her. He then appears in Spira, and he and Yuna are reunited. In another final scene, Tidus, unsure whether or not he is still a dream, wants to remain with Yuna. He is also an unlockable blitzball character, the Star Player. In Final Fantasy X-2: International + Last Mission (the game's updated version), Tidus is a recruitable playable character for battles. An extra episode, set after the original game's play-through, reveals that he is living in Besaid with Yuna. An illusion of Tidus also appears as a boss character. Tidus' dialogue, monologues and songs were included on the Final Fantasy X Vocal Collection and feel/Go dream: Yuna & Tidus CDs. Although he does not fully understand that he is not the fayth's dream, Tidus feels that disappearing would be preferable to making Yuna cry again. The novel Final Fantasy X-2.5 \~Eien no Daishou\~, set after Final Fantasy X-2, explores Tidus and Yuna's visit to Besaid Island 1,000 years before. The HD remastered version of Final Fantasy X and X-2, Final Fantasy X/X-2 HD Remaster, adds an audio drama (Final Fantasy X: Will) in which Tidus is a new blitzball star who appears to be concealing an injury. After Yuna breaks up with him, Tidus helps her on a quest to defeat a reborn Sin. Tetsuya Nomura revised Tidus' design for this release, hinting at a possible Final Fantasy X-3. Onoe Kikunosuke portrays Tidus in the 2023 kabuki play adaptation of Final Fantasy X, including his child persona.
### Other appearances He also appears in games outside the Final Fantasy X series, and a younger version is a friend of the protagonists Sora and Riku in the Kingdom Hearts series. In Kingdom Hearts, Tidus appears with younger versions of Wakka and Final Fantasy VIII's Selphie as an optional sparring opponent. The character makes a cameo appearance in Kingdom Hearts: Chain of Memories, and is mentioned briefly in Kingdom Hearts II. A digital replica of Tidus is a boss character in Kingdom Hearts Coded, and he appears with Auron and Yuna in the board game-based Itadaki Street Special. In Dissidia Final Fantasy (an action game with several Final Fantasy heroes and villains), Tidus is the hero representing Final Fantasy X: a warrior serving the goddess Cosmos while his father works for the rival god Chaos. Tidus has two uniforms in this game, and his thoughts and actions refer to FFX. Along with the rest of the cast, he reappears in the prequel Dissidia 012, representing Chaos in the previous war. Tidus is confronted by Yuna and offers his life to save her from an attack by the villainous Emperor, but is saved by Jecht and becomes a warrior of Cosmos. In addition to his previous outfits, Tidus has a design based on an illustration by Square artist Yoshitaka Amano. He appears in the third entry in the series, Dissidia NT. Tidus is a playable character in the Theatrhythm Final Fantasy rhythm game. He also appears in World of Final Fantasy and Fortune Street: Dragon Quest & Final Fantasy 30th Anniversary. Tidus' disappearance between Final Fantasy X and its sequel is also explained in the game Mobius Final Fantasy. Trapped in an underworld-like place known as Palamecia, Tidus joins forces with a warrior known as Wol. The two set out on a quest to become the Warrior of Light, though Tidus treats it as a distraction, since he does not care about his own well-being and is satisfied with his actions in Spira. After seeing one of Yuna's creatures disappear from Palamecia, Tidus decides to search for a way to return to Spira. Following more battles, Tidus finds a crystal which allows him to be teleported back to the world. His latest appearance is in the mobile phone game Final Fantasy Explorers-Force. ## Reception ### Critical Tidus had a positive reception in video-game publications. Raymon Padilla of GameSpy called him a "garishly dressed Leonardo DiCaprio" whose flaws make him appealing. Several critics praised Tidus for his cheerful personality, which contrasted with previous brooding leads. In the book Dungeons, Dragons, and Digital Denizens: The Digital Role-Playing Game, authors Gerald A. Voorhees and Joshua Call compared Tidus with Final Fantasy VII protagonist Cloud Strife in appearance and weapon, but they found Tidus more realistic than Cloud. The 1UP.com staff described Tidus as the "good kind of jock" because of his support for the game's other protagonists, but said his anger and growth kept him from being a "stereotypical boy scout". In the book Gaming Lives in the Twenty-First Century, the writers recalled that Tidus' characterization differs between the original Japanese release of Final Fantasy X and its English dub; the localized version failed to emulate the original Tidus. According to GameSpot reviewer Greg Kasavin, players might not initially like the character but would eventually find him "suitably endearing". Kasavin wrote that Tidus had the "surprising depth" characteristic of past Final Fantasy protagonists. Atlus character designer Kazuma Kaneko called him "a dashing lead character".
The revelation of his true nature as a being created by the fayth, and his apparent death, confused critics while leaving a sad impression. His gradually developing care for his abusive father was appreciated. 1UP found him the worst-dressed video-game character, while Logo TV noted Tidus' sex appeal as a reason for his popularity. In Console Video Games and Global Corporations, Mia Consalvo stated that although Tidus was designed from a Western perspective, which contrasted with the other characters' Eastern designs, the game managed to blend their looks and appeal to its audience. Regarding the character's hobby, RPGFan said that blitzball was well integrated into the narrative, as Tidus and Wakka become close friends in a short amount of time as a result of sharing the same passion. At the same time, Wakka acts like a mentor to Tidus: when the latter believes they should not focus on a sport in moments of chaos, Wakka tells him that Spira's citizens are fascinated by the sport. In Science Fiction Video Games, Neal Roger Tringham describes Final Fantasy X as a game that focuses on melancholy, since Tidus disappears as a consequence of taking down Sin, with his home city also lost in the process. While the game often deals with the concept of dead spirits, Joseph Roach notes that Tidus' nature as a dream of the fayth does not involve death; he is instead a memory-like being who stands out among the fayth for how maturely he is portrayed in the narrative. However, while Tidus becomes more heroic over the course of the game, to the point of defeating Yu Yevon and ending Spira's cycle of death, he is still haunted by death in the process, as the fayth are unable to maintain his physical form. Roach notes that, through his journeys and his relationship with Yuna, Tidus manages to become his own individual, especially in Final Fantasy X-2, when he regains his body. For the re-release of the first two X games, Nomura redesigned Tidus based on his older appearance in the audio drama Will. For the franchise's 30th anniversary, Square presented Tidus' new design in a museum. Director Takeo Kujiraoka of Dissidia NT noted that the staff received multiple requests from fans to include Tidus' Will look as an alternative design, but Nomura said it was not possible, as the company would first need to develop Final Fantasy X-3. For Final Fantasy X's 20th anniversary, several fans also said they wanted a Final Fantasy X-3 to give Tidus and Yuna a proper happy ending. Regarding the novel and Will, Inverse found several events so nonsensical that it felt Square was aiming to weaken Tidus, much as it often does with Sora in each Kingdom Hearts installment so that the next game can start him with fewer powers. The relationship between Tidus and Yuna was listed as one of the video-game "great loves" by GameSpot, and is often cited as one of the best romances in gaming. Gamasutra's Leigh Alexander, calling Tidus a "forgettable hero", nevertheless praised his and Yuna's relationship. In 2001, Tidus and Yuna won Game Informer's Best Couple of the Year award. Their kiss also attracted attention. Yuna's English voice actress, Hedy Burress, said that Tidus' interaction with Yuna gave her a humanized, "womanly aspect". According to Eurogamer's Tom Bramwell, Tidus and the other characters "make much more dignified and believable decisions than those made by their predecessors in other Final Fantasy games".
Kotaku's Ash Parrish called Tidus' first scene in the ruins of Zanarkand one of the most impressive scenes in gaming, and described the kiss scene Tidus and Yuna share in the underwater forest as one of the "first sex scenes" Parrish had experienced, due to its erotic atmosphere, which contrasted with relationships from the previous games, and to the song "Suteki da ne", which further enhanced the romantic mood. This was amplified by the dialogue that follows, in which the new couple swear their eternal love for each other. Den of Geek cited the couple's eventual kiss, in the latter part of the game, as one of the best scenes in the entire franchise.

### Popularity

Tidus' character has also appeared in popularity polls and features in video-game publications. He was Final Fantasy X's second-most-popular character, behind Auron, in a 2001 fan poll, and he remained at the top of a similar poll twenty years later. Complex listed him as the second-best Final Fantasy character, surpassed only by Cloud. His caring, cheerful personality (contrasting with previous Final Fantasy protagonists) was praised. GameZone ranked Tidus the third-best Final Fantasy character (behind Cloud and Sephiroth, also from Final Fantasy VII), with writer Heath Hooker calling him "a complete mixture of everything cheesy and everything emotional". Tidus was the fourth-most-popular male Final Fantasy character in a 2012 Square Enix poll. In a Famitsu poll, Tidus was voted the 20th-best video-game character in Japan. Christian Nutt of GamesRadar wrote that despite initial issues, Tidus' character development during the game made him more likable; Nutt ranked him the fourth-best Final Fantasy hero. Tidus and Yuna were included in The Inquirer's list of the most memorable video-game couples, with Tidus' self-sacrifice and the couple's farewell noted. To commemorate the franchise's 20th anniversary, Square released figurines of Tidus and other Final Fantasy protagonists. In 2020, Tidus was voted the seventh-best character in the entire Final Fantasy franchise in a Japanese poll by NHK. According to Square Enix producer Shinji Hashimoto, Tidus cosplay has been popular. The character has also inspired action figures and jewelry.

## See also

- Characters of Final Fantasy X and X-2
65,227,106
Greek case
1,084,705,939
1967 human rights case against Greece
[ "1967 in Greece", "1969 in Greece", "European Commission of Human Rights cases", "Greek junta", "Human rights abuses in Greece", "Human rights in Greece" ]
In September 1967, Denmark, Norway, Sweden and the Netherlands brought the Greek case to the European Commission of Human Rights, alleging violations of the European Convention on Human Rights (ECHR) by the Greek junta, which had taken power earlier that year. In 1969, the Commission found serious violations, including torture; the junta reacted by withdrawing from the Council of Europe. The case received significant press coverage and was "one of the most famous cases in the Convention's history", according to legal scholar Ed Bates. On 21 April 1967, right-wing army officers staged a military coup that ousted the Greek government and used mass arrests, purges and censorship to suppress their opposition. These tactics soon became the target of criticism in the Parliamentary Assembly of the Council of Europe, but Greece claimed they were necessary as a response to alleged Communist subversion and justified under Article 15 of the ECHR. In September 1967, Denmark, Norway, Sweden, and the Netherlands filed identical cases against Greece alleging violations of most of the articles in the ECHR that protect individual rights. The case was declared admissible in January 1968; a second case filed by Denmark, Norway and Sweden for additional violations, especially of Article 3 forbidding torture, was declared admissible in May of that year. In 1968 and early 1969, a Subcommission held closed hearings concerning the case, during which it questioned witnesses and embarked on a fact-finding mission to Greece, cut short due to obstruction by the authorities. Evidence in the proceedings ran to over 20,000 pages, but was condensed into a 1,200-page report, most of which was devoted to proving systematic torture by the Greek authorities. The Subcommission submitted its report to the Commission in October 1969. It was soon leaked to the press and widely reported, turning European public opinion against Greece. The Commission found violations of Article 3 and most of the other articles. On 12 December 1969, the Committee of Ministers of the Council of Europe considered a resolution on Greece. When it became apparent that Greece would lose the vote, foreign minister Panagiotis Pipinelis denounced the ECHR and walked out. Greece was the first (and until the 2022 exit of Russia, the only) state to leave the Council of Europe; it returned to the organization after the Greek democratic transition in 1974. Although the case revealed the limits of the Convention system to curb the behavior of a non-cooperative dictatorship, it also strengthened the legitimacy of the system by isolating and stigmatizing a state responsible for systematic human rights violations. The Commission's report on the case also set a precedent for what it considered torture, inhuman and degrading treatment, and other aspects of the Convention.

## Background

In the aftermath of World War II, European democratic states created the Council of Europe, an organization dedicated to promoting human rights and preventing a relapse into totalitarianism. The Statute of the Council of Europe (1949) required its members to adhere to a basic standard of democracy and human rights. In 1950 the Council of Europe approved the draft European Convention on Human Rights (ECHR), which came into force three years later. The European Commission of Human Rights (1954) and European Court of Human Rights (1959) were set up to adjudicate alleged violations of the Convention.
The Convention organs operate on the basis of subsidiarity and cases are only admissible when the applicants have exhausted domestic legal remedy (recourse to the national legal system to enforce one's rights). Greece was a founding member of the Council of Europe, and in 1953 the Hellenic Parliament unanimously ratified both the ECHR and its first protocol. Greece did not allow individuals who alleged that their rights had been violated by the Greek government to make applications to the Commission, so the only way to hold the country accountable for violations was if another state party to the ECHR brought a case on their behalf. Greece was not a party of the Court, which can issue legally binding judgements, so if the Commission found evidence of a violation, it was up to the Committee of Ministers to resolve the case. Although the Council of Europe has considerable investigatory abilities, it has hardly any power of sanction; its highest sanction is expulsion from the organization. In 1956, Greece filed the first interstate application with the Commission, Greece v. United Kingdom, alleging human rights violations in British Cyprus. ### 21 April 1967 coup On 21 April 1967, right-wing army officers staged a military coup shortly before the 1967 Greek legislative election was scheduled to occur. Alleging the coup was necessary to save Greece from Communist subversion, the new Greek junta governed the country as a military dictatorship. Its first edict was to issue Royal Decree no. 280, which cancelled several articles in the 1952 Constitution of Greece because of an indefinite official emergency. More than six thousand regime opponents were arrested immediately and imprisoned; purges, martial law, and censorship also targeted the ruling junta's opponents. The following months saw public demonstrations outside Greece opposing the junta. The suggestion of referring Greece to the European Commission of Human Rights was first raised in Politiken, a Danish newspaper, a week after the coup. The junta became a target of vociferous criticism in the Parliamentary Assembly of the Council of Europe for its human rights violations. On 24 April, the Parliamentary Assembly debated the Greek issue. The Greek representatives were not present at this meeting because the junta dissolved the Greek parliament and canceled their credentials. On 26 April, the Assembly passed Directive 256, inquiring into the fate of the missing Greek deputies, calling for the restoration of parliamentary, constitutional democracy, and objecting to "all measures contrary to the European Convention on Human Rights". Although both the assembly and the Committee of Ministers showed a reluctance to alienate Greece, ignoring the coup entirely would have put the Council of Europe's legitimacy at stake. On 3 May 1967, the junta sent a letter to the Secretary General of the Council of Europe, announcing Greece was in a state of emergency, which justified human rights violations under Article 15 of the European Convention on Human Rights. This implicit acknowledgement that the junta did not respect human rights was later seized upon by the Netherlands, Sweden, Norway and Denmark as the grounds for their complaint to the Commission. Greece did not provide any reason for this derogation until 19 September, when it asserted that the political situation before the coup justified emergency measures. The Commission considered this to be an undue delay. On 22–24 May, the Legal Committee met and proposed another resolution against the junta. 
The Standing Committee of the Assembly adopted this as Resolution 346 on 23 June. The resolution stated Greece had violated Article 3 of the Statute of the Council of Europe: "Every member ... must accept the principles of the rule of law and of the enjoyment by all persons within its jurisdiction of human rights and fundamental freedoms." The resolution expressed "the wish that the Governments of the Contracting Parties to the European Convention on Human Rights refer the Greek case, either separately or jointly, to the European Commission of Human Rights in accordance with Article 24 of the Convention". On 10 September, the Parliamentary Assembly debated documents prepared by the Legal Committee which stated that, although only the Commission could make a legally binding determination, the Greek derogation of the Convention was not justified. ## Admissibility ### First application Under Resolution 346, on 20 September 1967, three member-states of the Council of Europe (Sweden, Norway, and Denmark) filed identical applications against Greece before the Commission. They alleged violations of almost all the articles in the ECHR which protect individual rights: 5 (right to liberty and security of person), 6 (right to a fair trial), 8 (right to private and family life), 9 (freedom of thought, conscience and religion), 10 (freedom of expression), 11 (freedom of peaceful assembly and association), 13 (right to a legal remedy), and 14 (non-discrimination in securing the rights under the Convention, including on the basis of political belief). The applicants also stated Greece had not shown its invocation of Article 15 (derogations) to be valid. The applications, based on public decrees which prima facie (at first glance) violated the ECHR, referred to previous discussions in the Parliamentary Assembly in which the Greek junta was criticized. The next day, Belgian politician Fernand Dehousse proposed that the European Community bring a similar case against Greece, with which the EC had an association agreement. Although his proposal did not receive support, the EC cut off all economic aid to Greece. On 27 September, the Netherlands joined the suit with an identical application; the Commission merged all four applications on 2 October. The Scandinavian countries did not have an ethnic affinity to the victims of human rights violations, nor did they have a commercial interest in the case; they intervened because they felt it was their moral duty and because public opinion in their countries was opposed to the actions of the Greek junta. Max Sørensen, the president of the Commission, said that the case was "the first time that the machinery of the Convention ... had been set rolling by states with no national interest in lodging an application and apparently motivated by the desire to preserve our European heritage of freedom unharmed". Although the case was unprecedented in that it was brought without national self-interest, international promotion of human rights was characteristic of Scandinavian foreign policy at the time. Following attempts to boycott goods from the applicant countries in Greece, exporter industries pressured their governments to drop the case. For this reason, the Netherlands withdrew from active participation in the case. Belgium, Luxembourg and Iceland later announced that they supported the actions of the Scandinavian and Dutch governments, although this declaration had no legal effect. 
Attempts to elicit a similar declaration from the United Kingdom were unsuccessful, despite the opposition of many British people to the junta. The Wilson government stated that it "did not believe it would be helpful in present circumstances to arraign Greece under the Human Rights Convention". The Greeks claimed the case was inadmissible because the junta was a revolutionary government and "the original objects of the revolution could not be subject to the control of the Commission". It argued that governments had a margin of appreciation (latitude of governments to implement the Convention as they see fit) to enact exceptional measures in a public emergency. The Commission found the emergency principle was not applicable because it was intended for governments which operated within a democratic and constitutional framework, and furthermore the junta created the "emergency" itself. Therefore, it declared the case admissible on 24 January 1968—allowing it to proceed to a full investigation. ### Second application On 24 November 1967, The Guardian reporter and human rights lawyer Cedric Thornberry published an article investigating several cases of torture in Greece, finding that it "appears to be common practice". On 27 January 1968, Amnesty International published a report by two lawyers, Anthony Marreco and James Becket, who had traveled to Greece and collected first-hand accounts of human rights violations, including a list of 32 people who said that they had been tortured. As a result of these findings, the three Scandinavian countries filed another application on 25 March 1968 for breach of Articles 3 (no torture or inhuman or degrading treatment) and 7 (no ex post facto/retroactive law), as well as Articles 1 (right to property) and 3 (right to free elections) of Protocol 1 of the ECHR. The Greek government argued domestic remedies were available for these alleged violations, and therefore the application should be declared inadmissible under Article 26 of the ECHR. The applicants countered that such remedies were "in fact inadequate and ineffective". The Commission noted three circumstances that undermined the effectiveness of domestic remedies. First, people under administrative detention (i.e. without trial or conviction) had no recourse to a court. Second, Decree no. 280 suspended many of the constitutional guarantees related to the judicial system. Third, on 30 May, the Greek junta regime fired 30 prominent judges and prosecutors, including the president of the Supreme Civil and Criminal Court of Greece, for involvement in a decision that displeased the junta. The Commission noted in its report that this action showed the Greek judicial system lacked judicial independence. Therefore, according to the Commission, "in the particular situation prevailing in Greece, the domestic remedies indicated by the respondent government could [not] be considered effective and sufficient". The application was declared admissible on 31 May. The allegation of torture raised the public profile of the case in Europe and changed the Greek junta's defense strategy, since Article 15 explicitly forbade derogation of Article 3. From 1968, the Commission gave the case priority over all other business; as it was a part-time organization, the Greek case absorbed almost all of its time. On 3 April 1968, a Subcommission was formed to examine the Greek case, initially based on the first application. It held hearings at the end of September, deciding to hear witnesses at its subsequent meeting in November. 
Fact-finding, especially on-location, is rare in ECHR cases compared to those before other international courts, such as the Inter-American Court of Human Rights.

## Investigation

Greece outwardly cooperated with the investigation, but requested a delay at each step of the process, which was always granted. Foreign Minister Panagiotis Pipinelis tried to create the impression in the Committee of Ministers, which had all the decision-making power in the Council of Europe, that Greece was willing to change. He calculated that Western countries could be persuaded to overlook Greece's human rights violations, and that leaving the Council of Europe would only redouble the international pressure against the junta. Pipinelis, a conservative monarchist, tried to use the case as leverage against more hardline elements of the junta for his preferred political solution: the return of King Constantine and elections in 1971. The Greek government tried to hire international lawyers for its defense, but all refused to represent the country. Many Greek lawyers also refused, but Basil Vitsaksis agreed and for his performance was rewarded with an appointment as ambassador to the United States in 1969. Hearings with witnesses were held in the last week of November 1968. Although the Commission's proceedings were held in camera (closed), they were frequently leaked and journalists reported on them. The Greek government did not allow any hostile witnesses to leave the country, so the Scandinavians recruited Greek exiles to testify. During the hearings, two Greek witnesses brought by the junta escaped and fled to the Norwegian delegation, seeking asylum. They said they had been tortured, and their families in Greece were under threat. Although the junta struck them off the list of witnesses, they were allowed to testify as witnesses for the Commission. One of them did so; the other claimed to have been kidnapped by the head of the Norwegian delegation, Jens Evensen, and returned to Athens without testifying. The Subcommission announced that it would begin its investigation in Greece on 6 February 1969 (later postponed to 9 March at the request of the Greek government), using its power to investigate alleged violations in member countries. Article 28 of the ECHR requires member states to "furnish all necessary facilities" to carry out an investigation. Its interviews were held without representatives of either Greece or the applicant governments present, after wanted posters were put out in Greece for Evensen's arrest and because of fears that the presence of Greek officials would intimidate witnesses. Although it allowed some witnesses to testify to the Subcommission, the Greek government obstructed the investigation and prevented it from accessing some witnesses who had physical injuries, allegedly from torture. Because of this obstruction (and in particular because they were not allowed to visit Leros or Averoff Prison [el], where political prisoners were held), the Subcommission discontinued its visit. After the obstructed visit, the Subcommission refused all requests for delays and the Greek party retaliated by not filing the required paperwork. By this time, more torture victims had escaped from Greece and several testified at hearings in June and July, without the presence of either party. The Subcommission heard from 88 witnesses, collected many documents (some sent clandestinely from Greece) and amassed over 20,000 pages of proceedings.
Among those who gave evidence to the Subcommission were prominent journalists, ministers from the last democratically elected government, including former Prime Minister Panagiotis Kanellopoulos, and military officers such as Konstantinos Engolfopoulos, former Chief of the Hellenic Navy General Staff. Those who told the Subcommission they had suffered brutality in jail included Nikos Konstantopoulos, then a student, and Professors Sakis Karagiorgas [el] and Georgios Mangakis [de; el]. Amnesty investigators Marreco, Becket, and Dennis Geoghegan gave evidence and the junta sent hand-picked witnesses to testify. ## Friendly settlement attempt As the investigation was concluding, the Subcommission requested closing remarks from both parties and tried to achieve a friendly settlement (mutual agreement to resolve the identified violations) as required by Article 28(b); talks began to this effect in March 1969. The Scandinavian countries thought no friendly settlement was possible because torture was forbidden and non-negotiable. The Greek government proposed unannounced visits by the International Committee of the Red Cross. The Scandinavian parties also wanted a deadline for free elections, but the Greek government was unwilling to fix a date for parliamentary elections. Because of these differences, a friendly settlement was impossible, and the matter was forwarded to the full Commission. ## Findings On 4 October, the Subcommission adopted its final report and forwarded it to the full Commission, which adopted it on 5 November. Most of the report's more than 1,200 pages dealt with Articles 3 and 15. The report contained three sections: "History of the Proceedings and Points at Issue", "Establishment of the Facts and Opinion of the Commission" (the bulk of the report), and a shorter section explaining the failed attempt to come to a "Friendly Settlement". The report was widely praised for its objectivity and rigorous standard of evidence. Relying on direct evidence, the report did not cite the findings of third parties, such as the Red Cross or the reports of the rapporteurs for the Council of Europe's political branch. Becket stated that he found it "difficult to imagine how the Commission could have been more thorough in their investigation of the cases [of torture victims] they chose". He found the report to be "a signal achievement ... judicial in tone, objective in its conclusions, [it dealt] systematically and completely with the issues before the Commission". Legal expert A. H. Robertson noted that "the Commission required corroboration of the allegations made, offered the government every opportunity to rebut the evidence produced and even examined the possibility that (as alleged) many of the accounts of torture were deliberately fabricated as part of a plot to discredit the government". The Commission also found that Greece had infringed Articles 3, 5, 6, 8, 9, 10, 11, 13, and 14 as well as Article 3 of Protocol 1. For Article 7 of the Convention and Article 1 of Protocol 1, the Commission found no violation. The report made ten proposals for remedying the human rights violations in Greece; the first eight dealt with conditions of detention, control of police and independence of the judiciary while the last two recommended allowing a free press and free elections. With these suggestions, Commissioner Sørensen later recalled, the Commission hoped to convince Greece to promise the Committee of Ministers to restore democracy—the original primary aim of the case, according to Sørensen. 
### Article 3 The report devotes over 300 pages to Article 3, examining 30 cases of alleged torture to the standard of proof required in individual applications, based on the testimony of 58 witnesses. An annex to the report lists the names of 213 people alleged to have been tortured or otherwise ill-treated, and five who were said to have died from their injuries; more than 70 of these cases involved abuse by the Security Police in their headquarters on Bouboulinas Street in Athens. Rigorous local fact-finding was key to the report's findings and authority regarding Article 3. Legal scholar Isabella Risini writes that, while the report has a dispassionate tone, "The horrific methods of torture and ill-treatment as well as the suffering of individuals at the hands of their tormentors emerge clearly." Commissioner Philip O'Donoghue later stated that, "The value of hearing evidence in a local venue cannot be overestimated ... No written description, however colorful, could have been as informative as the visit to Bouboulinas Street in Athens." Of the 30 cases, sixteen were fully investigated, and eleven of these could be proved beyond a reasonable doubt. The remaining seventeen cases were blocked by Greek obstruction; of these cases, two had "indications" of torture, seven were "prima facie cases", and eight had "strong indications" of torture. The most common form of torture was falanga—the beating of the soles of the feet, which Greek police practiced on chairs or benches, with or without shoes. Other forms of torture included generalized beatings, electric shocks, blows to the male genitalia, dripping water onto the head, mock executions, and threats to kill the victims. The Commission also considered psychological and mental torture, and poor conditions of imprisonment. According to the Commission, overcrowding, uncleanliness, lack of adequate sleeping arrangements, and the severance of contact with the outside world were also inhuman treatment. The purpose of torture, according to the report, was "the extraction of information including confessions concerning the political activities and association of the victims and other persons considered to be subversive". Despite numerous substantiated cases of torture reported to the authorities, the authorities had made no effort to investigate, stop the practice, or punish those responsible. Because the torture met both "repetition" and "official tolerance" criteria, the Commission determined that the Greek government systematically practiced torture. The Commission was the first international human rights body to find that a state practiced torture as government policy. ### Article 5 The Subcommission documented instances in which citizens had been deprived of their liberty, for example, by being deported from Greece, subjected to internal exile to islands or remote villages where they were forbidden to speak with locals and required to report to police twice daily, or subjected to police supervision. Considering Article 5 in conjunction with Article 15, the Commission found that the Greek government had unjustly restrained liberty with some of these measures, which violated the ECHR because they were excessive and disproportionate to the alleged emergency, and because they were not imposed by a court. The Commission did not consider the permissibility of internal exile, travel restrictions, or confiscation of passports under Article 5, nor did it offer a clear definition of "deprivation of liberty". 
According to Jeffrey Agrest, writing in Social Research, the previous Greek Constitution may not have been in compliance with Article 5 as interpreted by the Commission, because it allowed detention without trial, charges, or appeal for a certain duration, after which the authorities had to bring charges or release the suspect. (The time limit on such extrajudicial detention was abolished by Royal Decree 280.) This question was not examined by the Commission. ### Article 15 The Subcommission heard 30 witnesses and also examined relevant documents, such as the manifestos of far-left parties, related to the dispute over whether Article 15 was applicable. The Greek government claimed that the United Democratic Left (EDA), alleged to have Communist tendencies, was forming a popular front and infiltrating youth organizations to seize power. The applicant governments retorted that if the EDA was in fact a danger to democracy, its power could be circumscribed by constitutional means, and it had been losing support in previous elections and becoming increasingly politically isolated. After examining the evidence, the Subcommission concluded the Greek communists had given up in their attempt to seize power by force and lacked the means to do so, while the popular front scenario was implausible. Further, the rapid and effective suppression of junta opponents after the coup was evidence the Communists were "incapable of any organised action in a crisis". The Greek government also alleged that a "crisis of institutions" due to political mismanagement made the coup necessary; the applicant countries stated that "disapproval of the programme of certain political parties, namely the Centre Union and the EDA, did not of itself entitle the respondent Government to derogate from the Convention under Article 15". The Subcommission found that, contrary to the claims of their opponents, the Center Union politicians Georgios and Andreas Papandreou were committed to democratic and constitutional government. The Subcommission also rejected the junta's argument that demonstrations and strikes justified the coup, as these disruptions to public order were not more severe in Greece than other European countries and did not rise to a level of danger such as to justify derogation. Although the Subcommission found that before the coup there had been an increase of "political instability and tension, of an expansion of the activities of the Communists and their allies, and of some public disorder", it believed that the elections scheduled for May 1967 would have stabilized the political situation. The Subcommission also investigated whether, even if an imminent danger justified the coup, the derogation could continue afterwards. The Greek government reported disorder that took place after the coup, including the formation of what it deemed to be illegal organizations and a series of bombings between September 1967 and March 1969. Some witnesses stated the repressive measures of the junta had exacerbated the disorder. Although it paid close attention to the bombings, the Subcommission found the authorities could control the situation using "normal measures". The Greek government's justification for the existence of an "emergency" relied heavily on the Commission's judgement in Greece v. United Kingdom, in which the declaration of the British government that there was an emergency in British Cyprus was given significant weight. 
The Commission took a narrower view of the government's margin of appreciation to declare an emergency in the Greek case, by ruling that the burden of proof was on the government to prove the existence of an emergency that necessitated extraordinary measures. The Commission ruled 10–5 that Article 15 did not apply, either at the time of the coup or at a later date. Furthermore, the majority judged that Greece's derogation did not meet procedural requirements and that being a "revolutionary government" did not affect Greece's obligations under the Convention. The five dissenting opinions were lengthy, indicating that for their authors this matter represented the crux of the case. Some of these opinions indicated agreement with the Greek government's reasoning that the coup countered an actual "serious danger threatening the life of the nation", and even agreed with the coup itself. Others argued that a "revolutionary government" had greater freedom to derogate from the Convention. Legal scholars Alexandre Charles Kiss [fr] and Phédon Végléris [fr] argue some of the dissenting opinions are effectively abstentions, which are not allowed under the Commission's rules. As of 2019, the Greek case is the only time in the history of the Commission or the Court that an invocation of Article 15 was deemed unjustified. The applicant countries also argued that the derogation violated Articles 17 and 18, relating to abuse of rights, on the grounds that those articles "were designed to protect democratic regimes against totalitarian conspiracies", while the Greek regime did not act to protect rights and freedoms. The Commission did not rule on this question because the derogation was deemed invalid on other grounds, but a separate opinion by Felix Ermacora explicitly recognized that the Greek regime abused its rights. ### Other articles Imposing martial law, arbitrary suspension of judges and convictions of people for "acts directed against the national security and public order", were judged to constitute a violation of Article 6 (right to a fair trial). The Commission found no violation of Article 7 over the constitutional amendment of 11 July 1967, alleged to be ex post facto (retroactive) law, because it was not enforced. A violation was found for Article 8, as arrests were unnecessarily conducted at night in the absence of an actual emergency, disrupting family life. Articles 9 and 10, guaranteeing freedom of conscience and freedom of expression respectively, were deemed to have been violated by censorship of the press. For Article 11, which guarantees freedom of association, the Commission found that it had been violated as the restrictions were not "necessary in a democratic society". Instead, the restrictions indicated an attempt to create a "police state, which is the antithesis of a 'democratic society'". Article 13, the requirement to have a legal remedy for violations, was violated due to flaws in judicial independence and lack of investigations into credible allegations of torture. The authorities were judged to have violated Article 14 due to discrimination in the application of other rights such as freedom of expression. The Commission found "a flagrant and persistent violation" of Article 3 of Protocol 1, which guaranteed the right to vote in elections, as "Article 3 of Protocol 1 implies the existence of a representative legislative body elected at reasonable intervals and constituting the basis of a democratic society". 
Because of the indefinite suspension of elections, "the Greek people are thus prevented from freely expressing their political opinion by choosing the legislative body in accordance with Article 3 of the said Protocol". ## Political processes The case revealed divisions within the Council of Europe between smaller states that emphasized human rights and larger ones (including the United Kingdom, West Germany, and France) which prioritized keeping Greece within NATO as a Cold War ally against the Eastern Bloc. A key consideration was that the United States did not oppose the Greek junta and, throughout the case, intervened in favor of keeping Greece inside the Council of Europe. The larger Western European countries used the case to deflect domestic criticism of their relations with the junta and calls for Greece to be ejected from NATO. Besides the judicial case, political processes against Greece in the Council of Europe had been ongoing in 1968 and 1969. In certain respects the process was similar to the Commission's procedure, because the Parliamentary Assembly appointed a rapporteur, Max van der Stoel, to visit the country and investigate the facts of the situation. The choice of Van der Stoel, a Dutch social-democratic politician, indicated the Assembly's hard line on Greece. Working from the findings of Amnesty International and Thornberry, he visited the country three times in 1968, but the junta barred him from returning because it claimed he lacked objectivity and impartiality. He found that, similar to Francoist Spain and the Estado Novo dictatorship in Portugal, which had been refused membership, it was "undeniable that the present Greek regime does not fulfill the objective conditions for membership in the Council of Europe as set out in Article 3 of the Statute". This was due in part to the lack of rule of law and protection of fundamental freedoms in Greece, and the lack of a parliament prevented Greece's participation in the Parliamentary Assembly. Van der Stoel presented his report, which unlike the Commission's findings was not bound by confidentiality, with a recommendation of expulsion under Article 8 of the Statute, to the Parliamentary Assembly on 30 January 1969. As Van der Stoel emphasized, this was distinct from the Commission's work as he did not evaluate if the ECHR had been violated. Following debate, the Parliamentary Assembly passed Resolution 547 (92 for, 11 against, 20 abstentions) which recommended the expulsion of Greece from the Council of Europe. During its meeting on 6 May 1969, the Committee of Ministers resolved to bring Resolution 547 to the attention of the Greek government and scheduled a vote on the resolution for its next meeting on 12 December 1969. Late 1969 saw a scramble for votes on the expulsion of Greece; the junta publicly threatened an economic boycott of the countries that voted for the resolution. Out of eighteen countries, Sweden, Denmark, the Netherlands, Luxembourg, Iceland, Switzerland, and the United Kingdom had already signaled their intention to vote for Greece's expulsion before the 12 December meeting. The United Kingdom had had an ambiguous stance towards Greece, but on 7 December, Prime Minister Harold Wilson gave a speech in the House of Commons indicating that the government would vote against Greece. ## Greek exit ### Leak of the report Shortly after the Commission received the report, it was leaked. Summaries and excerpts were published in The Sunday Times on 18 November and Le Monde on 30 November. 
Extensive newspaper coverage publicized the finding that Greece had violated the ECHR and torture was an official policy of the Greek government. The report echoed the findings of other investigations by Amnesty International and the US Committee for Democracy in Greece. The reports made a strong impact on public opinion; demonstrations against the junta were held across Europe. On 7 December, Greece issued a note verbale to the Secretary-General of the Council of Europe denouncing the leak and accusing the Commission of irregularities and bias, which made the report "null and void" in Greece's opinion. Greece also claimed that the Commission leaked the report to influence the 12 December meeting. The Commission's Secretariat denied responsibility for the leak; Becket stated that it "came from Greece itself and constituted an act of resistance by Greeks against the regime", according to "well-informed sources". After the leak, British ambassador to Greece Michael Stewart advised Pipinelis that if the junta would not agree to a concrete timeline for democratization, it would be best to withdraw voluntarily from the Council of Europe. ### 12 December meeting On 12 December, the Committee of Ministers met in Paris. Because its rules forbade a vote on the report until it had been in the Committee's hands for three months, the report, transmitted on 18 November 1969, was not discussed at their meeting. Pipinelis, the Greek Foreign Minister, gave a lengthy speech in which he discussed the causes of the 1967 coup, possible reforms in Greece, and the recommendations in the Commission's report. However, since his audience had copies of the Commission's report, and Pipinelis did not give a timeline for elections, his speech was not convincing. Eleven of the eighteen Council of Europe member states sponsored the resolution calling for Greece's expulsion; a resolution by Turkey, Cyprus, and France to delay the vote was unsuccessful. By this time, these states were the only ones opposing Greece's expulsion, and it became obvious that Greece would lose the vote. Historian Effie Pedaliu suggests the United Kingdom's dropping its support for the junta in the Council process rattled Pipinelis, leading to his sudden reversal. After the president of the Committee, Italian foreign minister Aldo Moro suggested a break for lunch, Pipinelis demanded the floor. In a face-saving move, he announced that Greece was leaving the Council of Europe under Article 7 of the Statute, pursuant to the junta's instructions, and walked out. This had the effect of denouncing three treaties of which Greece was a party: the Statute, the ECHR, and Protocol 1 of the ECHR. ### Aftermath The Committee of Ministers passed a resolution stating that Greece had "seriously violated Article 3 of the Statute" and had withdrawn from the Council of Europe, rendering suspension unnecessary. On 17 December 1969, the Secretary-General released a note verbale rejecting Greece's allegations against the Commission. The Committee of Ministers adopted the report at its next meeting on 15 April. It stated the "Greek government is not prepared to comply with its continuing obligations under the Convention", noting ongoing violations. Therefore, the report would be made public and the "Government of Greece [was urged] to restore without delay, human rights and fundamental freedoms in Greece" and abolish torture immediately. As Moro stated at the 12 December meeting, in practice Greece immediately ceased to be a member of the Council of Europe. 
The country announced on 19 February 1970 that it would not participate in the Committee of Ministers as it no longer considered itself a member. Pursuant to Article 65 of the ECHR, Greece ceased to be a party of the ECHR after six months, on 13 June 1970, and de jure left the Council of Europe on 31 December 1970. Pipinelis later told U.S. Secretary of State William Rogers that he regretted the withdrawal, as it furthered Greece's international isolation and led to more pressure against the junta at NATO. Greek dictator Georgios Papadopoulos issued a statement calling the Commission "a conspiracy of homosexuals and communists against Hellenic values", and declaring, "We warn our friends in the West: 'Hands off Greece'". ## Second case On 10 April 1970, Denmark, Norway and Sweden filed another application against Greece alleging violations of Articles 5 and 6 related to the ongoing trial of 34 regime opponents before the Extraordinary Military Tribunal of Athens, one of whom seemed likely to be executed. The applicant countries asked the Commission to intercede to prevent any executions from being carried out, a request that was granted. The Secretary-General of the Council of Europe submitted such a request at the behest of the Commission's president. Greece said the application was inadmissible because it had denounced the Convention, and domestic remedies had not been exhausted. The Commission ruled the application provisionally admissible on 26 May, a decision that became final on 16 July as Greece responded to queries. Greece's reasoning was rejected because its withdrawal from the ECHR did not take effect until 13 June and violations that occurred before that date remained subject to Convention jurisdiction. Also, exhaustion of domestic remedies did not apply because the violations related to "administrative practices". On 5 October, the Commission decided it could not decide the facts of the case because Greece's refusal to cooperate in the proceedings made it impossible for the Commission to carry out its usual functions. None of the defendants in the trial were executed, although it is unclear if the intervention affected the proceedings in Greece. Following the fall of the junta on 23 July 1974, Greece rejoined the Council of Europe on 28 November 1974. At the request of Greece and the three applicant countries, the case was struck in July 1976. ## Efficacy and results The report was hailed as a great achievement for exposing human rights violations in a document of substantial authority and credibility. Pedaliu argues that the case helped break down the concept of non-intervention over human rights violations. The process triggered extensive press coverage for nearly two years, increasing awareness of the situation in Greece and of the ECHR. Council of Europe Commissioner for Human Rights Thomas Hammarberg stated that, "The Greek case became a defining lesson for human rights policies in Europe." He argued the expulsion of Greece from the Council of Europe had "an influence and a great moral significance for many Greeks". The case led to development in the forensics of torture and a focus on developing techniques that could prove that torture had occurred. The case enhanced the prestige and influence of Amnesty International and similar organizations, and caused the Red Cross to reexamine its policies regarding torture. 
The case revealed the weakness of the Convention system as it existed in the late 1960s, because "on its own the Convention system was ultimately unable to prevent the establishment of a totalitarian regime", the main purpose of those who had proposed it in 1950. Unlike other Convention cases at the time, but similar to Ireland v. United Kingdom (a case charging mistreatment of Irish republican prisoners in Northern Ireland), it was an interstate case alleging systematic and deliberate human rights violations by a member state. The Commission, which had only moral power, dealt best with individual cases and with situations in which the responsible state cared about its reputation and therefore had an incentive to cooperate. Other cases involved minor deviations from a norm of protecting human rights; in contrast, the junta's premises were antithetical to the principles of the ECHR—something the Greek government did not deny. The lack of results led legal scholar Georgia Bechlivanou to conclude there was "a total lack of effectiveness of the Convention, whether direct or indirect". Changing a government responsible for systematic violations is outside the ECHR system's remit. Israeli law scholar Shai Dothan believes that the Council of Europe institutions created a double standard by dealing much more harshly with Greece than with Ireland in Lawless (1961). Because Greece had a very low reputation for human rights protection, its exit did not weaken the system. Instead, the Greek case paradoxically increased the prestige of the Commission and strengthened the Convention system by isolating and stigmatizing a state responsible for serious human rights violations. Commissioner Sørensen believed the Committee of Ministers' actions had resulted in a "lost opportunity" by playing the threat of expulsion too soon, which closed off the possibility of a solution under Article 32 and the Commission's recommendations. He argued that Greece's economic dependence on the EC and its military dependence on the United States could have been leveraged to bring the regime around, which was impossible once Greece left the Council of Europe. Although conceding the report was a "pyrrhic victory", Pedaliu argues that Sørensen's view fails to appreciate the fact that the Greek regime was never willing to curtail its human rights violations. The case stripped the junta of international legitimacy and contributed to Greece's increasing international isolation. Such isolation may have contributed to the junta's difficulties in effective government; it was unable to respond to the Turkish invasion of Cyprus, which caused the junta's sudden collapse in 1974. Human rights lawyer Scott Leckie argues that the international scrutiny of human rights in Greece helped the country to transition more rapidly to democracy. Greece's denunciation was the first time a regional convention on human rights was denounced by one of its members. In 2022, Russia became the second country to leave the Council of Europe, prior to a vote over its expulsion for its invasion of Ukraine. Becket found that "there is no doubt that the Convention System process was a significant restraint on the behaviour of the Greek authorities" and that because of international scrutiny, fewer people were tortured than would have been otherwise. On 5 November 1969, Greece signed an agreement with the Red Cross in an attempt to prove its intention to reform, although the agreement was not renewed in 1971.
The agreement was significant as no similar agreement had been signed by a sovereign country with the Red Cross outside of war; torture and mistreatment declined following the agreement. International pressure also prevented retaliation against witnesses in the case. Becket also considered it an incompetent blunder for Greece to defend itself when it was clearly in the wrong, rather than quietly leaving the Council of Europe. The definition of torture used in the Greek case significantly impacted the United Nations Declaration against Torture (1975) and the United Nations Convention against Torture (1984). It also led to another Council of Europe initiative against torture, the Convention for the Prevention of Torture and Inhuman or Degrading Treatment or Punishment (1987), which created the Committee for the Prevention of Torture. The Greek case also triggered the Conference on Security and Co-operation in Europe, which led to the Helsinki Accords. In 1998, George Papandreou, the Greek foreign minister, thanked "all those, both within the Council [of Europe] and without, who supported the struggle for the return of democracy to the country of its origin".

## Effect on ECHR jurisprudence

The Greek case was the first time the Commission formally found a violation of the ECHR, and its conclusions were influential precedents in later cases. In terms of admissibility under Article 26, the Commission decided that it did not just consider the formal existence of legal remedies but whether they were actually effective in practice, including consideration of whether the judiciary was genuinely independent and impartial. Building on Lawless v. Ireland, the case helped to define the circumstances that might qualify as "a public emergency threatening the life of the nation" under Article 15, although leaving open the question, unresolved as of 2018, whether successful coup plotters may derogate rights based on an emergency resulting from their own actions. According to Jeffrey Agrest, the most significant point of law established by the case was its interpretation of Article 15, as the judgement prevented the use of the article as an escape clause. The case also illustrated the limits to the margin of appreciation doctrine; the suspension of all constitutional rule of law was manifestly outside the margin. During the 1950s and 1960s, there was no definition of what constituted torture or inhuman and degrading treatment under Article 3 of the ECHR. The Greek case was the first time the Commission had considered Article 3. In the Greek case, the Commission stated that all torture was inhuman treatment, and all inhuman treatment was degrading. It found that torture was "an aggravated form of inhuman treatment" distinguished by the fact that torture "has a purpose, such as the obtaining of information or confessions, or the infliction of punishment", rather than the severity of the act. However, the purposeful aspect was marginalized in later cases, which considered that torture was objectively more severe than acts which amounted only to inhuman or degrading treatment. In the Greek case report, the Commission ruled that the prohibition on torture was absolute. The Commission did not specify whether inhuman and degrading treatment was also absolutely prohibited, and seemed to imply that they might not be, with the wording "in the particular situation is unjustifiable". This wording gave rise to a concern that inhuman and degrading treatment could sometimes be justified, but in Ireland v.
United Kingdom, the Commission found that inhuman and degrading treatment was also absolutely prohibited. A threshold of severity distinguished "inhuman treatment" from "degrading treatment". The former was defined as "at least such treatment as deliberately causes severe suffering, mental or physical which, in the particular situation is unjustifiable" and the latter, that which "grossly humiliates the victim before others, or drives him to an act against his will or conscience". Among the implications of the Greek case report is that poor conditions are more likely to be found to be inhuman or degrading if they are applied to political prisoners. The Commission reused its definitions from the Greek case in Ireland v. United Kingdom. The case also clarified that the Commission's standard of proof was beyond a reasonable doubt, a decision which left an asymmetry between the victim and state authorities, who could prevent the victim from collecting the evidence necessary to prove they had suffered a violation. The Court ruled in later cases that, where Article 3 violations seemed likely, it was incumbent upon the state to conduct an effective investigation into alleged ill-treatment. It also helped to define what constituted an "administrative practice" of systematic violations.
71,249,415
The Widows of Culloden
1,172,325,516
Fashion collection by Alexander McQueen
[ "2000s fashion", "2006 in Paris", "Alexander McQueen collections", "British fashion" ]
The Widows of Culloden (Scottish Gaelic: Bantraich de cuil lodair) is the twenty-eighth collection by British fashion designer Alexander McQueen, made for the Autumn/Winter 2006 season of his eponymous fashion house. It was inspired by his Scottish ancestry and is regarded as one of his most autobiographical collections. It is named for the widows of the Battle of Culloden (1746), often seen as a major conflict between Scotland and England. Widows makes extensive use of the McQueen family tartan and traditional gamekeeper's tweeds, as well as other elements taken from Highland dress. Historical elements reflected the fashion of the late Victorian era and the 1950s. The collection's runway show was staged on 3 March 2006 during Paris Fashion Week. It was dedicated to Isabella Blow, McQueen's friend and muse. The show marked a return to theatricality for McQueen, whose shows in the preceding two seasons had been comparatively conventional. Widows was presented on a square stage with a glass pyramid at its centre. Fifty-one ensembles were presented across roughly three phases, ending with a Pepper's ghost illusion of English model Kate Moss projected within the glass pyramid. Critical response was positive, especially towards McQueen's tailoring and the collection's balance of artistry and commercial practicality. The show has since been regarded as one of McQueen's best, with the illusion of Kate Moss considered its highlight. Ensembles from Widows are held by various museums and have appeared in exhibitions such as the McQueen retrospective Alexander McQueen: Savage Beauty. The Widows of Culloden collection and show, especially the Kate Moss illusion, have been extensively analysed, especially as an exploration of gothic literature in fashion. Widows is frequently discussed alongside McQueen's first Scottish-themed collection, Highland Rape (Autumn/Winter 1995), whose runway show was highly debated in the fashion world.

## Background

British designer Alexander McQueen was known in the fashion industry for dramatic, theatrical fashion shows featuring imaginative and occasionally controversial designs. Although McQueen was born in England, his father was of Scottish descent. His mother was fascinated with this family history, an interest she passed on to McQueen early in his childhood. McQueen maintained an interest in contentious periods of Scottish history, especially instances of Scottish–English conflict such as the Jacobite risings and the Highland Clearances. He resented the romanticisation of Scotland (sometimes called tartanry), particularly by other British fashion designers such as Vivienne Westwood, and drew inspiration from Scottish resistance to English domination. McQueen's first Scotland-inspired collection was the controversial Highland Rape (Autumn/Winter 1995), which marked his first use of the red, black, and yellow McQueen clan tartan. The collection became known for its runway show, which featured models walking unsteadily down the runway in torn and bloody clothing. Intended as a reference to what McQueen described as "England's rape of Scotland", the collection was described by many British fashion critics as misogynistic, a characterisation to which McQueen consistently objected. American journalists tended to be more positive about the collection: Amy Spindler of The New York Times called it "a collection packed with restless, rousing ideas, by far the best of the London season."
In retrospect, Highland Rape is considered to be the launching point of McQueen's fame, and has been credited with leading to his appointment as head designer at French luxury fashion house Givenchy. He held that post from 1996 to 2001; shortly after his contract ended he sold 51% of his label to the Gucci Group. ## Concept and creative process The Widows of Culloden (Autumn/Winter 2006) is the twenty-eighth womenswear collection designed by McQueen for his eponymous fashion house. McQueen spoke of wanting to "show a more poetic side" of his work with the collection. The collection was inspired by McQueen's Scottish ancestry, his love for the natural world, and Shakespeare's Scottish play Macbeth. Its name comes from the women who were widowed following the Battle of Culloden (1746). This engagement marked the defeat of the Jacobite rising of 1745, in which Charles Edward Stuart raised a Scots Jacobite army and attempted to regain the British throne. The battle led to British efforts to dismantle the Scottish clan system and ban the wearing of tartan, and has historically been mythologised as a conflict between Scotland and England. Widows features the return of many of McQueen's signature elements: sharp tailoring, altered silhouettes, and a dark yet romantic atmosphere. Multiple authors described it as something of a greatest hits collection for the designer. Soft dresses and flowing evening gowns accompanied McQueen's typical tailored suits and dresses. The light, ethereal aspect of many of the dresses is credited to McQueen's time at Givenchy, where he learned le flou, the dressmaking side of haute couture. McQueen personally created many of the patterns for the collection. Many elements from the collection were taken directly from or referred to traditional Highland dress, both upper- and lower-class. The primary fabrics used were tartan and tweed; Aran knit, brocade, black velvet, organza, and chiffon also featured in several ensembles. Some garments, especially those made with chiffon, were torn or left with unfinished seams. Other items were artificially aged to create the appearance of having been worn. As in Highland Rape, the tartan used in the collection is the red, yellow, and black McQueen family tartan, woven in a historic mill in Lochcarron, Scotland. Several of the tartan garments included aspects of the traditional féileadh-mór, a large piece of fabric which is wrapped around the body and held by a belt, and the kilt, a knee-length wraparound skirt. Other uses of tartan were non-traditional, such as tailored jackets and suits. The extensive use of tweed references the garb of traditional Scottish gamekeepers. Tweed production is indigenous to Scotland, especially in the Scottish Isles. In the 1840s, the fabric acquired an association with high-class leisure, after the British nobility began taking hunting trips to Scottish estates and adopting the tweed worn by locals and estate staff. The tweed also plays on McQueen's Autumn/Winter 2005 collection, which made similarly heavy use of the fabric. Hunting and gamekeeping are also referenced in the use of animal fur and items made from the feathers and wings of game birds. Usage of animal parts, both natural and imitation, was typical for McQueen; he was especially partial to the symbolism associated with birds. The show's headpieces made particular use of avian elements.
They were created by Irish-born milliner Philip Treacy, a frequent collaborator of McQueen's; they had been introduced by McQueen's friend and muse Isabella Blow. Some authors interpreted their complexity and emphasis on birds as a gesture toward Blow, who loved elaborate headwear and the sport of falconry. Other authors read them as an allusion to bird-women in mythology; author Katharine Gleason wrote that the headdresses imparted a "mythic quality" to the models. Many ensembles incorporated historical elements and allusions to other designers. Fashion historian Judith Watt noted references to the Arts and Crafts movement, as well as the S-bend silhouette common in the fashion of the 1870s and 1880s. The use of wasp waists, bustles, close tailoring, and belted jackets can be seen as a reference to Victorian and 1950s fashion. Some designs alluded to the military uniforms of World War II, and model Spitfire planes were repurposed as hair accessories. The winged headpieces referenced a series of winged headdresses made by Italian couturier Elsa Schiaparelli in the 1930s. Several of the evening gowns took inspiration from a dress designed in 1987 by fellow British designer John Galliano, nicknamed the "shellfish dress" for its layers of white organza ruffles that resembled stacked clamshells. McQueen had long admired and sought to emulate the complicated construction of the original. ## Runway show ### Staging and design The runway show for The Widows of Culloden was staged on 3 March 2006 at the Palais Omnisports de Paris-Bercy in Paris, and was dedicated to Isabella Blow. The invitations for the Widows show were black and white, with a print of an Edwardian cameo and the title of the show rendered in Scottish Gaelic: Bantraich de cuil lodair. Seating was so limited that the show was actually staged twice in one night to accommodate all the guests. McQueen typically worked with a consistent creative team for his shows, and Widows was no exception. McQueen's creative director Katy England was responsible for the show's overall styling, Eugene Souleiman styled hair, and Charlotte Tilbury styled make-up, which was kept minimal and neutrally toned. Production was handled by Gainsbury & Whiting, and John Gosling was responsible for soundtrack design. Widows marked a return to theatricality for McQueen, whose shows in the preceding two seasons – The Man Who Knew Too Much (Autumn/Winter 2005) and Neptune (Spring/Summer 2006) – had been comparatively conventional. Gosling's soundtrack incorporates songs from the 1993 film The Piano, scored by Michael Nyman; Scottish bagpipes and drums, various punk rock tracks, and a sound effect of howling wind. McQueen intended to use a track commissioned from Nyman for the finale, but dropped it in favour of a song from the soundtrack of the 1993 film Schindler's List. Scottish historian Murray Pittock wrote that the use of the Schindler's List song "suggested an analogy between Culloden and the Holocaust". ### Show Audience members entered the space through a large glass pyramid. Seats were arranged around a square stage of rough wood, reminiscent of the wooden stage in his collection No. 13 (Spring/Summer 1999). Another glass pyramid was placed in the centre of the square, leaving a catwalk in which the models walked counter-clockwise. In contrast to Highland Rape, when the models staggered or stalked angrily down the runway, the models in Widows moved in a stoic, upright manner which Gleason described as "the attitude of warrior princesses". 
The Widows of Culloden comprised fifty-one ensembles across three broad phases, each look worn by a different model. The show opened with dresses, sweaters, and tailored suits in tweeds, Aran knits, and brocades of muted neutral shades. The next phase comprised dark-coloured outfits with a focus on tartan and black leather, followed by a series of black evening gowns sometimes taken as mourning dresses. The final looks were a set of lighter-coloured gowns, some of which were worn with frock coats. The show closed with a Pepper's ghost illusion within the glass pyramid, featuring a life-sized projection of Kate Moss, an English model and friend of McQueen's, wearing a billowing chiffon dress. It was the first fashion show to employ this kind of illusory effect. After the illusion ended, as a curtain call, all the models walked the runway in a parade to the Donna Summer song "Last Dance" (1978), followed by McQueen. ## Significant ensembles The show opened with Ukrainian model Snejana Onopka wearing a tweed suit with a fur collar, cream shirt with ruffled front, and tan-coloured leather boots. The look was styled with the "Bird's Nest" headdress, made from a pair of mallard duck wings surrounding a silver bird's nest with blue eggs made from quartz Swarovski crystals, speckled to look like duck eggs. The nest and eggs were created by British jeweller Shaun Leane, while the headpiece was made by Treacy, both of whom were long-time McQueen collaborators. Look 12 was a full-length dress covered entirely in pheasant feathers. Its long torso and flared lower portion reference the style of gowns from the 1890s. Researcher Kate Bethune described the unique construction of the dress: "each of the feathers has been individually hand-stitched onto a length of ribbon, and then these lengths of ribbon have been stitched onto a net ground". Jess Cartner-Morley of The Guardian called it "meticulously engineered" and likened it to a full-length dress made of razor clam shells from Voss (Spring/Summer 2001). Fashion theorist Jonathan Faiers wrote that the lavish use of game bird feathers evoked Scotland's transformation into a "sports arena for absentee English landlords" in the late Victorian era. Look 14 was a blouse and midi skirt ensemble of a light material, printed with a collage of realistic images of birds, moths, and skulls. It was paired with a tan fur coat that matched the skirt's hemline, and a wide belt of dark leather at the waist. According to textile curators Clarissa M. Esguerra and Michaela Hansen, the print represented "the collection's themes of nature, metamorphosis, and memento mori". Look 33 was a one-shouldered tartan dress with tulle underskirt, styled with an undershirt of sheer fabric with rose designs in black, creating an illusion of arm and chest tattoos. The model's waist was cinched by a large belt in dark leather with a Celtic buckle. Roses paired with plaid was a combination used on some Jacobite clothing. On the runway, it was styled with the "Bird Skull" headpiece, which featured black feathers and a silver-case eagle skull set with dark Swarovski crystals. Brazilian model Raquel Zimmermann wore Look 47, one of the collection's most-discussed ensembles, a full-length ivory gown in silk tulle and lace with an antlered headdress. The flowers from the lace were individually cut and hand-sewn to the tulle. The dress ended in a fishtail hem with a tumble of lace ruffles set on the bias; Watt noted a similarity to dresses in the paintings of French artist James Tissot.
For the headpiece, a £2,000 piece of hand-embroidered lace was draped over and pierced by a pair of translucent white Perspex antlers to form a veil. The gown was based in part on the wedding dress of Sarah Burton, a designer with McQueen's label. Some sources refer to it as a wedding dress, but it has also been called the Widow's Weeds, after the Victorian term for women's mourning clothing. McQueen previously employed a similar headpiece using antlers and black lace in Dante (Autumn/Winter 1996), which reappeared in his 2004 retrospective show Black. Look 48 was a flowing gown in off-white chiffon, worn by Australian model Gemma Ward, with butterflies placed in the hair as accessories. Both Judith Watt and fashion journalist Dana Thomas described it as an evolution of the oyster dress from Irere (Spring/Summer 2003), which was inspired by Galliano's shellfish dress. Cartner-Morley connected the use of butterflies to the final showpiece of Voss, in which artist Michelle Olley was "besieged by giant moths". ## Reception Contemporary reviews were highly positive. McQueen received a standing ovation at the runway show, a rarity in fashion. Reviewers particularly noted McQueen's tailoring as one of the show's strongest features. The Kate Moss illusion was regarded as the highlight of the runway show. Women's Wear Daily named Widows one of their top ten collections for the Fall 2006 season. It had the highest pageviews at Style.com for any major collection that season, with 1.7 million. Robert Polet, then chairman of Gucci Group, the parent company for McQueen's label, reportedly cried "bravo!" upon seeing it and ran backstage to congratulate McQueen. Elizabeth McMeekin of The Glasgow Herald suggested the collection had helped drive a trend for Scottish-inspired fashion that season. Many reviews noted that the collection was both artistic and commercially viable. McQueen did not always achieve this balance – his designs were notorious for being unwearable – so critics felt this was a positive development for his brand. According to Jonathan Akeroyd, then CEO of the Alexander McQueen brand, the looks chosen for the runway "represented about half of what was available" for sale from the collection. Akeroyd reported that sales were strong for the collection and credited the theatrical runway show for driving brand awareness. Writing in the International Herald Tribune, fashion journalist Suzy Menkes concurred, calling the collection's eveningwear "a brand image-maker". Several of the evening gowns were suitable for ordering as custom bridal gowns. Writing for Vogue, Sarah Mower called it a "timely reconfirmation of McQueen's unique powers as a showman-designer". For The Daily Telegraph, Hilary Alexander said that it "restored the true spirit of the romantic renegade to the catwalk". Lisa Armstrong at The Times wrote that "almost every item was a showstopper but also eminently wearable", although she called the Moss illusion "unspeakably cheesy". According to Susannah Frankel at The Independent, the show was "a return to the unbridled spectacle and raw power with which he made his name", citing its "juxtaposition of fragility and strength, masculinity and femininity". The collection is viewed favourably in retrospect. In a 2011 interview with Vogue, Sarah Burton, who succeeded McQueen as the label's head after he died in 2010, described Widows as one of his most iconic collections.
In 2012, Judith Watt called the sculptural aspects of some designs close to the "purist cutting of Cristóbal Balenciaga", a Spanish designer known for technical precision and unique silhouettes. Dana Thomas said that Widows represented a "Best of McQueen in the 1990s" in her 2015 book Gods and Kings, writing that many ensembles appeared to be revisions of McQueen's own earlier designs. British fashion curator Claire Wilcox described Widows in 2016 as a "masterly, romantic collection". Edinburgh-based journalists Caroline Young and Ann Martin wrote that the collection's slim-fit tailored tartan suits "presented the designer's refined craftsmanship at its very best". Speaking in 2020, author Vixy Rae said "it had a focus of extreme technicality matched with richness of imagery with ideas taken directly from Scottish costume". ## Analysis ### Autobiography and historicism McQueen's work was highly autobiographical: he incorporated elements of his memories, feelings, and personal fixations into his designs and runway shows. Widows, with its emotional focus on both McQueen's and Scotland's history, is generally regarded as one of his most autobiographical collections. Deborah Bell, a professor of costume design, cited curator Andrew Bolton in noting that the "romantic version of historic narrative" from Highland Rape and Widows was "profoundly autobiographical" for McQueen, and suggests this is the reason that it was so impactful. Fashion theorists Paul Jobling, Philippa Nesbitt, and Angelene Wong called the collection "a personal reckoning with [McQueen's] own past", particularly his relationships with his mother, sister, and Isabella Blow. Women's Wear Daily noted that "the clothes seemed perfectly to describe McQueen's own eccentric point of view". Historical references are a major component of The Widows of Culloden, resulting in critical discussion over whether the collection is modernist or historicist. Cultural theorist Monika Seidl discussed Widows as a collection in the vein of Romanticism, a 19th century movement which emphasised emotion and glorification of the past. She cites it as an example of fashion that "self-confidently plays around with time when fragments from the past are blatantly and visibly reactivated as the new look of the moment". Cathy Horyn of The New York Times argues that McQueen was "a storyteller", which positioned him as anti-modernist. She criticised the historical elements in Widows as an unnecessary obstacle between McQueen and his designs. Historian Jack Gann argued that, in effect, it was both: the modern and historical elements combined to show that "perception of our place in time is simultaneously of multiple eras". Costume curator Lilia Destin noted that the collection subverted typical historical narratives by decentring warriors in favour of their widows, and wrote that it "awards their ghosts a sense of transhistorical agency through memory". Jobling, Nesbitt, and Wong argued that McQueen's description of Widows as more "triumphant" than Highland Rape actually indicates that the widows are celebrating, not mourning, the deaths of their husbands. Historian Timothy Campbell wrote about Highland Rape and Widows in the coda to his 2016 book Historical Style, describing them as counter-arguments to the notion that traumatic events in history must be experienced only in a state of grief. In Campbell's words, "McQueen suggests, Culloden must first be resurrected or remade as something other than tragedy in order to be historically impactful". 
### Highland Rape and Scottish culture Widows served as a counterpart to Highland Rape, which was also heavily inspired by Scottish culture. In comparison, Widows has been described as less angry and more reflective. McQueen himself reflected that Widows displayed a more positive view of Scotland, and related the difference to his own mental health, saying "I'm in a much clearer head space now than I was when I did the Highland." The collections were discussed together in a 2014 BBC Alba documentary, McQueen of Scots, which explored McQueen's Scottish heritage and its influence on his designs. Art historian Ghislaine Wood wrote that the "two collections provided contrasting but cathartic narratives on specific historical events ... and in many ways they reflect the complexity and drama of McQueen's vision". Author Katherine Gleason wrote that the use of tartan in Widows was "more polished, softened with ruffles and embroidery". To Lisa Armstrong, the restrained nature of Widows, compared to the overt political rage of Highland Rape, was a sign of maturity. Murray Pittock viewed both collections as part of the evolution and worldwide dissemination of tartan since the 1990s. While softer than Highland Rape, Widows is nonetheless interpreted as a statement about the appropriation of Scottish culture in a wider British context. McQueen's use of tartan in Widows, and the anglicised aspects of the designs, has been viewed as an exploration of the commodification of tartan in high fashion and British culture. Pittock noted that the wide-ranging visual elements of Widows "symbolically commented on the destruction, misprision and exploitative reinterpretation of Scotland for a global audience". For American fashion editor Robin Givhan, the use of tartan "hinted at the rebellion of the punk movement without embracing its anger". ### Gothic elements Critics have described The Widows of Culloden as an exploration of Gothic literary tropes through fashion, and some have compared it to specific works of classic literature. Gothic fiction, as an offshoot of Romantic literature, emphasises feelings of the sublime and the melancholy, but is set apart by its focus on fear and death. It is distinguished from other supernatural genres by its focus on the present as a state inevitably haunted by the past: literally, in the form of ghosts, as well as metaphorically, through memories and secrets. In McCaffrey's view, Widows exemplified melancholy – in the Gothic sense of "tensions between beauty and heartache" – through its visual staging. The stoic performance of the models represented a dignified grief that he likened to "visions of gothic heroines stalking the candle-lit corridors of an ancient castle". McCaffrey called the illusion of Kate Moss an example of highly staged melancholy, with every element contributing to the audience's emotional involvement. Kate Bethune presented a similar analysis, noting that the collection's sense of melancholy was "consolidated in its memorable finale". For Faiers, the models, and especially the Moss illusion, represented "the ghosts of the past unable to contend with the march of fashionable progress". The Bird's Nest headdress and the Bird Skull headpiece from Looks 1 and 33 respectively have been discussed as a set which represents the cycle of life and death, and the fragility of beauty. Discussing McQueen's proclivity for the gothic more generally, the professor of literature Catherine Spooner highlighted his fascination with dark aspects of history. 
She noted that in several of his most historical collections, including Highland Rape, Widows, and In Memory of Elizabeth Howe, Salem, 1692 (Autumn/Winter 2007), the "distressed fabric; screen-printed photographs; fragments of historical dress disassembled and reordered" reflect the disturbing aspects of history he drew on for inspiration. Author Chloe Fox wrote that McQueen had "mined the refined sense of an aristocratic past" to produce the collection. Literature professor Fiona Robertson found that McQueen's Scottish collections and the historical novels of Scottish writer Walter Scott epitomised the Scottish style of gothic by focusing on the country's "broken and self-alienated national history". The mood of Widows may be read as part of a shift toward darkness and melancholy as an aesthetic in fashion, which some authors have argued was a response to global turmoil and increased nihilism following the turn of the century. Art historian Bonnie English noted that McQueen was one of a number of major designers, including Karl Lagerfeld, John Galliano, Yohji Yamamoto, and Marc Jacobs, who produced sombre collections for the Fall/Winter 2006 season. In a 2019 New York Times essay discussing the cultural archetype of the melancholy woman, writer Leslie Jamison described Highland Rape and Widows as emblematic of an "aesthetic of suffering" in fashion. ### White gowns The lace gown with veiled antlers – the Widow's Weeds – has provoked significant critical response, much of which focused on the theatrical and animalistic effect created by the headdress. McQueen later said that the look "worked because it looks like she's rammed the piece of lace with her antlers". Lisa Skogh wrote that the veil over the antlers suggested a "dramatic bridal crown". Watt described it as "creating a phantasmagorical hybrid beast-woman". Author Sarah Heaton noted that the antlers "insist on the feminine relationship to the land, nature, and psyche". Both the antlered gown and the chiffon gown worn by Moss have been analysed as wedding dresses. From this perspective, the antlered gown has been read as especially ambiguous: the veil can be seen as entrapping, protecting, or concealing the bride who wears it. It has drawn comparisons to the wedding dress obsessively worn by the spinster Miss Havisham in the novel Great Expectations (1861). Heaton, whose work focuses on the intersection between fashion and literature, described these two long white dresses as "revisionist" wedding gowns that evoke the Gothic to subvert its limitations. In her view, the lace veil uplifted on the antlers of McQueen's ensemble is reminiscent of Miss Havisham's wedding veil, but where Miss Havisham's veil is shroud-like and grotesque, the veil of McQueen's design "suggests the strength of femininity". The illusion of Moss, on the other hand, evokes the Gothic trope of the barefoot "mad woman"; normally this figure would be confined to an attic or asylum, but again McQueen subverts the expectation by displaying her to the public, making her ephemeral and uncontained. Cultural theorist Monika Seidl considered the same pair of gowns from a more critical perspective in 2009, arguing that they framed their wearers "as trophy and ... as victim". For Seidl, far from presenting feminine strength, the antlers in combination with the gamekeepers' clothing referenced earlier in the show evoked an image of the bride as a hunting trophy. She viewed the Moss illusion as presenting a contained "Wiedergänger" or vengeful spirit. 
However, she described both dresses as persuasive in the way they "destabilise the notion of a bride". ### Other analyses Post-humanist theorist Justyna Stępień argued that McQueen's use of unusual silhouettes and structures, particularly in Widows and Plato's Atlantis (Spring/Summer 2010), provoked an emotional reaction in the audience and forced them to reconsider their perception of the human body. Analysing McQueen's tendency to incorporate and reinterpret styles from various cultures, Esguerra and Hansen compared the use of Highland dress in Widows to the interpretation of Islamic dress in Eye (Spring/Summer 2000). They found that Widows succeeded because of McQueen's connection to Scottish culture, enabling him to present a collection with a strong narrative. In contrast, they felt that Eye suffered from a lack of personal knowledge, and came across as insensitive to the diversity of Islamic culture. In an analysis of fashion as a performative intersection of sex and gender, Paul Jobling, Philippa Nesbitt, and Angelene Wong examined Widows as a "poetic text" relating to McQueen's identity as a gay man, and his ideal of feminine empowerment through fashion. They examined McQueen's use of feathers as a subversion and expansion of typical gender roles which see men as predators and women as prey. In their analysis, the models resemble bird-women in mythology, such as the Greek Harpy and the Russian Gamayun, who are beautiful yet dangerous threats to "phallocentric male power". McQueen's use of animal motifs thus allows women to explore different types of femininity and female power, without being constrained to any particular binary. They further argue that McQueen's combination of "masculine" tartan with "feminine" fabrics like lace is another subversion of the gender binary, allowing the widows to step into a liberated role following the deaths of their husbands. Writer Cassandra Atherton described using several McQueen collections, including Widows, in a university-level creative writing course to teach a connection between poetry and fashion, particularly how one can inspire the other. Literature professor Mary Beth Tegan described using Highland Rape and Widows together in 2021 as a teaching aid to engage university students in the short story "The Highland Widow" by Walter Scott (1827). She found that the "affective glamour of Gothic tale and fashion spectacle ... roused my students' interest and sustained their reflections" about both the story and the fashion. ## Legacy Actress Sarah Jessica Parker attended the opening of the 2006 AngloMania: Tradition and Transgression in British Fashion exhibition at the New York Metropolitan Museum of Art (The Met) wearing a version of Look 33 from Widows, the one-shouldered tartan dress. McQueen accompanied her wearing a matching tartan great kilt. Elizabeth McMeekin called it a "chic, tremendously of-the-moment" choice. Dresses from Widows have appeared in magazine photoshoots and editorials. Gemma Ward wore Look 33 for an editorial fashion shoot in the July 2006 issue of Harper's Bazaar, styled with the Spitfire headpiece from Look 44. Moss wore the original dress from the illusion on the cover of the May 2011 issue of Harper's Bazaar UK. Look 47, The Widow's Weeds, and Look 48, the chiffon dress with butterfly accessories, appeared in "Dark Angel", a 2015 retrospective editorial of McQueen's work in British Vogue by fashion photographer Tim Walker.
Fashion collector Jennifer Zuiker auctioned her McQueen collection in 2020, including at least two pieces from Widows. A tartan dress, Look 30 from the runway show, sold for a reported \$9,375, and a floral ballgown, Look 50, sold for a reported \$68,750. The Alexander McQueen brand archive in London retains ownership of several looks from the collection, including the Widow's Weeds and the pheasant feather dress. Swarovski owns the Bird's Nest and Bird Skull headdresses. The Met owns the original Look 33 tartan dress ensemble with the sheer undershirt, as well as Look 30, another tartan dress. The Victoria and Albert Museum (the V&A) in London owns a variant of the Kate Moss dress. The Los Angeles County Museum of Art owns several pieces from the collection. The National Gallery of Victoria (NGV) in Australia owns 14 ensembles and mockups from Widows, including Look 33 and Look 50. The majority of these were gifted by philanthropist Krystyna Campbell-Pretty in 2016 as part of a larger collection. ### Museum exhibitions Several ensembles from Widows – at least five tartan looks, the pheasant feather dress, and the Widow's Weeds – appeared in Alexander McQueen: Savage Beauty, a retrospective exhibition of McQueen's designs shown in 2011 at The Met and in 2015 at the V&A. The Kate Moss illusion made an appearance at both versions of the exhibition. In the original presentation at the Met, the illusion was recreated in miniature, but in the V&A restaging, it was presented in full size in its own room. The NGV displayed the collection donated by Campbell-Pretty in 2019 as the Krystyna Campbell-Pretty Fashion Gift. The exhibition included items from several McQueen collections, including Widows. Items from the collection appeared in the 2022 exhibition Lee Alexander McQueen: Mind, Mythos, Muse, first shown at the Los Angeles County Museum of Art and later in expanded form at the National Gallery of Victoria. Widows was placed in the Fashioned Narratives section of the exhibition, which highlighted collections focused on original and historical stories. Mind, Mythos, Muse compared one sleeveless dress with high-necked ruff and gold beading from the Widows collection to the ruffed and embroidered outfit worn by King Louis XIII of France in the painting Portrait of Louis XIII, King of France as a Boy by Flemish painter Frans Pourbus II (c. 1616). Two tartan ensembles from the runway show were compared to a 1780 portrait of Hugh Montgomerie, 12th Earl of Eglinton by John Singleton Copley. During the time the portrait was painted, the wearing of tartan in Scotland was prohibited, except for soldiers and veterans, by the Dress Act of 1746. In Copley's painting, Montgomerie, wearing tartan, poses triumphantly over defeated Cherokee warriors. In reality, Montgomerie was not present at the battle being depicted, which occurred during the Anglo-Cherokee War in 1760; nor was it a British victory. The exhibition presents the Widows outfits as a counterpoint to the colonialist narrative in the portrait.
19,680,601
Single Ladies (Put a Ring on It)
1,173,551,544
2008 single by Beyoncé
[ "2008 singles", "2008 songs", "Beyoncé songs", "Billboard Hot 100 number-one singles", "Black-and-white music videos", "Columbia Records singles", "Dance-pop songs", "Grammy Award for Song of the Year", "MTV Video of the Year Award", "Music videos directed by Jake Nava", "Song recordings produced by Beyoncé", "Song recordings produced by The-Dream", "Song recordings produced by Tricky Stewart", "Songs about marriage", "Songs with feminist themes", "Songs written by Beyoncé", "Songs written by Kuk Harrell", "Songs written by The-Dream", "Songs written by Tricky Stewart" ]
"Single Ladies (Put a Ring on It)" is a song recorded by American singer Beyoncé, from her third studio album, I Am... Sasha Fierce (2008). Columbia Records released "Single Ladies" as a single on October 8, 2008, as a double A-side alongside "If I Were a Boy", showcasing the contrast between Beyoncé and her aggressive onstage alter ego Sasha Fierce. It explores men's unwillingness to propose or commit. In the song, the female protagonist is in a club to celebrate her single status. The song was acclaimed by music critics, who praised the song’s beats and catchiness, "Single Ladies" won three Grammy Awards in 2009, including Song of the Year, among other accolades. Several news media sources named it as one of the best songs of 2008, while some considered it one of the best songs of the decade. It topped the US Billboard Hot 100 chart for four non-consecutive weeks and has been certified quadruple-platinum by the Recording Industry Association of America (RIAA). The song charted among the top ten within the singles category in several other countries. Globally, it was 2000's seventh best-selling digital single with 6.1 million copies sold. A black-and-white music video accompanied the single's release. It won several awards, including the Video of the Year at the 2009 MTV Video Music Awards. Beyoncé has performed "Single Ladies" on television and during her concert tours. The song and particularly its music video have been widely parodied and imitated. Several notable artists have performed cover versions. Media usage has included placement in popular television shows. ## Background and release "Single Ladies (Put a Ring on It)" was written by Beyoncé, Terius "The-Dream" Nash, Thaddis "Kuk" Harrell, and Christopher "Tricky" Stewart, and was produced by Nash and Stewart. Beyoncé recorded the song in May 2008 at the Boom Boom Room Studio in Burbank, California, and it was mixed by Jaycen Joshua and Dave Pensado, with assistance from Randy Urbanski and Andrew Wuepper. Nash conceptualized "Single Ladies" after Beyoncé's secret marriage to hip hop recording artist Jay-Z in April 2008. Stewart commented that the song was "the only public statement that [Beyoncé and Jay-Z had] ever made about marriage", and that while in the studio recording the song Beyoncé had remained tightlipped, even to the point of removing her wedding band. Beyoncé's marriage inspired Nash to compose a song about an issue that affected many people's relationships: the fear or unwillingness of men to commit. In an interview with Billboard magazine, Beyoncé added that she was drawn to the song because of the universality of the topic, an issue that "people are passionate about and want to talk about and debate". She stated that although "Single Ladies" is a playful uptempo song, it addresses an issue that women experience every day. In "Single Ladies", Beyoncé portrays her alter ego Sasha Fierce, which appears on the second part of I Am... Sasha Fierce. The song was released simultaneously with "If I Were a Boy"; as lead singles, they were meant to demonstrate the concept of the dueling personalities of the singer. This reinforced the theme of the album, which was created by placing its ballads and uptempo tracks on separate discs. The singles debuted on US radio on October 8, 2008; "Single Ladies" did so on mainstream urban New York radio station Power 105.1. 
Both singles were added to rhythmic contemporary radio playlists on October 12, 2008; "Single Ladies" was sent to urban contemporary playlists the same day, while "If I Were a Boy" was instead classified for contemporary hit radio. The two songs were released as a double A-side single on November 7, 2008, in Australia, New Zealand, and Germany. Dance remixes of the song were made available in the US on February 10, 2009, and in Europe on February 16, 2009. "Single Ladies" was not originally released as a single in the UK, but the song became increasingly popular there and reached the top ten in the UK Singles Chart as a result of download sales. On February 16, 2009, it was released as a CD single, and the dance remixes became available as a digital download. ## Composition and lyrical interpretation "Single Ladies" is an uptempo dance-pop, bounce, and R&B song with dancehall and disco influences. The song is set in common time, apart from a different time signature in the first measure and a one-measure change before the final chorus. It makes use of staccato bounce-based hand claps, Morse code beeps, an ascending whistle in the background, and a punchy organic beat. The instrumentation includes a bass drum, a keyboard and spaced-out synthesizers that occasionally zoom in and out; one commentator, Sarah Liss of CBC News, noted that their arrangement comes across as surprisingly light rather than dense. According to the sheet music published at Musicnotes.com by Sony/ATV Music Publishing, "Single Ladies" is written in the key of E major and played in a moderate groove of 96.9 beats per minute. Beyoncé's vocals range from the note of F<sub>3</sub> to D<sub>5</sub>. It has a chord progression of E in the verses and Bdim–C–Bdim–Am in the chorus. J. Freedom du Lac of The Washington Post noted the song features "playground vocals". "Single Ladies" is musically similar to Beyoncé's 2007 single "Get Me Bodied"; Andy Kellman of AllMusic called it a "dire throwback" to the song. Stewart and Harrell said in an interview given to People magazine that the similar rhythm of the two songs is "what Beyoncé responds to". Ann Powers of the Los Angeles Times saw the song's theme of female empowerment as an extension of that of "Irreplaceable" (2006), and Daniel Brockman of The Phoenix noted that its usage of "blurry pronouns" such as "it" resembles Beyoncé's 2005 single "Check on It". Liss commented that the beat of "Single Ladies" evokes African gumboot dancing and schoolyard Double Dutch chants, a view shared by Douglas Wolf of Time magazine. Trish Crawford of the Toronto Star concluded that "Single Ladies" is "a strong song of female empowerment", and other music critics have noted its appeal to Beyoncé's fan base of independent women, as in the song Beyoncé offers support to women who have split up from their no-good boyfriends. In "Single Ladies", Beyoncé emphasizes her aggressive and sensual alter ego Sasha Fierce. She displays much attitude in her voice, as stated by Nick Levine of Digital Spy. Echoing Levine's sentiments, Liss wrote that Beyoncé sounds "gleefully sassy". The lyrics reflect post-breakup situations. Accompanied by robotic-like sounds, the opening lines of the song are call and response; Beyoncé chants, "All the single ladies", and background singers echo the line each time. In the first verse, Beyoncé narrates the recent end to a poor relationship after she "cried [her] tears for three good years". She reclaims her right to flirt, have fun, and find a lover who is more devoted than the previous one.
Beyoncé goes out to celebrate with her friends in a club where she meets a new love interest. However, her former boyfriend is watching her, and she directs the song to him. She then sings the chorus, which uses minor chords and contains several hooks, "If you like it then you shoulda put a ring on it ... Oh oh oh". In the second verse, Beyoncé tells her ex-lover that, as he did not attempt to make things more permanent when he had the chance, he has no reason to complain now that she has found someone else. On the bridge, she affirms that she wants her new love interest "to make like a prince and grab her, delivering her to 'a destiny, to infinity and beyond'" while "Prince Charming is left standing there like the second lead in a romantic comedy". Towards the end of the song, Beyoncé takes a more aggressive vocal approach and employs a middle eight as she sings, "And like a ghost I'll be gone". When she chants the chorus for the third and final time, her vocals are omnipresent within layers of music, as described by Frannie Kelley of NPR. An electronic swoop tugs in continuously until the song ends. ## Critical reception ### Reviews The song received critical acclaim. Nick Levine of Digital Spy particularly praised its beats, which according to him, "just don't quit". Michelangelo Matos of The A.V. Club wrote that the song is "fabulous, with glowing production, a humongous hook, and beats for weeks". Ann Powers of the Los Angeles Times was also impressed with the overall production of the song, specifically the chorus, adding "More than most female singers, Beyoncé understands the funky art of singing rhythmically, and this is a prime example." Fraser McAlpine of BBC Online considered "Single Ladies" to be the best song Beyoncé has attempted since "Ring the Alarm" (2006) and complimented the former's refrain, describing it as "so amazingly catchy that it provides a surprisingly solid foundation for the entire song". Alexis Petridis of The Guardian commended the threatening atmosphere that "Single Ladies" creates by using minor chords. Daniel Brockman of The Phoenix complimented the song's use of the word "it", and wrote that the technique "sums up her divided musical persona far more effectively than the [album's] two-disc split-personality gimmick." Darryl Sterdan of Jam! called the song single-worthy, and wrote that it is "a tune that actually sounds like a Beyoncé number". Sarah Liss of CBC News wrote that "Single Ladies" represents Beyoncé at her best, describing it as "an instantly addictive [and] a bouncy featherweight dance-pop track". She further commented that it was pleasant to hear a voice which "changes timbre naturally, a voice with actual cracks and fissures (however slight)" in contrast to the "Auto-Tune epidemic that seems to be plaguing so many of her mainstream pop peers". Douglas Wolf of Time magazine added that "Single Ladies" is a sing-along which allows Beyoncé to demonstrate her virtuosity and "a focused, commanding display of individuality that speaks for every raised hand without a ring on it". Sasha Frere-Jones of The New Yorker wrote that the song combines a jumble of feelings and sounds that "don't resolve but also never become tiring". He concluded that "Single Ladies" was generally jubilant and that Beyoncé's vocals were pure and glimmering. Andy Kellman of AllMusic and Jessica Suarez of Paste magazine noted the song as one of the standouts from I Am... Sasha Fierce, and saw similarities to "Get Me Bodied". 
Writers praised the song's dance beat; Colin McGuire of PopMatters named "Single Ladies" one of Beyoncé's best dance tracks. Spence D. of IGN Music described the song as a "Caribbean flair and booty shaking jubilation that should get even the most staid of listeners snapping their necks and gyrating joyfully". Joey Guerra of the Houston Chronicle wrote that it is a "hip-shaking club" song similar to "Check on It". Leah Greenblatt of Entertainment Weekly magazine wrote that "Single Ladies" is a "giddy, high-stepping hybrid of lyrical kiss-off and fizzy jump-rope jam". Describing the song as a "winning high-stepping" one, Adam Mazmanian of The Washington Times wrote that "Single Ladies" is designed to get the women out on the dance floor as Beyoncé sings it with "a genuinely defiant, independent voice". Some critics were unimpressed by "Single Ladies". Mariel Concepcion of Billboard magazine called it "standard screech-thump fare". The Observer's Adam Mattera saw "Single Ladies" and "Diva" as potential sources of inspiration for drag queens, although they may leave others confused. Sal Cinquemani of Slant Magazine criticized its lyrical inconsistencies, suggesting it is a "leftover" from B'Day. ### Recognition Rolling Stone named "Single Ladies" the best song of 2008, and wrote, "The beat ... is irresistible and exuberant, the vocal hook is stormy and virtuosic." "Single Ladies" ranked as the second-best song of the 2000s decade in the magazine's 2009 readers' poll, and Rolling Stone critics placed it at number 50 on the list of the 100 Best Songs of the Decade. In 2021, the same magazine placed the song at number 228 on its list of the 500 Greatest Songs of All Time. "Single Ladies" was placed at number two on MTV News' list of The Best Songs of 2008; James Montgomery called it "hyperactive and supercharged in ways I never thought possible. It's epic and sexy and even a bit sad." "There is absolutely zero chance Beyoncé ever releases a single like this ever again", Montgomery concluded. Time magazine's critic Josh Tyrangiel, who called the song "ludicrously infectious", ranked it as the seventh-best song of 2008. Douglas Wolf of the same publication placed it at number nine on his list of the All-Time 100 Songs. "Single Ladies" appeared at number six on the Eye Weekly's critics' list of the Best Singles of 2008, and at number six on About.com's Mark Edward Nero's list of the Best R&B Songs of 2008. On The Village Voice's year-end Pazz & Jop singles list, "Single Ladies" was ranked at numbers three and forty-one in 2008 and 2009 respectively. Additionally, the Maurice Joshua Club Mix of the song was ranked at number 443 on the 2008 list. "Single Ladies" was named the best song of the 2000s decade by Black Entertainment Television (BET). Sarah Rodman, writing for The Boston Globe, named "Single Ladies" the fourth most irresistible song of the decade, and stated, "[Beyoncé] combined leotards with crass engagement-bling baiting into one delicious sexy-yet-antiquated package. The video had the whole world dancing and waving along via YouTube." VH1 ranked "Single Ladies" at number sixteen on its list of The 100 Greatest Songs of the 2000s. In his book Eating the Dinosaur (2009), Chuck Klosterman wrote that "Single Ladies" is "arguably the first song overtly marketed toward urban bachelorette parties". Jody Rosen of The New Yorker credited the melodies that float and dart over the thump with creating a new sound in music that didn't exist before Beyoncé.
He further wrote, "If they sound 'normal' now, it's because Beyoncé, and her many followers, have retrained our ears." ## Accolades "Single Ladies" has received a number of awards and nominations, including the Song of the Year, Best R&B Song and Best Female R&B Vocal Performance at the 52nd Grammy Awards. It also won the awards for Favorite Song at the 2009 Kids' Choice Awards, Song of the Year at the 2009 Soul Train Music Awards, and Best R&B Song at the 2009 Teen Choice Awards. The American Society of Composers, Authors and Publishers (ASCAP) recognized "Single Ladies" as one of the most performed songs of 2009 at the 27th ASCAP Pop Music Awards. The song was nominated in the Best Song category at the 2009 NAACP Image Awards and in the English-language "Record of the Year" category at the 2009 Premios Oye! Awards. It was also nominated for Record of the Year at the 2009 Soul Train Music Awards, Viewer's Choice Award at the 2009 BET Awards, Best R&B/Urban Dance Track at the 2009 International Dance Music Awards, and World's Best Single at the 2010 World Music Awards. ## Chart performance "Single Ladies" debuted at number 72 on the US Billboard Hot 100 chart issue dated November 1, 2008. On December 6, 2008, it moved from number 28 to number two on the Hot 100 chart, as a result of its debut at number one on the Hot Digital Songs chart, selling 204,000 digital downloads. The song became Beyoncé's fifth solo single to top the Hot Digital Songs chart. "If I Were a Boy" charted at number three on the Hot 100 chart the same week, and thus Beyoncé became the seventh female in the US to have two songs in the top five positions of that particular chart. The following week "Single Ladies" climbed to number one on the Hot 100 chart, selling 228,000 downloads, and became Beyoncé's fifth solo single to top the chart. It tied her with Olivia Newton-John and Barbra Streisand at number six on the list of female artists with the most Hot 100 number one hits, as of 2010. The song was at the top of the chart for four non-consecutive weeks, during the last of which digital downloads of "Single Ladies" increased by 157 percent to 382,000 units—its best week of digital sales. For the week ending January 15, 2009, the song moved to number one on the Hot 100 Airplay chart with 147.3 million listener impressions. It reached number one on the Hot R&B/Hip-Hop Songs chart, where it remained for twelve consecutive weeks. "Single Ladies" topped the Pop Songs and the Hot Dance Club Play charts, and reached number two on the Pop 100 chart. The song has been certified quadruple-platinum by the Recording Industry Association of America (RIAA) for sales of over 4,000,000 copies. It passed the 5 million sales mark in October 2012. In August 2022, RIAA updated Beyoncé's sales, certifying "Single Ladies" as having sold more than 9 million copies. "Single Ladies" debuted at number 81 on the Canadian Hot 100 chart for the week ending November 29, 2008. On January 24, 2009, its ninth charting week, it moved to its peak spot at number two, and was subsequently certified double-platinum by the Canadian Recording Industry Association (CRIA) for sales of over 160,000 copies. The song peaked at number seven, and spent 112 weeks on the UK Singles Chart. It topped the UK R&B Chart, where it succeeded the song's double A-side, "If I Were a Boy". On August 7, 2020, "Single Ladies" was certified double-platinum by the British Phonographic Industry (BPI) for sales of over 1,200,000 copies. 
As of November 2013, it has sold 704,000 copies in the UK. On the Irish Singles Chart, it reached number four and enjoyed twenty weeks of charting, while on the Japan Hot 100 chart it made its way to number 25. In Australia, the single attained a high point of number five on the ARIA Singles Chart, and received a nine-times platinum certification from the Australian Recording Industry Association (ARIA) for sales of over 630,000 copies. It peaked at number two on the New Zealand Singles Chart, and was certified platinum by the Recording Industry Association of New Zealand (RIANZ) for shipment of over 15,000 copies. "Single Ladies" appeared on several charts in mainland Europe, and peaked at number 20 on the European Hot 100 Singles chart. It reached the top 10 in the Netherlands, Italy and Spain, and the top 40 in both Belgian territories (Flanders and Wallonia), as well as in Hungary, Norway, Sweden and Switzerland. "Single Ladies" was 2009's seventh best-selling digital single with 6.1 million units sold worldwide. ## Music video ### Background and concept The music video for "Single Ladies" was shot immediately after that of "If I Were a Boy", but it received less attention during production than the "higher-gloss, higher-profile video" for "If I Were a Boy". Both videos were shot in black-and-white in New York City and were directed by Jake Nava, with whom Beyoncé had worked on previous music videos including "Crazy in Love" and "Beautiful Liar". "Single Ladies" was choreographed by Frank Gatson Jr. and JaQuel Knight, and incorporates J-Setting choreography. The two music videos premiered on MTV's Total Request Live show on October 13, 2008 to reinforce the concept of conflicting personalities. The videos were released to other media outlets on the same date and subsequently included on Beyoncé's remix album with videography, Above and Beyoncé, and the platinum edition of I Am... Sasha Fierce. Beyoncé told Simon Vozick-Levinson of Entertainment Weekly that the inspiration for the video was a 1969 Bob Fosse routine entitled "Mexican Breakfast" seen on The Ed Sullivan Show, which featured Fosse's wife, Gwen Verdon, dancing with two other women. "Mexican Breakfast" had become an Internet viral sensation the previous summer after Unk's "Walk It Out" was dubbed over the original mix. Beyoncé wanted to attempt a similar dance and eventually, the choreography of "Single Ladies" was liberally adapted from "Mexican Breakfast": > I saw a video on YouTube. [The dancers] had a plain background and it was shot on the crane; it was 360 degrees, they could move around. And I said, 'This is genius.' We kept a lot of the Fosse choreography and added the down-south thing—it's called J-Setting, where one person does something and the next person follows. So it was a strange mixture ... It's like the most urban choreography, mixed with Fosse—very modern and very vintage. Beyoncé wanted a simple music video; it was filmed with minimal alternative camera shots and cuts, and no changes to hairstyles, costumes and sets. According to JaQuel Knight, Beyoncé also wanted the video to feel "good and powerful" and include choreography that could be attempted by anybody. The day the video was shot, the song was divided into three parts. Nava deliberately used lengthy shots so that viewers "would connect with the human endeavor of Beyoncé's awe-inspiring dance", with all the changes in looks, angles, and lighting executed live on-camera because he wanted to keep the feel "very organic and un-gimmicky". 
The styling was inspired by a Vogue photo shoot. In the video Beyoncé wears a titanium roboglove designed by her long-time jeweler, Lorraine Schwartz, to complement her alter ego Sasha Fierce. The glove consists of several pieces, including a ring and a separate component that covers Beyoncé's upper arm. She first wore the roboglove on the red carpet at the MTV Europe Music Awards on November 8, 2008. The video shoot took around twelve hours. Many performances of the song were filmed without interruption, and edited together to give the impression that the final video was filmed in a single take. ### Synopsis In the video for "Single Ladies", emphasis is laid on Beyoncé's more aggressive and sensual side, her alter ego Sasha Fierce. It shows her in an asymmetrical leotard and high-heels, with two backup dancers, Ebony Williams and Ashley Everett. Beyoncé's mother, Tina Knowles, designed the high-cut leotards after seeing something similar in the American musical films A Chorus Line and All That Jazz. The dance routine incorporates many styles, including jazz, tap, and hip hop, and is credited with popularizing J-Setting, a flamboyant lead and follow dance style prominent in many African American gay clubs across Atlanta and used by the all-female Prancing J-Settes dance troupe of Jackson State University. The video features Beyoncé and her two companions dancing inside an infinity cove, which alternates between black and white and places the focus on the complex choreography. Throughout the video the women click their heels and shake their hips and legs. However, the main intention is to attract the viewers' attention toward their hands and ring fingers as they do the hand-twirl move. At one point during the video, the dancers run up to a wall, which, according to Frank Gatson Jr., pays homage to Shirley MacLaine's act in the 1969 film Sweet Charity. Toward the end of the video, Beyoncé flashes her own wedding ring on her finger. ### Response and accolades Although the video for "Single Ladies" was the cheapest and quickest of all her videos to produce, Beyoncé felt that it ended up being "the most iconic ... something special". It spawned a dance craze and inspired thousands of imitations all over the world, many of which were posted on YouTube. In an interview with MTV, Beyoncé expressed her appreciation of the public's response to the video, and stated that she had spent much time watching several of these parodies: "It's beautiful to feel you touch people and bring a song to life with a video." Nava also expressed his surprise at the positive reception of the video, and attributed its success to the video's understated, less-is-more approach. In an interview with Chandler Levack for Eye Weekly, Toronto director Scott Cudmore stated that the Internet age has impacted the way music videos are made and perceived by audiences. Although Cudmore believes that the music video as a medium is "disappearing ... from the mainstream public eye", he credited "Single Ladies" with its resurgence, and stated that after the video appeared on the Internet, people began to "consciously look for music videos because of its art". The music video has won several awards and accolades. It was voted Best Dance Routine in the 2008 Popjustice Readers' Poll, and at the 2009 MTV Video Music Awards it won Best Choreography, Best Editing, and Video of the Year, becoming the first black-and-white video to win Video of the Year since Don Henley's "The Boys of Summer".
The song also won Best Video at the 2009 MTV Europe Music Awards, the 2009 MOBO Awards, and the 2009 BET Awards. The video has also received many nominations: Best Video in the 2009 Popjustice Readers' Poll (placed 4th); nine (including the three that it won) at the 2009 MTV Video Music Awards; Best International Artist Video at the 2009 MuchMusic Video Awards (losing to Lady Gaga's "Poker Face"); Outstanding Music Video at the 2009 NAACP Image Awards; and two at the 2009 MTV Australia Awards for Best Video and Best Moves. The video was ranked at number four on BET's Notarized: Top 100 Videos of 2008 countdown, and at number three on VH1's Top 40 Videos of 2009. The video was voted Best Music Video of the Decade by MUZU.TV. It was voted fifth-best music video of the 2000s by readers of Billboard magazine. Claire Suddath of Time magazine included it in her 30 All-Time Best Music Videos, writing that "sometimes the best creations are also the simplest". In 2013, John Boone and Jennifer Cady of E! Online placed the video at number one on their list of Beyoncé's ten best music videos, writing, "[It has] All of the sex appeal. Ever... Beyoncé doesn't need anything but an empty room in this one. It's all about the dancing. It's all about the leotard. It's all about the fierceness. And it's epic." The music video was certified platinum by CRIA for shipments of 10,000 units. In 2021, Rolling Stone named "Single Ladies" the 12th greatest music video of all time, while Slant Magazine named it the 36th. #### 2009 MTV Video Music Awards incident "Single Ladies" was nominated for nine awards at the 2009 MTV Video Music Awards, ultimately winning three, including Video of the Year. It lost the Best Female Video category to American country pop singer Taylor Swift's "You Belong with Me". Swift's acceptance speech was interrupted by rapper Kanye West, who climbed onto the stage, grabbed her microphone, declared the "Single Ladies" video "one of the best videos of all time", shrugged, and left the stage. Footage of Beyoncé in the audience looking shocked was then shown. When Beyoncé won the Video of the Year award later that night, she reminisced about when she won her first MTV award with her former group, Destiny's Child, and called the experience "one of the most exciting moments in [her] life". She then invited Swift on-stage to finish her speech and "have her moment". ## Live performances Beyoncé first promoted "Single Ladies" in a concert organized by Power 105.1 radio in New York on October 29, 2008, and subsequently performed the song at various awards ceremonies, concerts and television shows, including Saturday Night Live (SNL) on November 15, 2008. That night, Beyoncé was featured in a parody of the song's music video, where the two female backup dancers from the video were replaced by pop singer Justin Timberlake and SNL cast members Andy Samberg and Bobby Moynihan. On November 16, 2008, Beyoncé performed a medley of "If I Were a Boy", "Single Ladies", and "Crazy in Love" during the final episode of Total Request Live. "Single Ladies" was also performed by Beyoncé on November 18, 2008, on 106 & Park, on November 25, 2008, on The Ellen DeGeneres Show and on November 26, 2008, at Rockefeller Plaza on The Today Show. She delivered a performance of "Single Ladies" with two male dancers on The Tyra Banks Show on January 9, 2009.
In July 2009, Beyoncé gave a concert at the Staples Center in Los Angeles, where American actor Tom Cruise danced with her and her dancers as they performed the dance routine of "Single Ladies". At the MTV Video Music Awards on September 13, 2009, Beyoncé performed "Single Ladies" backed by "an army of single ladies" on stage. In a poll conducted by Billboard magazine, the performance was ranked as the seventh best in the history of the MTV Video Music Awards. A critic wrote in the magazine: "The world gave a collective 'whoa' when Beyonce unleashed her 'Single Ladies' video, but to see those dance moves come to life at the 2009 VMAs was beyond eye-popping." Erika Ramirez of the same publication placed the performance at number two on her list of Beyoncé's five biggest TV performances. "Single Ladies" was included on the set lists of Beyoncé's I Am... Yours concerts and her I Am... World Tour. The song was subsequently included on Beyoncé's live albums I Am... Yours: An Intimate Performance at Wynn Las Vegas (2009) and I Am... World Tour (2010). "Single Ladies" was later performed by Beyoncé in a pink fringe dress at a concert at Palais Nikaïa in Nice, France, on June 20, 2011, and at the Glastonbury Festival on June 26, 2011. On July 1, 2011, Beyoncé gave a free concert on Good Morning America as part of its Summer Concert Series, which included "Single Ladies". Backed by her all-female band and her backing singers The Mamas, Beyoncé performed "Single Ladies" in front of 3,500 people during the 4 Intimate Nights with Beyoncé revue at the Roseland Ballroom in New York in August 2011. In May 2012, Beyoncé performed the song during her Revel Presents: Beyoncé Live revue at Revel Atlantic City, a hotel. Ben Ratliff of The New York Times described "Single Ladies" as part of the "almost continuous high point" of the concert. Rebecca Thomas of MTV News wrote that Beyoncé's dancing during "Single Ladies" reflected the female empowerment theme of the song. On February 3, 2013, Beyoncé performed the song along with her former bandmates from Destiny's Child during the Super Bowl XLVII halftime show. The song was added to the set list of her Mrs. Carter Show World Tour (2013). Beyoncé performed "Single Ladies" at The Sound of Change Live concert on June 1, 2013, at Twickenham Stadium, London, as part of the Chime for Change movement. 
Trish Crawford from the Toronto Star observed that it has appealed to all age groups and genders, in contrast with the short-lived dance craze inspired by Soulja Boy two years before, which she considered "mainly a male hip-hop dance". Crawford mentioned, "Toddlers have tackled [the 'Single Ladies' dance]. [So have] recreation centre dance classes, sorority sisters in their dorm rooms, suburban teenagers in their basements and high school cheerleaders." In February 2009, Columbia Records announced the launch of a "Single Ladies" Dance Video Contest. Fans aged eighteen and older were invited to adhere precisely to the dance routine performed by Beyoncé and her two dancers in the original production. The winning video was included in her live album, I Am... World Tour. ### Parodies and homages "Single Ladies" was first parodied in the November 15, 2008, episode of SNL, which featured Beyoncé. She was initially reluctant to participate in the segment but agreed to do so after a visit from Timberlake in her dressing room. Beyoncé's choreographer, Frank Gatson Jr., expressed mixed emotions at the result, saying: "I was upset because I know that Justin's a great dancer and if he learned the choreography, he could do it really well... If they're making parodies [of our work] just like they make parodies of politicians and presidents, that means it must be big time. So in that respect, I have to take my hat off to them for doing it." Later, Joe Jonas of the pop rock band Jonas Brothers posted a video on their YouTube account in which he imitated the dance in a black leotard and heels. In London, one hundred dancers wearing leotards similar to the one worn by Beyoncé performed the choreography on April 20, 2009, to promote Trident Unwrapped gum. The music video inspired a legion of amateur imitators to post videos of themselves performing the choreography on YouTube. One of the most viewed viral videos is that of Shane Mercado, who appeared on The Bonnie Hunt Show in bikini bottoms to perform the choreography. His subsequent meeting with Beyoncé became a media event. Beyoncé has acknowledged the popularity of the videos on YouTube; during her concert tour, excerpts from many of the YouTube videos were played in the background while Beyoncé was performing the song. Cubby, an on-air personality for Charlotte, North Carolina's radio station 96.1 The Beat, based his parody on the SNL one. His video led to a meet and greet with Beyoncé and, eventually, an opportunity to join her on stage at a stop in Atlanta during her I Am... World Tour. Many videos featuring babies of different ages imitating the dance choreography of "Single Ladies" have been uploaded to YouTube. A video showing Cory Elliott, a baby boy from New Zealand, performing the dance while watching Beyoncé on television gained significant coverage from several media outlets. Time magazine's critic Dan Fletcher ranked it as the fourth best viral video of 2009 and wrote, "Young children love songs with good rhythm and repetition, and 'Single Ladies' certainly has both." However, when a video of seven-year-old girls performing choreography from "Single Ladies" at a dance competition in Los Angeles went viral on YouTube, it created a controversy and sparked outrage from many viewers, who felt the girls were sexualized by the suggestive dance moves. In a video filmed by singer John Legend, US President Barack Obama appears with his wife Michelle performing part of the "Single Ladies" routine. 
He also briefly performed the hand-twirl move from the song's video at the Obama Inaugural Celebration. This video prompted an Obama look-alike, Iman Crosson, to do his own version of the "Single Ladies" choreography. Several other well-known personalities, including American environmentalist and politician Joe Nation and American actor Tom Hanks, have performed the dance. In the music video for "Dancin on Me" by DJ Webstar and Jim Jones, three women are featured in the background imitating the "Single Ladies" dance. Jenna Ushkowitz, Chris Colfer and Heather Morris did the "Single Ladies" dance as part of the Glee Live tour in June 2011. The music minister at Geyer Springs First Baptist Church in Little Rock, Arkansas, thought it would be "an excellent idea" to attract interest in the church choir by using a remix of "Single Ladies" and having choir members dance to it. In the music video he made, the choir members sing, "All the singing ladies, all the singing fellas ... If you like the choir, then won't you come and sing in it." Cyndi Wilkerson, Music Ministry Assistant at Geyer Springs First Baptist Church, uploaded the video to YouTube on August 29, 2011. In April 2013, YouTube phenomenon Psy did the dance routine during a concert in Seoul while wearing a red leotard and red boots. A television advert for the South African cellular service Vodacom used the song as a backdrop to an actor humorously mimicking Beyoncé's dancing; the advert quickly went viral and spawned several different variations. ### Usage in media "Single Ladies" has been used in various media including television shows, commercials and books. In the Best of 2009 issue of People magazine, Khloe, Kim, and Kourtney Kardashian were ranked at number nine on the magazine's list of "25 Most Intriguing People"; the photograph accompanying the article showed the three women in leotards mimicking the look from the "Single Ladies" video. The song has been included in many television shows, including CSI: Miami, Cougar Town, and two episodes of Glee. In the United Kingdom, the video for "Single Ladies" was used for a 2009 television commercial for the new Doner kebab-flavored Pot Noodle. In other media, issue 33 of the comic book series The Brave and the Bold features a scene in which Wonder Woman, Zatanna, and Barbara Gordon sing a karaoke version of the song while at a club. A mash-up video combining "Single Ladies" with the theme from The Andy Griffith Show circulated on the Internet in early 2010; it had been produced by Party Ben at the end of 2008. In July 2010, the line "Put a Ring on It" was used by the Joint United Nations Programme on HIV/AIDS as the tagline for a female condom public awareness campaign in the US. The song appeared in Alvin and the Chipmunks: The Squeakquel a year after its release, sung by the Chipettes. It also appeared in the Marvel Studios film Doctor Strange and the 2016 rhythm game Just Dance 2017. ### Cover and remix versions Singers and bands of various genres have covered the song in their own style. British band Marmaduke Duke performed a cover version in April 2009 on BBC Radio 1's Live Lounge show. In October 2009, it was released on Radio 1's Live Lounge – Volume 4, a compilation of Live Lounge recordings. Australian singer Stan Walker sang a jazzier version of the song on the seventh series of Australian Idol in October 2009. 
The same year, elementary school group PS22 chorus covered "Single Ladies" and "Halo" (2009) during Billboard's annual Women In Music luncheon held at The Pierre in New York City. In her short-lived Broadway revue "All About Me" in March 2010, Dame Edna Everage performed a version of the song with backup dancers Gregory Butler and Jon-Paul Mateo. It was also covered by Jeff Tweedy and British singer-songwriter Alan Pownall. According to Simon Vozick-Levinson of Entertainment Weekly, Tweedy sang only a few bars; he gave "Single Ladies" an acoustic feel and recited the rest of the song's lyrics. He performed the hand movements that Beyoncé and her dancers do in the song's video. Pomplamoose, an American indie music duo consisting of Jack Conte and Nataly Dawn, recorded a cover of "Single Ladies" on video, which makes use of split screens to show Dawn on vocals and Conte playing the instruments. Inspired by the avant-garde Dogme 95 movement in cinema, Conte began to record songs on video as a quick way to create "organic and raw" music. They chose "Single Ladies" as they believed that it would help them grow their audience. During a concert at New York's Madison Square Garden, Prince performed a mash-up of his 1984 songs "Pop Life" and "I Would Die 4 U", incorporating a sample of "Single Ladies". During her tour in Melbourne, Australia, on August 13, 2010, Katy Perry performed "Single Ladies" and attempted to emulate the choreography. British composer of classical music Mark-Anthony Turnage composed a setting of the song which he titled "Hammered Out". Describing it as his "most R&B work to date", Turnage told Tim Rutherford-Johnson of The Guardian that he was motivated to put the "Single Ladies" reference in his work by his young son, a fan of the song. The piece premiered at the BBC Proms on August 27, 2010. Sara Bareilles covered the song as part of Billboard's "Mashup Mondays" and performed it as part of her set list on the 2010 Lilith Fair Tour. As stated by a critic writing for the magazine, Bareilles put "a piano-pop" twist on "Single Ladies" and turned it "into a slow, jazzy track, complete with creeping bassline and vocal harmonies". American rock band A Rocket to the Moon covered "Single Ladies" and placed it on their EP, The Rainy Day Sessions, which was released in October 2010. On September 26, 2010, Kharizma sang their version of the song on the second series of The X Factor Australia, and on May 31, 2011, Matthew Raymond-Barker sang the song live on the seventh prime of the second series of the X Factor France. During the finale of the tenth season of American Idol on May 25, 2011, the lady contestants joined onstage to perform "Single Ladies" and attempted the dance moves from the song's video. The film Sex and the City 2 features a performance of the song by American singer and actress Liza Minnelli. On October 18, 2011, Young Men Society sang "Single Ladies" on the third series of The X Factor Australia, and on June 30, 2014, Holly Tapp sang the song on the third series of The Voice Australia. In 2008, female rapper Nicki Minaj released an unofficial remix of "Single Ladies" with two rap verses. ## Formats and track listings - Australia, Germany and New Zealand CD single and download 1. "If I Were a Boy" – 4:08 2. "Single Ladies (Put a Ring on It)" – 3:13 - US dance remixes 1. "Single Ladies (Put a Ring on It)" (Dave Audé Club Remix) – 8:20 2. "Single Ladies (Put a Ring on It)" (Karmatronic Club Remix) – 5:54 3. 
"Single Ladies (Put a Ring on It)" (RedTop Club Remix) – 6:52 4. "Single Ladies (Put a Ring on It)" (DJ Escape & Tony Coluccio Club Remix) – 6:54 5. "Single Ladies (Put a Ring on It)" (Lost Daze Dating Service Club Remix) – 6:47 6. "Single Ladies (Put a Ring on It)" (Craig C's Master Blaster Club Remix) – 8:19 - UK CD single 1. "Single Ladies (Put a Ring on It)" – 3:13 2. "Single Ladies (Put a Ring on It)" (RedTop Remix Radio Edit) – 3:33 - UK and Europe remixes download 1. "Single Ladies (Put a Ring on It)" (Redtop Remix – Dance Remix) – 3:33 2. "Single Ladies (Put a Ring on It)" (My Digital Enemy Remix) – 6:38 3. "Single Ladies (Put a Ring on It)" (Olli Collins & Fred Portelli Remix) – 7:40 4. "Single Ladies (Put a Ring on It)" (Dave Audé Remix Club Version) – 8:20 5. "Single Ladies (Put a Ring on It)" (The Japanese Popstars Remix) – 7:46 6. "Single Ladies (Put a Ring on It)" – 3:13 ## Credits and personnel Credits adapted from I Am... Sasha Fierce album liner notes. - Jim Caruana – vocal tracks recorded by - Thaddis "Kuk" Harrell – music recorded by, songwriter - Jaycen Joshua – audio mixer - Beyoncé – vocals performed by, vocal producer, music producer, songwriter - Dave Pensado – audio mixer - Terius "The-Dream" Nash – music producer, songwriter - Christopher "Tricky" Stewart –music producer, songwriter - Brian "B-LUV" Thomas – music recorded by - Randy Urbanski – audio mixing assistant - Andrew Wuepper – audio mixing assistant ## Charts ### Weekly charts ### Year-end charts ### Decade-end charts ## Certifications ## Release history ## See also - List of best-selling singles in the United States - List of number-one R&B singles of 2008 (U.S.) - List of number-one R&B singles of 2009 (U.S.) - List of singles which have spent the most weeks on the UK Singles Chart - List of best-selling singles in Australia
37,742,992
Why Marx Was Right
1,170,124,583
2011 non-fiction book by Terry Eagleton
[ "2011 non-fiction books", "Books about Karl Marx", "Books about Marxism", "English-language books", "Marxist books", "Yale University Press books" ]
Why Marx Was Right is a 2011 non-fiction book by the British academic Terry Eagleton about the 19th-century philosopher Karl Marx and the schools of thought, collectively known as Marxism, that arose from his work. Written for laypeople, Why Marx Was Right outlines ten objections to Marxism that they may hold and aims to refute each one in turn. These include arguments that Marxism is irrelevant owing to changing social classes in the modern world, that it is deterministic and utopian, and that Marxists oppose all reforms and believe in an authoritarian state. In his counterarguments, Eagleton explains how class struggle is central to Marxism, and that history is seen as a progression of modes of production, like feudalism and capitalism, involving the materials, technology and social relations required to produce goods and services within the society. Under a capitalist economy, the working class, known as the proletariat, are those lacking significant autonomy over their labour conditions, and have no control over the means of production. Eagleton describes how revolutions could lead to a new mode of production—socialism—in which the working class have control, and an eventual communist society could make the state obsolete. He explores the failures of the Soviet Union and other Marxist–Leninist countries. As an author of both specialist and general books in the areas of literary theory, Marxism and Catholicism, Eagleton saw the historical moment as appropriate for Why Marx Was Right; critics said that the book was part of a resurgence in Marxist thought after the 2007–2008 financial crisis. It was first published in 2011 and reprinted in 2018 to mark 200 years since Marx's birth. In Canada, it entered Maclean's bestseller list for two weeks in 2011. Critics disagreed on whether the book succeeds in showing the relevance of Marxism. Its prose style garnered praise as witty and accessible from some reviewers, as well as criticism by others as lacking humour and using assertions rather than arguments. Experts, disagreeing about whether Eagleton's chosen objections were straw-men, suggested that the book would have benefited from coverage of the labour theory of value, the 2007–2008 financial crisis and modern Marxist thought. However, Eagleton's commentary on historical materialism was praised. Why Marx Was Right was largely criticised for its defence of the pre-Stalinist Soviet Union and other Marxist states. Some reviewers also believed that it contains economic mistakes and misrepresents Marx's views on human nature, reform and other subjects. ## Background Terry Eagleton is an academic in the fields of literary theory, Marxism and Catholicism. He turned to leftism while an undergraduate at the University of Cambridge in the 1960s, finding himself at the intersection of the New Left and Catholic progressivism in the Second Vatican Council reforms. Eagleton joined the UK branch of the International Socialists and then the Workers' Socialist League. His book Criticism and Ideology (1976) showcased a Marxist approach to literary theory. He rose to prominence with the text Literary Theory: An Introduction (1983), his best-known work. Alan Jacobs of First Things said that his style of writing "wittily and even elegantly" was unusual in literary theory at the time. After professorships in English literature at the University of Oxford (1992–2001) and cultural theory at the University of Manchester (2001–2008), Eagleton took visiting appointments at universities worldwide. 
In the book, Eagleton uses a number of terms from Marxist philosophy, which arose from the ideas of the 19th-century German philosopher Karl Marx. In describing a society's use of labour, he employs the phrase means of production to describe the raw materials and tools needed to produce goods and services; the productive forces refer collectively to the means of production, human knowledge, and division of labour within the society. A society also has relations of production: roles like wage labour, where a person sells their labour to a boss in exchange for money. The productive forces and relations of production—together called the mode of production—are seen by Marx as describing the fundamental structure of a society; example modes of production include capitalism and feudalism. In Marxian class theory, a person belongs to a specific social class (e.g. working class) based on the role they play in the mode of production. In capitalism, for example, the bourgeoisie are a class of property owners who control the means of production. Marx identified a pattern of one social class developing the productive forces until the relations of production are a barrier to further advancement. Class struggle—a proposed fundamental tension between different classes—is central to Marxists' understanding of how a new mode of production is established. Because he viewed societal development as rooted in physical conditions rather than abstract ideas, Marx was a historical materialist, rather than an idealist. Base and superstructure is a materialist model for describing society, wherein the mode of production ("base") is seen to shape the other aspects of the community: art, culture, science, etc. ("superstructure"). ## Synopsis Eagleton's chapters outline ten theoretical objections to Marxism, each followed by his counterargument. He begins with the objection that social class plays a lesser role in post-industrial societies, making Marxian class theory inapplicable. Eagleton's counterargument is that Marx anticipated phenomena such as globalisation, and that societal changes since Marx's era have not fundamentally changed the nature of capitalism. Eagleton finds that suppression of the labour movement was the predominant cause of declining popular support for Marxism from the mid-1970s onwards. The second objection is that Marxist governance results in mass murder, infringements on freedom, and other hardships. In the chapter, Eagleton describes approaches to socialism that differ from those of failed communist states and compares communist failures to capitalist ones. Regarding Marx, Vladimir Lenin and Leon Trotsky, Eagleton outlines conditions he believes are required for successful socialism: an educated population, existing prosperity, and international support after an initial revolution. He says that socialism with inadequate material resources results in regimes like Stalinist Russia, which was criticised by Trotskyist Marxists and libertarian socialists. An alternative mode of production is market socialism, in which the means of production would be collectively owned, but democratic worker cooperatives would compete in marketplace conditions. Third, Eagleton argues against the position that Marxism requires belief that societal change is predetermined. Marx's view was that societies can develop in different directions—for instance, capitalism could stagnate, or lead to socialism or fascism. Thus it is not deterministic. 
Fourth is the claim that Marxism is utopian, erasing human nature to depict a perfect world. Marx, however, was sceptical of utopian socialists and did not aim to describe an ideal future. He was a materialist who eschewed idealism, in opposition to liberal, Enlightenment thought. Marx likely thought that human nature exists, according to Eagleton, who writes that socialism would not require altruism from each citizen, only a structural change to social institutions. Marx, an individualist, viewed uniformity as a feature of capitalism, and communism as a realisation of individual freedom. He rejected a bourgeois view of equality as too abstract and obscuring capitalism's inherent inequalities. The fifth chapter analyses whether Marxism is a form of economic determinism, presenting all of life through a narrow framework of economics. Though Marxists view history as the study of progressing modes of production, so did Enlightenment thinkers such as Adam Smith. Marx's base and superstructure model is not deterministic, according to Eagleton, as the superstructure is not fully determined by the base, and can also cause the base to change. In Marxism, the class struggle may determine the progression of society, but a class is not just an economic status: it is associated with traditions, values and culture. Sixth is the assertion that Marxist materialism rejects spirituality and sees consciousness as merely a physical phenomenon. Though past materialists saw humans as just matter, Marx's form of materialism started with the fundamental concept that people are active beings with agency. In Eagleton's reading of Marx, the human mind is not something different from the human body, and spirituality and consciousness are matters of bodily experiences. Eagleton lists structures, like American born again churches, that can be part of both base and superstructure, and facets of life, like love, that cannot be categorised as either. The seventh chapter is framed by an anti-Marxist argument that social mobility is increasing and social classes have changed since Marx's day, rendering the ideology outdated; however, Eagleton sees modern capitalism as disguising class inequalities that still exist. In Marxism, class is about a person's role in production rather than their outlook. The proletariat (working class) includes everybody who has little control over their labour, which they are compelled to sell to advance a boss's capital. Eagleton argues that Marx's ideas are resilient to changes since his lifetime. In Marx's era, female domestic servants were the largest group of proletarians, but Marx identified a growing middle class of administrators and managers. White-collar workers can be working class, and culture, ethnicity, identity and sexuality are linked to social class. The eighth objection is that Marxists advocate a violent revolution by a minority of people who will instate a new society, making them anti-democracy and anti-reform. Eagleton says that some revolutions such as the October Revolution were less violent than, for instance, the American civil rights movement reforms; he sees revolution as a lengthy process with long-term causes. Though conceding that Marxism has led to much bloodshed, Eagleton argues that capitalism has too, and few modern Marxists defend Joseph Stalin or Mao Zedong. Socialist revolution would require the working class to overthrow the bourgeoisie—a democratic action, as most people are working class. 
Though some communists, deemed "ultra-leftists", reject all parliamentary democracy and reform attempts, others use these to work towards revolution. Marx participated in reformist groups like trade unions and may have believed that socialism could be achieved peacefully in some countries. Ninth is the argument that Marxism will install an authoritarian state led by a dictator. Though Marx spoke of a "dictatorship of the proletariat", in his era dictatorship meant "rule by the majority". Rather than authoritarianism, he wanted a withering away of the state—a communist society would have no violent state to defend the status quo, though central administrative bodies would remain. Contemporary Marxists do not wish to lead an authoritarian state as they believe that power held by private financial institutions would make socialism via state control impossible. The last idea is that recent radical movements—including environmentalism, feminism and gay liberation—are independent of Marxism and make it defunct. Eagleton aims to show that Marxism had a role in each of these movements. He writes that some Marxist culture is patriarchal (i.e. power is held by men), but Marxism and feminism have cross-pollinated as Marxist feminism. African nationalism incorporated Marxist ideas and Bolsheviks supported self-determination, despite Marx speaking in favour of imperialism in some cases. On the topic of naturalism, Eagleton describes Marx's views on the interplay between humans and nature: human history is part of natural history, but under capitalism, nature is seen only as a resource. ## Publication history The book was published in hardback on 17 June 2011 and in paperback in 2012. A second edition with a new preface marked Marx's bicentenary in 2018, accompanied by an audiobook read by Roger Clark. Commonweal published an extract from the original book. Throughout his career, Eagleton has aimed to alternate between specialist books and books for the general reader; Why Marx Was Right is in the latter category. He said that the historical moment was right for the book. Eagleton saw the September 11 attacks and the 2007–2008 financial crisis as crises that made capitalism more readily noticeable in daily life. While Marxism had been unfashionable due to the failures of the Soviet Union and modern China, these crises caused a resurgence in Marxist thought, leading to books like G. A. Cohen's Why Not Socialism? (2009) and Alain Badiou's The Communist Hypothesis (2010). Eagleton was motivated by "a feeling of the continued relevance of Marx in a world in which he seems to be so obsolete". Eagleton was interested in the rhetorical conceit of defending Marxism against individual points of layperson criticism, that Marxism is "irrelevant or offensive or authoritarian or backwards-looking", and believed Marx's views had been "extraordinarily caricatured". In a talk, Eagleton recounted that a reader sent a letter asking why the book was not called Why Marx Is Right, in the present tense, to which he replied "he's dead, actually". Eagleton, who is from an Irish Catholic family, considered the book's sixth chapter to be among its most important. In arguing that spirituality is connected to the material world as a way to discuss "human relationships, historical realities, justice" and other topics, he drew connections between Catholic theology and Marx's base and superstructure construct. ## Reception The book spent two weeks on the Canadian Maclean's top-ten non-fiction bestseller list in June 2011. 
In 2016, the book was a non-fiction bestseller in Calgary. ### Critical reception Social Alternative, Publishers Weekly, Science & Society and Weekend Australian each affirmed that the book proved Marxism's contemporary value. Kavish Chetty (writing in both Cape Argus and Daily News) saw it as "still a necessary volume in the reinvigorated quest to rescue Marx", though he also had criticisms of the book. Economic and Political Weekly believed that Eagleton had succeeded in correcting "vulgar misconceptions", as did Social Scientist. Dissenting critics included Actualidad Económica, The Guardian's Tristram Hunt and The American Conservative, the last of which saw the book as failing to clearly explain Marx's beliefs or why they were compelling. Choice Reviews recommended the book as an introductory text and Estudios de Asia y Africa pointed to the book as providing a useful framework for considering the future of the 2011 Egyptian revolution. #### Writing style Science & Society, Publishers Weekly and The Irish Times all praised the book for its wit. Times Higher Education enjoyed Eagleton's "infallible dash, ... unnerving hyperbole and explosive jokes", while The Age likened Eagleton's "verbal exuberance" to that of George Bernard Shaw. Economic and Political Weekly reviewed the writing as both "great fun", but potentially confusing for those unfamiliar with the author. In contrast, critics including The Australian, Libertarian Papers and Chetty criticised Eagleton's humour as lacking; Hunt felt that the creativity and bravado of the Marxist tradition was absent. Social Scientist and The Irish Times considered its prose accessible, brimming with what Sunday Herald called Eagleton's "characteristic brio", making it as "readable and provocative" as his other works. However, The Christian Century found Eagleton's flourishes to sometimes distract from his core argument. Financial Times similarly judged Eagleton's cultural allusions to be "trying too hard to reach the general reader". Actualidad Económica said the book's prose was inferior to that of Marx himself. Commentary, First Things and Times Higher Education criticised what they considered weak argumentation throughout the book. The latter described Eagleton as using "more assertion than argument". The book is an apologia of Marx, according to The New Republic, despite Eagleton's protestations to the contrary. Slightly more positively, Financial Times identified "delicious imaginative insights" among "baffling" analogies. The American Conservative found Eagleton's theoretical analysis better than his historical analysis, but criticised his "arguments [as] often elementary and sometimes glib", a finding shared by Symploke, who saw Eagleton's "forceful" positions to be unoriginal. A review in The Christian Century concluded that Eagleton's arguments were convincing. #### Subject matter Critics highlighted topics omitted or insufficiently covered, such as Marxist economics (e.g. the labour theory of value), the 2007–2008 financial crisis, and post-Marxism. Two critics viewed Eagleton's definitions of terminology and supporting statistics as insufficient. Marx's theory of surplus-value, which Eagleton presents in the book, was seen as discredited by Libertarian Papers and Actualidad Económica. The Times Literary Supplement questioned why Eagleton's philosophical anthropology drew from early Marx. 
The American Conservative and The Guardian writer Owen Hatherley believed that the ten objections were not straw men, while Libertarian Papers and Financial Times felt they were arbitrarily chosen. The Australian suggested that Eagleton should have engaged directly with a "combative opponent". Reviewers criticised Eagleton's defence of the pre-Stalinist Soviet Union and other communist countries. The Irish Times and Weekend Australian found this to be the book's weakest part, believing that the states should not be praised. Science & Society found his brief mentions of China to be "woefully inadequate", thinking that a stronger defence could be made, and Commentary highlighted Eagleton's praise for childcare in East Germany as one of several "bizarre exculpations" of Marxist states. In rebuttal to Eagleton, who said that Eastern Europe and Maoist China transitioned away from feudalism with communism, The Irish Times commented that U.S. administrations of East Asia accomplished the same "at far less cost", as did the U.K. with Land Acts in Ireland. Hatherley was a dissenting critic, finding Eagleton "convincing" on the topic of the Soviet Union, while Rethinking Marxism criticised Eagleton from the left as "trapped within the confines of the market" for presenting market socialism as the alternative to Stalinism. Several reviews took issue with Eagleton's economic claims and interpretations of Marx's views. Both Hunt and Actualidad Económica criticised Eagleton's assertion that a third of British children live in poverty. Libertarian Papers critiqued that Eagleton conflated state interventionism with laissez-faire economics and The Irish Times said that he violated a basic rule of economics by suggesting that both price and quantity of goods can be fixed. Reviewers argued that Marx and Engels, in contrast with Eagleton's portrayal, saw communism as entailing a change in human nature. Other reviewers thought Eagleton exaggerated Marx's limited support or tolerance for reform, environmentalism and religion. Reviewers highlighted Eagleton's sections on materialism as particularly strong. Social Scientist enjoyed this content, while Hunt praised the book's coverage of democracy, free will and modernity. The Times Literary Supplement wrote that chapters three to six had a potential utility to historians, simple language and a vision of Marxism that matched Eagleton's other writings, which somewhat redeemed the rest of the book. The Irish Times described the sixth chapter, on materialism, as the book's "most enlightening". Dissenting, Times Higher Education thought that Eagleton gives too much weight to materialism, a topic that remains interesting only to "theological Marxists" since the writings of Ludwig Wittgenstein. ## See also - What Marx Really Meant, a 1934 book that similarly modernized Marxism for its own era - History and Class Consciousness, a 1923 book on Marx, Hegel and Bolshevism - Marx's critique of political economy, the basis of Marx's economic thought
5,689,803
Banksia oblongifolia
1,126,893,085
Species of plant
[ "Banksia taxa by scientific name", "Flora of New South Wales", "Flora of Queensland", "Plants described in 1800", "Taxa named by Antonio José Cavanilles" ]
Banksia oblongifolia, commonly known as the fern-leaved, dwarf or rusty banksia, is a species in the plant genus Banksia. Found along the eastern coast of Australia from Wollongong, New South Wales in the south to Rockhampton, Queensland in the north, it generally grows in sandy soils in heath, open forest or swamp margins and wet areas. A many-stemmed shrub up to 3 m (9.8 ft) high, it has leathery serrated leaves and rusty-coloured new growth. The yellow flower spikes, known as inflorescences, most commonly appear in autumn and early winter. Up to 80 follicles, or seed pods, develop on the spikes after flowering. Banksia oblongifolia resprouts from its woody lignotuber after bushfires, and the seed pods open and release seed when burnt, the seed germinating and growing on burnt ground. Some plants grow between fires from seed shed spontaneously. Spanish botanist Antonio José Cavanilles described B. oblongifolia in 1800, though it was known as Banksia aspleniifolia in New South Wales for many years. However, the latter name, originally coined by Richard Anthony Salisbury, proved invalid, and Banksia oblongifolia has been universally adopted as the correct scientific name since 1981. Two varieties were recognised in 1987, but these have not been generally accepted. A wide array of mammals, birds, and invertebrates visit the inflorescences. Though easily grown as a garden plant, it is not commonly seen in horticulture. ## Description Banksia oblongifolia is a shrub that can reach 3 m (9.8 ft) high, though is generally less than 2 m (6.6 ft) high, with several stems growing out of a woody base known as a lignotuber. The smooth bark is marked with horizontal lenticels, and is reddish-brown fading to greyish-brown with age. New leaves and branchlets are covered with a rusty fur. The leaves lose their fur and become smooth with maturity, and are alternately arranged along the stem. Measuring 5–11 cm (2.0–4.3 in) in length and 1.5–2 cm (0.59–0.79 in) in width, the leathery green leaves are oblong to obovate (egg-shaped) or truncate with a recessed midvein and mildly recurved margins, which are entire at the base and serrate towards the ends of the leaves. The sinuses (spaces between the teeth) are U-shaped and teeth are 1–2 mm long. The leaf underside is whitish with a reticulated vein pattern and a raised central midrib. The leaves sit on 2–5 mm long petioles. Flowering has been recorded between January and October, with a peak in autumn and early winter (April to June). The inflorescences, or flower spikes, arise from the end of 1 to 5 year old branchlets, and often have a whorl of branchlets arising from the node or base. Measuring 5–15 cm (2.0–5.9 in) high and 4 cm (1.6 in) wide, the yellow spikes often have blue-grey tinged limbs in bud, though occasionally pinkish, mauve or mauve-blue limbs are seen. Opening to a pale yellow after anthesis, the spikes lose their flowers with age and swell to up to 17.5 cm (6.9 in) high and 4 cm (1.6 in) wide, with up to 80 follicles. Covered with fine fur but becoming smooth with age, the oval-shaped follicles measure 1–1.8 cm (0.39–0.71 in) long by 0.2–0.7 cm high (0.1–0.3 in) and 0.3–0.7 cm (0.12–0.28 in) wide. The bare swollen spike, now known as an infructescence, is patterned with short spiky persistent bracts on its surface where follicles have not developed. Each follicle contains one or two obovate dark grey-brown to black seeds sandwiching a woody separator. 
Measuring 1.2–1.8 cm (0.47–0.71 in) long, they are made up of an oblong to semi-elliptic smooth or slightly ridged seed body, 0.7–1.1 cm (0.28–0.43 in) long by 0.3–0.7 cm (0.12–0.28 in) wide. The woody separator is the same shape as the seed, with an impression where the seed body lies next to it. Seedlings have bright obovate green cotyledons 1.2–1.5 cm (0.47–0.59 in) long and 0.5–0.7 cm (0.20–0.28 in) wide, which sit on a stalk, or 1 mm diameter finely hairy seedling stem, known as the hypocotyl, which is less than 1 cm high. The first seedling leaves to emerge are paired (oppositely arranged) and lanceolate with fine-toothed margins, measuring 2.5–3 cm long and 0.4–0.5 cm wide. Subsequent leaves are more oblanceolate, elliptic (oval-shaped) or linear. Young plants develop a lignotuber in their first year. Banksia oblongifolia can be distinguished from B. robur, which it often co-occurs with, by its smaller leaves and bare fruiting spikes. B. robur has more metallic green flower spikes, and often grows in wetter areas within the same region. B. plagiocarpa has longer leaves with more coarsely serrated margins, and its flower spikes are blue-grey in bud, and later bear wedge-shaped follicles. In the Sydney Basin, B. paludosa also bears a superficial resemblance to B. oblongifolia, but its leaves are more prominently spathulate (spoon-shaped) and tend to point up rather than down. The leaf undersides are white and lack the prominent midrib of B. oblongifolia, the new growth is bare and lacks the rusty fur, and the aged flower parts remain on the old spikes. ## Taxonomy First collected by Luis Née between March and April 1793, the fern-leaved banksia was described by Antonio José Cavanilles in 1800 as two separate species from two collections, first as Banksia oblongifolia from the vicinity of Port Jackson (Sydney), and then as Banksia salicifolia from around Botany Bay. Derived from the Latin words oblongus "oblong", and folium "leaf", the species name refers to the shape of the leaves. Richard Anthony Salisbury had published the name Banksia aspleniifolia in 1796 based on leaves of cultivated material. Robert Brown recorded 31 species of Banksia in his 1810 work Prodromus Florae Novae Hollandiae et Insulae Van Diemen, and used the epithet oblongifolia in his taxonomic arrangement, placing the taxon in the subgenus Banksia verae, the "True Banksias", because the inflorescence is a typical Banksia flower spike. He recognised B. salicifolia as the same species at this point, but was unsure whether Salisbury's B. aspleniifolia belonged under the same name. By the time Carl Meissner published his 1856 arrangement of the genus, there were 58 described Banksia species. Meissner divided Brown's Banksia verae, which had been renamed Eubanksia by Stephan Endlicher in 1847, into four series based on leaf properties. He followed Brown in using the name B. oblongifolia, and placed it in the series Salicinae. In 1870, George Bentham published a thorough revision of Banksia in his landmark publication Flora Australiensis. In Bentham's arrangement, the number of recognised Banksia species was reduced from 60 to 46. He declared B. oblongifolia referrable to, and a synonym of, B. integrifolia. Bentham defined four sections based on leaf, style and pollen-presenter characters. B. integrifolia was placed in section Eubanksia. Botanists in the 20th century recognised B. oblongifolia as a species in its own right, but disagreed on the name. 
Those in Queensland felt Salisbury's name was invalid and used Banksia oblongifolia, while New South Wales authorities used Banksia aspleniifolia as it was the oldest published name for the species. Botanist and banksia authority Alex George ruled that oblongifolia was the correct name in his 1981 revision of the genus. After reviewing Salisbury's original species description, which is of the leaves alone, he concluded that it does not diagnose the species to the exclusion of others and is hence not a validly published name—the description could have applied to juvenile leaves of B. paludosa, B. integrifolia or even B. marginata. ### Placement within Banksia The current taxonomic arrangement of the genus Banksia is based on botanist Alex George's 1999 monograph for the Flora of Australia book series. In this arrangement, B. oblongifolia is placed in Banksia subgenus Banksia, because its inflorescences take the form of Banksia's characteristic flower spikes, section Banksia because of its straight styles, and series Salicinae because its inflorescences are cylindrical. In a morphological cladistic analysis published in 1994, Kevin Thiele placed it in the newly described subseries Acclives along with B. plagiocarpa, B. robur and B. dentata within the series Salicinae. However, this subgrouping of the Salicinae was not supported by George. B. oblongifolia's placement within Banksia may be summarised as follows: Genus Banksia : Subgenus Isostylis : Subgenus Banksia : : Section Oncostylis : : Section Coccinea : : Section Banksia : : : Series Grandes : : : Series Banksia : : : Series Crocinae : : : Series Prostratae : : : Series Cyrtostylis : : : Series Tetragonae : : : Series Bauerinae : : : Series Quercinae : : : Series Salicinae : : : : B. dentata – B. aquilonia – B. integrifolia – B. plagiocarpa – B. oblongifolia – B. robur – B. conferta – B. paludosa – B. marginata – B. canei – B. saxicola Since 1998, American botanist Austin Mast and co-authors have been publishing results of ongoing cladistic analyses of DNA sequence data for the subtribe Banksiinae, which then comprised genera Banksia and Dryandra. Their analyses suggest a phylogeny that differs greatly from George's taxonomic arrangement. Banksia oblongifolia resolves as the closest relative, or "sister", to B. robur, with B. plagiocarpa as next closest relative. In 2007, Mast and Thiele rearranged the genus Banksia by merging Dryandra into it, and published B. subg. Spathulatae for the taxa having spoon-shaped cotyledons; thus B. subg. Banksia was redefined as encompassing taxa lacking spoon-shaped cotyledons. They foreshadowed publishing a full arrangement once DNA sampling of Dryandra was complete; in the meantime, if Mast and Thiele's nomenclatural changes are taken as an interim arrangement, B. oblongifolia is placed in B. subg. Spathulatae. ### Variation George noted that Banksia oblongifolia showed considerable variation in habit, and in 1987 Conran and Clifford separated the taxon into two subspecies. In examining populations in southern Queensland, they reported that the two forms were distinct in growth habit and habitat, and that they did not find any intermediate forms. New South Wales botanists Joseph Maiden and Julius Henry Camfield had collected this taller form of B. oblongifolia in Kogarah in 1898, and given it the name Banksia latifolia variety minor—B. latifolia being a published name by which B. robur was known—before Maiden and Ernst Betche renamed it Banksia robur variety minor. 
This name (confusingly) thus became the name for the taller variety. They defined variety oblongifolia as a multistemmed shrub 0.5–1.3 m (20–51 in) high, with leaves 3–11 cm (1.2–4.3 in) long and 1–2.5 cm (0.39–0.98 in) wide, and flower spikes 4–10 cm (1.6–3.9 in) high. The habitat is swamps and swamp borders, or rarely sandstone ridges. Variety minor is a taller shrub 1–3.5 m (3.3–11.5 ft) high with leaves up to 16 cm (6.3 in) long and spikes 6 to 14 cm (2.4 to 5.5 in) high. It is an understory plant in sclerophyll forests, associated with Eucalyptus signata and Banksia spinulosa var. collina. Both subspecies occur throughout the range. However, George rejected the varieties, stating that the variability was continuous. ### Hybridization Banksia robur and B. oblongifolia hybrids have been recorded at several locations along the eastern coastline. Field workers for The Banksia Atlas recorded 20 populations between Wollongong and Pialba in central Queensland. Locales include Calga north of Sydney, Ku-ring-gai Chase National Park, and Cordeaux Dam near Wollongong. A study of an area of hybridization between the two near Darkes Forest on the Woronora Plateau south of Sydney revealed extensive hybridization in mixed-species stands but almost none in pure stands of either species. Genetic analysis showed generations of crossing and complex ancestry. Morphology generally correlated with genetic profile, but occasionally plants that resembled one parent had some degree of genetic hybridization. Furthermore, there were a few plants with morphology suggestive of a third species, B. paludosa, in their parentage, which requires further investigation. A possible hybrid between B. oblongifolia and B. integrifolia was recorded near Caloundra by Banksia Atlas volunteers. ## Distribution and habitat Banksia oblongifolia occurs along the eastern coast of Australia from Wollongong, New South Wales, in the south to Rockhampton, Queensland, in the north. There are isolated populations offshore on Fraser Island, and inland at Blackdown Tableland National Park and Crows Nest in Queensland, and also inland incursions at the base of the Glasshouse Mountains in southern Queensland, at Grafton in northern New South Wales, and Bilpin and Lawson in the Blue Mountains west of Sydney. B. oblongifolia grows in a range of habitats—in damp areas with poor drainage, along the edges of swamps and flats, as well as in wallum shrubland and on coastal plateaux. It is also found in heath, and in open forest or woodland, where it grows on ridges or slopes. Soils are predominantly sandy or sandstone-based, though granite-based and clay-loams are sometimes present. Associated species in the Sydney region include heathland species such as heath banksia (Banksia ericifolia), coral heath (Epacris microphylla) and mountain devil (Lambertia formosa), and tick bush (Kunzea ambigua) and prickly-leaved paperbark (Melaleuca nodosa) in taller scrub, and under trees such as scribbly gum (Eucalyptus sclerophylla) and narrow-leaved apple (Angophora bakeri) in woodland. The Agnes Banks Woodland in western Sydney has been recognised by the New South Wales Government as an Endangered Ecological Community. Here B. oblongifolia is an understory plant in low open woodland, with scribbly gum, narrow-leaved apple and old man banksia (B. serrata) as canopy trees, and wallum banksia (B. 
aemula), variable smoke-bush (Conospermum taxifolium), wedding bush (Ricinocarpos pinifolius), showy parrot-pea (Dillwynia sericea) and nodding geebung (Persoonia nutans) as other understory species. ## Biology Banksia oblongifolia plants can live for more than 60 years. They respond to bushfire by resprouting from buds located on the large woody lignotuber. Larger lignotubers have the greatest number of buds, although buds are more densely spaced on smaller lignotubers. A 1988 field study in Ku-ring-gai Chase National Park found that shoots grow longer after fire, particularly one within the previous four years, and that new buds grow within six months after a fire. These shoots are able to grow, flower and set seed two to three years after a fire. The woody infructescences also release seeds as their follicles are opened with heat, although a proportion do open spontaneously at other times. One field study in Ku-ring-gai Chase National Park found 10% opened in the absence of bushfire, and that seeds germinated, and young plants do grow. Older plants are serotinous, that is, they store large numbers of seed in an aerial seed bank in their canopy that are released after fire. Being relatively heavy, the seeds do not disperse far from the parent plant. Bird species that have been observed foraging and feeding at the flowers include the red wattlebird (Anthochaera carunculata), Lewin's honeyeater (Meliphaga lewinii), brown honeyeater (Lichmera indistincta), tawny-crowned honeyeater (Gliciphila melanops), yellow-faced honeyeater (Lichenostomus chrysops), white-plumed honeyeater (L. penicillatus), white-cheeked honeyeater (Phylidonyris niger), New Holland honeyeater (P. novaehollandiae), noisy friarbird (Philemon corniculatus), noisy miner (Manorina melanocephala) and eastern spinebill (Acanthorhynchus tenuirostris). Insects recorded visiting flower spikes include the European honey bee and ants. The swamp wallaby (Wallabia bicolor) eats new shoots that grow from lignotubers after bushfire. One field study found 30% of seeds were eaten by insects between bushfires. Insects recovered from inflorescences include the banksia boring moth (Arotrophora arcuatalis), younger instars of which eat flower and bract parts before tunneling into the woody axis of the spike as they get older and boring into follicles and eating seeds. Other seed predators include unidentified species of moth of the genera Cryptophasa and Xylorycta, as well as Scieropepla rimata, Chalarotona intabescens and Chalarotona melipnoa and an unidentified weevil species. The fungal species Asterina systema-solare, Episphaerella banksiae and Lincostromea banksiae have been recorded on the leaves. Like most other proteaceae, B. oblongifolia has proteoid roots—roots with dense clusters of short lateral rootlets that form a mat in the soil just below the leaf litter. These enhance solubilisation of nutrients, allowing nutrient uptake in low-nutrient soils such as the phosphorus-deficient native soils of Australia. A study of coastal heaths on Pleistocene sand dunes around the Myall Lakes found B. oblongifolia on slopes (wet heath) and B. aemula grew on ridges (dry heath), and the two species did not overlap. Manipulation of seedlings in the same study area showed that B. oblongifolia can grow longer roots seeking water than other wet heath species and that seedlings can establish in dry heath, but it is as yet unclear why the species does not grow in dry heath as well as wet heath. 
Unlike similar situations with Banksia species in Western Australia, the two species did not appear to impact negatively on each other. ## Cultivation Conrad Loddiges and his sons wrote of Banksia oblongifolia in volume 3 of their work The Botanical Cabinet in 1818, reporting that it had been brought into cultivation in 1792, though it had initially and incorrectly been called Banksia dentata. It flowered in November in the United Kingdom, and was grown in a greenhouse over winter. Not commonly cultivated, it adapts readily to garden conditions and tolerates most soils in part-shade or full sun. The colours of the inflorescences in bud and the timing of flowering into winter give it horticultural value, as does its reddish new growth. Larger plants have taller flower spikes. It is readily propagated from seed, with young plants taking five to seven years to flower. Pruning can improve the shrub's appearance, and it is a potential bonsai subject.
14,882,551
2000 Sugar Bowl
1,171,293,299
null
[ "1999–2000 NCAA football bowl games", "2000 in sports in Louisiana", "BCS National Championship Game", "Florida State Seminoles football bowl games", "January 2000 sports events in the United States", "Sugar Bowl", "Virginia Tech Hokies football bowl games" ]
The 2000 Sugar Bowl was the designated Bowl Championship Series (BCS) National Championship Game for the 1999 NCAA Division I-A football season and was played on January 4, 2000, at the Louisiana Superdome in New Orleans. The Florida State Seminoles, representing the Atlantic Coast Conference, defeated the Virginia Tech Hokies, representing the Big East Conference, by a score of 46–29. With the win, Florida State clinched the 1999 BCS national championship, the second national championship in the team's history. An estimated total of 79,280 people attended the game in person, while approximately 18.4 million US viewers watched the game on ABC television. The resulting 17.5 television rating was the third-largest ever recorded for a BCS college football game. Tickets were in high demand for the game, with tens of thousands of fans from both teams attending, many using scalped tickets to gain entry. The game kicked off at 8 p.m. EST, and Virginia Tech received the opening kickoff. Though Tech advanced down the field, Florida State scored first and took advantage of a blocked punt for a touchdown, giving the Seminoles a 14–0 lead in the first quarter. Tech answered with a touchdown drive of its own before the end of the quarter, but Florida State scored two quick touchdowns to begin the second quarter. Virginia Tech scored a touchdown before halftime, but halfway through the game, Florida State held a 28–14 lead. In the third quarter, Virginia Tech's offense gave the Hokies a lead with a field goal and two touchdowns. Tech failed to convert two two-point conversions, but held a 29–28 lead at the end of the third quarter. Florida State answered in the fourth quarter, however, taking a 36–29 lead with a touchdown and successful two-point conversion early in the quarter. From this point, the Seminoles did not relinquish the lead, extending it to 46–29 with another touchdown and a field goal. For his performance in the game, Florida State wide receiver Peter Warrick was named the game's most valuable player. Although Tech lost the game, several of its players won postseason awards—most notably Michael Vick, who earned an ESPY for his performance during the Sugar Bowl and the regular season. Several players from each team entered the National Football League after graduation, being selected either in the 2000 NFL Draft or in later editions of that selection process. ## Team selection By contract, the top two teams in the BCS Poll at the conclusion of the regular season were invited to the BCS national championship game. In 2000, the BCS Poll was a combination of four different systems: media and coaches' polls (Associated Press college football poll and USA Today Coaches' Poll), team records, a collection of eight different computer ranking systems, and a strength-of-schedule component based on opponent records. Under the BCS, the site of the national championship game rotated every year. In 2000, there were four BCS bowl games: the Rose Bowl, the Sugar Bowl, the Orange Bowl, and the Fiesta Bowl; each season one of them hosted the national championship game, while the other three served as bowl games for lower-ranked teams. Later, in 2007, a separate BCS National Championship Game was created, adding a fifth BCS bowl. In 2000, the Sugar Bowl was scheduled to host the national championship game. 
### Florida State The Florida State Seminoles ended the 1998 college football season with a 23–16 loss to the Tennessee Volunteers in the 1999 Fiesta Bowl, which was the national championship game that year. The loss was only the second of the season for Florida State, which had entered the game ranked No. 2 and favored against the No. 1 ranked Volunteers. Florida State players and coaches entered the off-season hoping to improve upon their runner-up finish in the national championship game the year before, and were voted the No. 1 team in the country in the annual Associated Press preseason poll. Florida State lived up to its No. 1 ranking in its first game of the 1999 college football season, routing unranked Louisiana Tech, 41–7. The following week, in their ACC opener, the Seminoles had a closer contest against Georgia Tech, but still earned a 41–35 victory. As the weeks went by, the wins continued to accumulate. FSU defeated North Carolina State, 42–11; North Carolina, 42–10; and Duke, 51–23. In the seventh week of the college football season, the Seminoles faced off against a traditional rival: the Miami Hurricanes. Heading into the game, the Seminoles were without star wide receiver and potential Heisman Trophy candidate Peter Warrick, who was suspended from the team after being arrested for participating in a scheme to underpay for clothes at a Tallahassee, Florida, clothing store. Despite the loss of Warrick, Florida State held off the Hurricanes for a 31–21 victory after the teams were tied 21–21 at halftime. The week after the Miami game, the Seminoles had an even closer call against the Clemson Tigers, their closest, in fact, of the entire season. Despite the return of Peter Warrick, who was cleared of charges in a Florida courtroom, Florida State fell behind the Tigers in the first half. Trailing 14–3 at halftime in Clemson, South Carolina, Florida State cut the gap to 14–6 with a field goal midway through the third quarter, then tied the game at the end of the period with a touchdown and two-point conversion. A field goal late in the fourth quarter gave the Seminoles a 17–14 lead, and they sealed the victory when a Clemson attempt to even the score with a field goal of its own fell short. The victory was FSU head coach Bobby Bowden's 300th win and came against his son, Tommy Bowden, coach of the Tigers. Florida State earned easy wins with a 35–10 victory over Virginia and a 49–10 win over Maryland before facing the rival Florida Gators in the final game of the Seminoles' regular season. Florida State led throughout the game, but had to fend off a last-minute Florida drive in order to clinch a 30–23 win and just the third perfect regular season in Florida State history. The 1999 campaign was later termed the "Wire to Wire" season because the Seminoles held the No. 1 ranking from the preseason poll to the end of the year. ### Virginia Tech Like Florida State, the Virginia Tech Hokies began the 1999 college football season with raised expectations. In 1998, the Hokies had gone 9–3 during the regular season and had posted a 5–2 record against fellow Big East Conference teams. The Hokies concluded that 1998 season, which had been expected to be a rebuilding year, with a 38–7 victory over the Alabama Crimson Tide in the 1998 Music City Bowl. With the addition of redshirt freshman quarterback Michael Vick to a team that had allowed an average of just 12.9 points per game on defense, there was the possibility that Tech could improve upon its previous season's performance. 
Sports Illustrated, for example, predicted that the Hokies might challenge Miami for the Big East football championship, and the preseason Coaches' Poll ranked the Hokies No. 14 prior to the first game of the season. In their season opener, the Hokies lived up to expectations, shutting out James Madison University, 47–0, the first time Tech had shut out an opponent in a season opener since 1953. The game was marred, however, by a leg injury that forced Michael Vick to leave early. The following week, against the University of Alabama at Birmingham, Vick did not play. Despite his absence, the Hokies still managed a 31–10 win. This was followed by a 31–11 Thursday-night victory over Clemson in Virginia Tech's first game against a Division I-A opponent during the season. Following the win over Clemson, Tech faced traditional rival Virginia in the annual battle for the Commonwealth Cup. Despite the rivalry and the fact that Virginia was ranked the No. 24 team in the country, the Cavaliers put up even less of a struggle than Clemson. Virginia Tech won, 31–7. Now No. 5 in the country, Tech began to distance itself from other highly ranked teams with consecutive wins over Rutgers and Syracuse. The 62–0 shutout of No. 16 Syracuse was the largest victory ever recorded against a team ranked in the AP Poll. By this time, the Hokies were being described in media reports as a national championship contender. Following a 30–17 victory at Pittsburgh, Virginia Tech traveled to Morgantown, West Virginia, to face the West Virginia Mountaineers in the annual battle for the Black Diamond Trophy. There, Virginia Tech eked out a 22–20 victory with a last-second field goal from placekicker Shayne Graham. It was Tech's closest victory of the season and moved the Hokies to the No. 2 ranking in the country. Following the win over West Virginia, Tech defeated Miami, 43–10, and Temple, 62–7, to clinch the Big East championship. In the final game of the regular season, the Hokies beat Boston College, 38–14, cementing the third unbeaten season in Virginia Tech history and the Hokies' first since 1954. ## Pregame buildup In the month prior to the Sugar Bowl, media attention focused on Virginia Tech's sudden rise to national prominence and Florida State's perennial appearance in the national championship game. The Seminoles had the most top-5 finishes and the most national championship game appearances of any team in the 1990s, including a national championship victory in 1993. Many media stories framed the game as a David and Goliath showdown, casting the Seminoles as the heavy favorite and the Hokies as the underdog. Oddsmakers favored Florida State to win the game by 5.5 points. Tens of thousands of fans from both schools traveled to the game, often purchasing ticket and travel packages for thousands of dollars, and the limited supply of tickets was in high demand. ### Florida State offense The Seminoles threw for no fewer than 229 passing yards in every game during the regular season and averaged 12.7 points per game more than their opponents. On the ground, the Seminoles averaged 122.8 rushing yards per game. Leading the Florida State offense was quarterback Chris Weinke, a former baseball player who, at 27 years old, was by far the oldest player on the Seminoles' team. 
After suffering a neck injury in the 1998 college football season, Weinke recovered to complete 232 of 377 pass attempts for 3,103 yards, 25 touchdowns, and 14 interceptions in 1999. Weinke's favorite target was wide receiver Peter Warrick, who led all Seminole receivers with 71 receptions and 931 yards in just nine games during the regular season. Warrick topped 100 receiving yards in a game five times. His season was shortened by a two-game suspension following his arrest for underpaying for clothes, but he still was named an All-America selection at wide receiver, signifying his status as one of the best players in the country at the position. Florida State placekicker Sebastian Janikowski, who was born in Poland, also was a key component of the Seminoles' scoring offense. In his career at Florida State prior to the Sugar Bowl, Janikowski made 65 of 83 field goal attempts, including 33 of his previous 38 kicks of less than 50 yards. Janikowski also handled kickoffs, kicking the ball deep enough that 57 of his 83 kickoffs resulted in touchbacks. Several scouts for professional teams considered Janikowski a potential early selection in the 2000 NFL Draft. ### Virginia Tech offense During the regular season, Virginia Tech outscored its opponents by an average of 31 points per game. Tech averaged 254 yards rushing per game, the eighth-highest average in the nation. Important to that success was running back Shyrone Stith, who had 1,119 rushing yards during the regular season. Even more important to the Hokies' success, however, was quarterback Michael Vick. Vick was recognized by multiple nationwide publications for his performance during the regular season. His passer rating was the highest of any quarterback in the country, and he completed 59.2 percent of his 152 passes for 1,840 yards, 12 touchdowns, and five interceptions. In addition, he rushed for 585 yards and eight touchdowns on 108 carries. Vick was named Big East Offensive Player of the Year and was the runner-up in voting for the Associated Press Player of the Year. Vick's average of 242 yards of total offense per game was the most in the country, and his 184 passing yards per game were the second-most. In addition, Vick finished third in the voting for the Heisman Trophy, traditionally given to the best college football player in the country. He was featured in multiple national publications, including on the cover of Sports Illustrated twice. A few days before the Sugar Bowl, Tech wide receiver Ricky Hall broke a bone in his foot during practice and was considered unlikely to play. Hall was Tech's second-leading receiver, having caught 25 passes for 398 yards and three touchdowns. In addition, Hall was the Hokies' starting punt returner and had returned 40 punts for 510 yards and one touchdown, setting a school record for punt return yardage. Tech placekicker Shayne Graham won Big East Special Teams Player of the Year honors after scoring 107 points during the regular season. That mark set a Big East record, and Graham's 372 career points during his four years with the Hokies were an NCAA record at the time. Graham's award completed a Virginia Tech sweep of all five of the Big East's player and coach of the year awards. ### Florida State defense The Florida State defense was considered key to reining in Tech quarterback Vick. The Seminoles allowed fewer than 100 rushing yards per game on average and intercepted 22 passes during the regular season. 
The Seminoles were ranked 15th nationally in pass defense at the end of the regular season but had allowed increasing amounts of pass yardage in the latter games of the season. Despite that fact, the Florida State defense's main concern was Michael Vick's ability to run the football. Said Florida State defensive coordinator Mickey Andrews: "A guy like that usually gives us problems, considering the type of (4–3 gap) defense we run. When a quarterback gets out of the pocket, that could hurt us for big yardage." The Seminole defense was led by nose guard Corey Simon, who accumulated 48 solo tackles, four sacks, and one interception. For his accomplishments during the regular season, Simon earned consensus first-team All-America honors. Despite his accomplishments, Simon was not the Seminoles' leading tackler. That honor went to linebacker Tommy Polley, who accumulated 67 tackles during the season. Fellow linebacker Brian Allen contributed five quarterback sacks, the most in that statistical category for Florida State. ### Virginia Tech defense In the important category of scoring defense, the Hokies were the top-ranked defense in the country, allowing only 10.5 points per game. The team was ranked No. 3 in the country in both total defense and rushing defense. On average, Tech allowed just 247.3 total yards and 75.9 rushing yards per game. Tech's pass defense was No. 7 in the country, allowing an average of 171.4 passing yards per game. The Hokies permitted no more than 226 passing yards to any team during the regular season, and no opposing player earned 100 receiving yards. Tech defenders also accumulated 58 sacks during the season. Virginia Tech defensive end Corey Moore was the top performer on the Hokie defense. Moore accumulated 55 tackles and 17 sacks during the regular season, and was named Big East Defensive Player of the Year and to the Associated Press All-America team. In the first week of December, Moore was awarded the Bronko Nagurski Trophy, given to the best defensive college football player in the country. Tech's other defensive end was John Engelberger, who earned seven sacks, six other tackles for loss and 16 quarterback hurries. Engelberger was projected by pro scouts to be the first Tech player selected in the 2000 NFL Draft. ## Game summary The 2000 Sugar Bowl kicked off at 8 p.m. EST on January 4, 2000, at the Louisiana Superdome, in New Orleans. A crowd of 79,280 people attended the game in person, and an estimated 18.4 million people watched the game's television broadcast on ABC, earning the broadcast a television rating of 17.5, the third-highest rating ever recorded for a BCS game. ABC estimates were higher, speculating that at least 54 million people watched at least a portion of the broadcast. Brent Musburger, Gary Danielson, Lynn Swann, and Jack Arute were the television commentators for the event, and Ron Franklin, Mike Gottfried, and Adrian Karsten provided commentary for the ESPN Radio broadcast of the game. In exchange for their performance at the game, Virginia Tech and Florida State each received more than \$4 million. The traditional pregame singing of the national anthem was performed by the Zion Harmonizers, a New Orleans gospel quartet. Steve Shaw was the referee. Actor John Goodman performed the ceremonial pre-game coin toss to determine first possession of the ball. Florida State won the coin toss and elected to kick off to Virginia Tech to begin the game. 
### First quarter Virginia Tech received the game's opening kickoff in their end zone for a touchback, and the Tech offense began at its 20-yard line. On the game's first play, Tech committed a five-yard false start penalty. Running back Shyrone Stith was stopped for a loss on the first non-penalty play of the game, but Tech made up both that loss and the penalty when quarterback Michael Vick scrambled for 25 yards and a first down. Vick then ran for another nine yards, pushing the line of scrimmage near midfield. Tech executed an option run to Stith, who ran inside the Florida State 30-yard line. Tech picked up a few yards with a run up the middle, then Vick completed a pass to Davis, giving the Hokies a first down at the Florida State 13-yard line. Stith picked up seven yards on a rush to the six-yard line, but the Seminole defense stiffened, and Tech was unable to pick up the remaining three yards needed for a first down. Facing a fourth down and needing less than a yard to pick up another first down inside the Florida State three-yard line, Tech head coach Frank Beamer kept his offense on the field to attempt to gain the first down rather than kick a field goal. On the attempt, however, Vick fumbled the ball forward into the end zone, where Florida State recovered it for a touchback. Virginia Tech was thus denied the first score of the game, and Florida State's offense entered the game for the first time. Starting at their 20-yard line after the touchback, Florida State's first play was a five-yard rush by running back Travis Minor. Quarterback Chris Weinke then completed a three-yard pass to wide receiver Peter Warrick, who was stopped short of the first down. After the next play failed to gain positive yardage, the Seminoles were forced to punt. Virginia Tech's offense began their second series after a short punt return to the 31-yard line. After an incomplete pass from Vick, Stith picked up a Tech first down with two running plays. From their 43-yard line, Tech executed an end-around for a first down. Florida State also committed a five-yard facemask penalty that pushed Tech to the Seminoles' 40-yard line. Tech was stopped for losses on subsequent plays and committed a five-yard false start penalty, but Vick completed an 18-yard pass to Davis for a first down, making up the losses. Tech was unable to make good the losses accumulated on the next three plays, when Vick was sacked after throwing two incomplete passes. Tech punted, the ball rolled into the end zone, and Florida State's offense began again at its 20-yard line. Weinke threw two incomplete passes before connecting on a first-down throw to wide receiver Ron Dugans. On the next play, Weinke connected on a 64-yard throw to Warrick for a Florida State touchdown and the first points of the game. The extra point attempt was successful, and Florida State took a 7–0 lead with 3:22 remaining in the first quarter. Following Florida State's post-touchdown kickoff, Virginia Tech's offense began its third possession of the game at the Tech 24-yard line after a short kick return. Running back Andre Kendrick ran for a short gain, but on the next play Vick was called for an intentional grounding penalty while attempting to avoid a sack. The Hokies were unable to make up the yardage lost by the penalty and punted after failing to gain a first down. Owing to the penalty, Tech punter Jimmy Kibble was forced to kick from his own end zone. Florida State was able to break through the Tech offensive line during the punt and blocked the kick. 
The ball was picked up by Florida State's Jeff Chaney, who dashed into the end zone for Florida State's second touchdown of the game. The score and extra point gave Florida State a 14–0 lead with 2:14 remaining in the first quarter. Florida State's kickoff was downed for a touchback, and Tech began at its 20-yard line. On the first play of the possession, Florida State committed a 15-yard pass interference penalty that gave Tech a first down at its 35-yard line. Tech was further aided by two five-yard penalties against Florida State that gave the Hokies another first down, and Vick completed a short pass across midfield. On the first play in Florida State territory, Vick completed a 49-yard throw to wide receiver André Davis for Tech's first touchdown of the game. The extra point attempt was good, and with 30 seconds remaining in the quarter, Tech narrowed Florida State's lead to 14–7. Following Virginia Tech's kickoff and a touchback, Florida State's offense started work at its 20-yard line. Tech committed a five-yard penalty, and as the final seconds of the quarter ticked off, Florida State ran up the middle for five yards and a first down. At the end of the first quarter, Florida State led 14–7. ### Second quarter The second quarter began with Florida State in possession of the ball, facing a first down at its 30-yard line. After picking up short yardage on two consecutive plays, Weinke completed a 63-yard pass to Dugans, who ran down the field for a touchdown. The extra point was successful, and with 13:45 remaining in the second quarter, Florida State extended its lead to 21–7. Following the Florida State kickoff, Virginia Tech returned the ball to the 33-yard line, where Tech's offense began operations. The Hokies committed an offensive pass interference penalty, were unable to regain the lost yardage, and were forced to punt. The Seminoles' Peter Warrick was assigned to return the punt, and he fielded the ball at the Florida State 41-yard line. Thanks to several key blocks from other Florida State players, Warrick was able to run 59 unimpeded yards to the end zone for a touchdown. With 11:34 still remaining before halftime, the Seminoles extended their lead to 28–7. On the ensuing kickoff, Virginia Tech attempted to answer Florida State's punt return touchdown with a return score of its own. Kendrick fielded the ball at the Virginia Tech goal line and returned it 63 yards, all the way to the Florida State 37-yard line, where the Hokie offense began work. Despite the good field position, Tech was unable to gain a first down. Tech kicker Shayne Graham was sent into the game, seemingly to attempt a 51-yard field goal. Instead of kicking the ball, Graham attempted to run for a first down but fumbled short of the mark, and Florida State took over on offense with 9:43 remaining in the first half. On the first play of the drive, the Seminoles attempted a flea flicker, and Warrick caught the pass at the Virginia Tech 33-yard line for a 33-yard gain. On the next play, Weinke was sacked for the first time by the Tech defense; a second sack immediately followed, pushing Florida State back onto its own side of the field. On third down, Weinke attempted to scramble for the needed yardage but was stopped short. Florida State's punt was downed at the Virginia Tech one-yard line, where the Tech offense began work. 
Florida State's defense prevented the Hokies from gaining a first down, and Tech again had to punt from its end zone. Following the kick and a short return, Florida State began a drive at the Tech 34-yard line, in excellent field position, but on the first play of the drive the Seminoles were stopped for a loss. State was able to pick up a short gain on the second play, but on the third, Weinke was sacked for the third time in the game. After the Seminole punt and a touchback, Tech's offense started at its 20-yard line. The Hokies picked up a first down with an option run to Stith, then Vick ran for a long gain and another first down at the Florida State 20-yard line. Stith picked up seven yards on a rush up the middle of the field, then Vick completed a first-down pass to Derek Carter inside the Seminole 10-yard line. Kendrick advanced the ball to the Seminole three-yard line, and Vick ran the remaining yardage for a touchdown. Following the extra point, Tech cut Florida State's lead to 28–14 with 37 seconds remaining in the first half. After the Virginia Tech kickoff and a Florida State return to its 17-yard line, Florida State began running out the clock to bring the half to an end. At halftime, Florida State held a 28–14 lead over Virginia Tech. ### Halftime At halftime, several organizations and groups performed under the overarching theme of a "Gospel Jubilee." The halftime show was organized by Douglas K. Green and Bowl Games of America, a company founded to provide similar services to bowl games across the United States. Multiple high school bands and dance teams from Kansas to Florida entertained the crowd. ### Third quarter Because Virginia Tech received the ball to begin the game, Florida State received the ball to begin the second half. The Seminoles returned the kickoff to their 22-yard line, and on the first play of the second half attempted a lateral pass. Virginia Tech defender Corey Moore knocked the ball down and out of bounds, causing a loss of 16 yards. Despite the loss, Weinke made up the needed yardage with a 28-yard pass to Minor. Minor picked up short yardage on a run up the middle, then Weinke passed for another first down, advancing the ball to the State 45-yard line. On first down, Weinke fumbled, but managed to recover the ball after a five-yard loss. This time, State was unable to regain the lost yardage and was forced to punt. Virginia Tech returned the kick to its 33-yard line, where the Tech offense began work. Vick passed for six yards, then ran an option for 12 yards and a first down. Now on State's side of the field, however, the Tech offense was unable to gain another first down and punted back to Florida State, which returned the kick to its 21-yard line. State was stopped short on consecutive plays, committed a five-yard false start penalty, then was stopped for no gain on third down. After going three and out, State punted back to the Hokies, who returned the ball to the Seminoles' 41-yard line. On the first play of the drive, Vick completed a 26-yard pass to the Tech fullback, Hawkins. After three rushes failed to pick up the first down at the Florida State five-yard line, Tech coach Frank Beamer sent in Graham to kick a 23-yard field goal. The kick was successful, and with 7:54 remaining in the quarter, Tech cut Florida State's lead to 28–17. Virginia Tech's ensuing kickoff was downed for a touchback, and Florida State's offense started a drive at its 20-yard line. 
Weinke completed one pass, but two others fell incomplete, and Florida State punted after again going three and out. The Hokies returned the State kick to the Seminoles' 36-yard line with a 45-yard return. Vick threw an incomplete pass, ran for seven yards, then handed it off to Kendrick, who broke through the Florida State defense and ran ahead 29 yards for a touchdown. Rather than attempt an extra point, Beamer ordered a two-point conversion in an attempt to cut Florida State's lead to just three points. The play, which was Tech's first two-point attempt that season, failed. Even without an extra point, the touchdown still cut Florida State's lead to 28–23. After the post-score kickoff and return, Florida State began at its 22-yard line. Weinke completed a first-down pass to Warrick, but Warrick committed a 15-yard personal foul penalty in the process. On the next play, Weinke attempted a long pass downfield, but Tech defender Anthony Midget intercepted the ball at the Tech 41-yard line. Trailing by five, Tech's offense began a drive to potentially further cut Florida State's lead or put the Hokies in the lead themselves. After slipping on the field and taking a loss, Vick completed a 20-yard pass to Hawkins, who picked up a first down and pushed Tech to the Florida State 39-yard line. After a short rush by Kendrick, Vick scrambled to the State 21-yard line for another first down. On the next play, Vick was sacked for a seven-yard loss, but recovered the lost ground by running for 22 yards on the next play. With a first down at the Seminoles' seven-yard line, Vick handed the ball to Kendrick, who ran seven yards straight ahead for a Tech touchdown, giving the Hokies the lead for the first time in the game. Again, Beamer ordered a two-point conversion attempt, but again, Florida State stopped the Hokies short. Despite that failure to pick up the two-point conversion, Tech took a 29–28 lead with 2:13 remaining in the quarter. The Seminoles returned Tech's post-score kick to their 15-yard line, where Florida State's offense began work again, hoping to regain the lead for State. Weinke completed a seven-yard pass to Warrick, then was sacked by the Virginia Tech defense. Weinke overcame the loss on the next play with a 19-yard first-down pass. State continued to advance the ball with short passes, as Weinke completed a five-yard throw. Chaney gained three yards on a rush to the right as the final seconds of the third quarter ticked off the clock, setting up an important third-down play. With one quarter remaining, Virginia Tech led Florida State, 29–28. ### Fourth quarter Florida State began the fourth quarter in possession of the ball and facing a third down, needing three yards for a first down. Weinke completed a pass for just short of the needed three yards. Instead of punting the ball away, Florida State head coach Bobby Bowden ordered the team to attempt to convert the first down. He brought backup quarterback Marcus Outzen into the game as a misdirection move, and instead of running a quarterback sneak as anticipated, Outzen tossed the ball to Minor, who ran for 16 yards and a first down. During the play, Virginia Tech committed a 15-yard personal foul penalty that advanced the ball further and gave Florida State a first down at the Virginia Tech 23-yard line. Weinke returned to the game and threw a 10-yard pass to Chaney for another first down. Weinke threw an incomplete pass, then Tech stopped a rush up the middle for no gain. 
On third down, Weinke connected on a touchdown pass to Dugans, returning the lead to Florida State. Like Virginia Tech before them, the Seminoles elected to attempt a two-point conversion; unlike the Hokies, they converted on a pass to Warrick, and the scores gave Florida State a 36–29 lead with 12:59 remaining. Tech received Florida State's kickoff at its goal line and returned the ball to the 11-yard line, where Tech's offense took over. Kendrick ran for 12 yards and a first down, but then Vick fumbled on a rush to the left. Florida State recovered, and the Seminoles' offense took over at the Virginia Tech 35-yard line. On the first play after the fumble, Chaney broke free for a long run that gave State a first down inside the Virginia Tech 10-yard line. The Seminoles were pushed backward on two consecutive plays and committed a chop block before Bowden was forced to send in Janikowski to kick a 32-yard field goal. The kick gave Florida State a 39–29 lead with 10:26 remaining in the game. Janikowski's post-score kickoff was downed for a touchback, and Vick and the Tech offense began at the Tech 20-yard line. On the Hokies' first play, Davis ran for 16 yards and a first down on an end-around similar to the one he ran in the first quarter. Despite that success, the Hokies were unable to gain another first down. Lining up to punt, Tech instead ran a trick play in which the punter attempted to rush for a first down. He was stopped short of the needed mark, and Florida State's offense returned to the field, beginning at the Tech 43-yard line. On the first play after taking over, Weinke completed a 43-yard pass to Warrick for a touchdown. The score and extra point gave Florida State a 46–29 lead with 7:42 remaining in the game. With under eight minutes remaining and trailing by three scores, Virginia Tech faced a nearly insurmountable deficit. The Hokies fielded the kickoff for a touchback, and the Tech offense began at its 20-yard line. On the Hokies' first and second plays of the drive, Vick was sacked for losses. The third play was an incomplete pass, and the Hokies were forced to punt. After fielding the kick at its 38-yard line, Florida State tried to run out the clock, but two rushes and an incomplete pass failed to produce a first down, and the Seminoles punted. The ball rolled into the end zone, and Tech's offense began again at its 20-yard line. Vick threw for short yardage, then Kendrick ran for a first down at the Tech 37-yard line. Vick completed a 23-yard first-down pass to Emmet Johnson, and the Hokies entered Florida State territory with the clock ticking steadily down. On the first play in Seminole territory, Vick completed another 23-yard pass, this time to Davis, who picked up a first down at the Florida State 23-yard line. Thanks to a holding penalty against the Seminoles, Tech was granted a first down at the State eight-yard line. Vick threw an incomplete pass, ran for three yards, and then threw another pass to a player who was stopped short of the goal line. Facing fourth down and needing just two yards for a touchdown, Tech attempted another pass, but Vick was sacked, and the Hokies turned the ball over on downs with 1:12 remaining. Florida State ran out the remaining time to secure the 46–29 victory. 
## Scoring summary ## Statistical summary In recognition of his performance during the game, Florida State wide receiver Peter Warrick was named the game's most valuable player. Warrick caught six passes for 163 yards and two touchdowns, leading all receivers in yardage and scores. Warrick also had a 59-yard punt return for a touchdown and a two-point conversion, accounting for 20 of the Seminoles' 46 points. The 20 points scored by Warrick were a Sugar Bowl record for most points scored by an individual player. Despite Warrick's individual performance, Virginia Tech was more successful in a team effort, compiling 503 total yards compared to Florida State's 359 yards. Virginia Tech quarterback Michael Vick completed 15 of 29 passes for 225 passing yards and one passing touchdown. Vick also ran the ball 23 times for 97 yards in his performance as the game's leading rusher. Florida State quarterback Chris Weinke was the game's best passer, completing 20 of his 34 pass attempts for 329 yards, four touchdowns, and one interception. Weinke's favorite target was game MVP Peter Warrick, but several other Seminoles also benefited from Weinke's passing efficiency. Ron Dugans caught five passes for 99 yards and two touchdowns, Minnis caught two passes for 25 yards, and Minor caught two for 23 yards. For Virginia Tech, Davis caught seven passes for 108 yards and a touchdown, Hawkins caught two passes for 49 yards, and Kendrick caught two passes for 27 yards. In terms of rushing offense, the two teams differed wildly. Virginia Tech, led by Vick, ran for 278 rushing yards. Florida State, meanwhile, ran for just 30 yards. The Seminoles were led on the ground by Chaney, who carried the ball four times for 43 yards, and Minor, who carried the ball nine times for 35 yards. Much of these two players' rushing total was negated by Chris Weinke, who lost 41 yards on seven carries. Virginia Tech, bolstered by Vick's 97 rushing yards, also saw André Kendrick accumulate 69 yards and two touchdowns with 12 carries and Shyrone Stith pick up 68 yards on 11 carries. ## Postgame effects Florida State's victory earned it the 1999 BCS national championship and brought the Seminoles' season to an end with an undefeated 12–0 record. By beginning the season at No. 1 and ending it in the same position, Florida State became the first college football team to stay ranked No. 1 for every week of the season after being ranked No. 1 in the preseason poll. Virginia Tech's loss brought it to a final record of 11–1, but the Hokies still completed their first 11-win season in school history. The 75 total points scored in the 2000 Sugar Bowl were a Sugar Bowl record at that point in the game's history. ### Coaching changes Both teams made changes to their respective coaching staffs in the weeks that followed the Sugar Bowl. Chuck Amato resigned from his position as linebackers coach for Florida State to take the head coaching position at North Carolina State. His role as linebackers coach was filled by Joe Kines, whom Bobby Bowden hired from the University of Georgia. Amato's role as assistant head coach was filled by Jim Gladden, who had been a coach at Florida State for more than 25 years at the time he was named the assistant head coach. At Virginia Tech, head coach Frank Beamer also made some changes to his coaching staff, promoting several position coaches to higher positions in the Tech football hierarchy. 
### Postseason awards In recognition of their achievements during the regular season and during the 2000 Sugar Bowl, multiple players and coaches from each team earned awards and recognition after the conclusion of the game. Tech quarterback Michael Vick, despite leading the losing team in the Sugar Bowl, won an ESPY for college football player of the year on February 14, more than a month after the game. In addition, Virginia Tech head coach Frank Beamer won multiple coach of the year awards, most notably the Bobby Dodd Coach of the Year Award, which was presented to Beamer on March 6. One of Beamer's assistant coaches, Bud Foster, was named the top defensive coordinator in Division I-A football by American Football Coach Magazine in its annual award. Florida State quarterback Chris Weinke won the 2000 Heisman Trophy after the conclusion of the 2000 college football season. ### 2000 NFL Draft Several players from each team were picked by professional teams to play in the National Football League during the 2000 NFL Draft, held on April 15 and 16 in New York City. Florida State had three players selected in the first round of the draft and seven players taken overall. Peter Warrick was the first Florida State player picked, selected fourth overall by the Cincinnati Bengals. Defensive tackle Corey Simon was selected two picks later, sixth overall, and placekicker Sebastian Janikowski was taken 17th. Later rounds saw Florida State's Ron Dugans (66th), Laveranues Coles (78th), Jerry Johnson (101st), and Mario Edwards (180th) selected. Virginia Tech had no players selected in the first round of the draft but saw five players taken from the second round onward. Defensive end John Engelberger was the first Hokie taken in the 2000 draft, picked with the 35th overall selection. He was followed by cornerback Ike Charlton, who was taken with the 52nd pick. Corey Moore (89th), Anthony Midget (134th), and Shyrone Stith (243rd) also were selected. Some players who participated in the 2000 Sugar Bowl elected to delay their entry into the NFL Draft, either because they hoped to finish their education or because they were not three years removed from their high school graduations and thus were not eligible to enter the draft. Examples of these players included Florida State quarterback Chris Weinke, who returned to Florida State to complete his senior year, and Virginia Tech quarterback Michael Vick, who was not yet eligible to enter the draft in 2000 but was taken with the first overall selection in the 2001 NFL Draft. ### Subsequent seasons Florida State entered the 2000 college football season with hopes of following up its victory in the 2000 Sugar Bowl with another national championship. The Seminoles' performance differed slightly from 1999, as they lost a regular-season game to Miami, yet they still appeared in a third consecutive national championship game: the 2001 Orange Bowl. Unlike in 2000, the Seminoles emerged on the losing side of a 13–2 score. Virginia Tech, like Florida State, had hoped to reach the national championship game again, but an injury to star quarterback Michael Vick caused the Hokies to lose a regular-season game at third-ranked Miami, eliminating them from national championship contention. 
The following season, neither Florida State nor Virginia Tech competed for the national championship, but the two teams met in the 2002 Gator Bowl, their first matchup since the Sugar Bowl two years earlier. Following the Gator Bowl, Florida State next met Virginia Tech in the 2005 ACC Championship Game after the Hokies left the Big East for the Atlantic Coast Conference. Florida State won that contest, 27–22. Not until the 2007 college football season did Virginia Tech finally avenge its losses to the Seminoles, earning a 40–21 win en route to an Atlantic Coast Conference championship. It was Virginia Tech's first win in fifteen consecutive meetings between the two teams.
33,206,021
One Tree Hill (song)
1,168,263,784
1988 single by U2
[ "1987 songs", "1988 singles", "Commemoration songs", "Island Records singles", "Music videos directed by Phil Joanou", "Number-one singles in New Zealand", "Song recordings produced by Brian Eno", "Song recordings produced by Daniel Lanois", "Songs written by Adam Clayton", "Songs written by Bono", "Songs written by Larry Mullen Jr.", "Songs written by the Edge", "U2 songs" ]
"One Tree Hill" is a song by Irish rock band U2 and the ninth track on their 1987 album The Joshua Tree. In March 1988, it was released as the fourth single from the album in New Zealand and Australia, while "In God's Country" was released as the fourth single in North America. "One Tree Hill" charted at number one on the New Zealand Singles Chart and was the country's second-most-successful hit of 1988. The track was written in memory of Greg Carroll, a New Zealander the band first met in Auckland during the Unforgettable Fire Tour in 1984. He became very close friends with lead singer Bono and later served as a roadie for the group. Carroll was killed in July 1986 in a motorcycle accident in Dublin. After Carroll's tangi (funeral) in New Zealand, Bono wrote the lyrics to "One Tree Hill" in his memory. The lyrics reflect Bono's thoughts at the tangi and during his first night in New Zealand when Carroll took him up Auckland's One Tree Hill. They also pay homage to Chilean singer-songwriter and activist Víctor Jara. Musically, the song was developed in a jam session with producer Brian Eno. The vocals were recorded in a single take, as Bono felt incapable of singing them a second time. "One Tree Hill" was received favourably by critics, who variously described it as "a soft, haunting benediction", "a remarkable musical centrepiece", and a celebration of life. U2 delayed performing the song on the Joshua Tree Tour in 1987 because of Bono's fears over his emotional state. After its live debut on the tour's third leg and an enthusiastic reaction from audiences, the song was played occasionally for the rest of the tour and semi-regularly during the Lovetown Tour of 1989–1990. It has appeared only sporadically since then, and most renditions were performed in New Zealand. Performances in November 2010 on the U2 360° Tour were dedicated to the miners who died in the Pike River Mine disaster. During the Joshua Tree Tours 2017 and 2019 to commemorate the 30th anniversary of the album, "One Tree Hill" was performed at each show. ## Inspiration, writing, and recording U2 first visited Australia and New Zealand in 1984 to open The Unforgettable Fire Tour. After a 24-hour flight into Auckland, lead singer Bono was unable to adjust to the time difference between New Zealand and Europe. He left his hotel room during the night and met some people who showed him around the city. Greg Carroll was part of that group: he had met U2's production manager Steve Iredale and been offered a job helping the band for their upcoming concert on account of Greg's experience with local rock bands. They ended up taking Bono up One Tree Hill (Maungakiekie), one of the highest – and more spiritually significant to Māori people – of Auckland's largest volcanoes. Greg worked as a stage hand gently stopping people getting on stage, and was described as "this very helpful fellah running around the place". U2's manager Paul McGuinness thought Carroll was so helpful that he should accompany the band for the remainder of the tour. The group helped him obtain a passport, and he subsequently joined them on the road in Australia and the United States as their assistant. He became very close friends with Bono and his wife Ali Hewson, and following the conclusion of the tour, he worked for U2 in Dublin. On 3 July 1986, just before the start of the recording sessions for The Joshua Tree, Carroll was killed in a motorcycle accident while on a courier run. 
A car had pulled in front of him, and unable to stop in the rain, Carroll struck the side of the car and was killed instantly. The event shocked the entire band; drummer Larry Mullen Jr. said, "his death really rocked us – it was the first time anyone in our working circle had been killed." Guitarist the Edge said, "Greg was like a member of the family, but the fact that he had come under our wing and had travelled so far from home to be in Dublin to work with us made it all the more difficult to deal with." Bassist Adam Clayton described it as "a very sobering moment", saying, "it inspired the awareness that there are more important things than rock 'n' roll. That your family, your friends and indeed the other members of the band – you don't know how much time you've got left with them." Bono said, "it was a devastating blow. He was doing me a favour. He was taking my bike home." He later commented, "it brought gravitas to the recording of The Joshua Tree. We had to fill the hole in our heart with something very, very large indeed, we loved him so much." Accompanied by Bono, Ali, Mullen, and other members of the U2 organisation, Carroll's body was flown back to New Zealand and buried in the traditional Māori manner at Kai-iwi Marae near Whanganui, his hometown. Bono sang "Let It Be" and "Knockin' on Heaven's Door" for him at the funeral. Shortly after returning to Dublin, Bono wrote lyrics for a song about the funeral that he titled "One Tree Hill" after the hill he remembered from his visit to Auckland in 1984. The music was developed early in the recording sessions for The Joshua Tree. The Edge said, "We were jamming with Brian [Eno]. He was playing keyboards ... we just got this groove going, and this part began to come through. It's almost highlife, although it's not African at all ... the sound was for me at that time a very elaborate one. I would never have dreamt of using a sound like that before then, but it just felt right, and I went with it." Bono recorded his vocals in a single take, as he felt that he could not sing the lyrics a second time. The Edge used a Bond Electraglide guitar to play a solo with a "heavy fuzz" sound at the end of the song. Three musicians from Toronto (Dick, Paul, and Adele Armin) recorded string pieces for the song at Grant Avenue Studio in Hamilton, Ontario. Working under the supervision of producer Daniel Lanois and directed by the Edge during a six-hour phone call, the Armins recorded a piece created for the song using "sophisticated 'electro-acoustic' string instrument[s]" they had developed, called Raads. Dick Armin said, "[U2] were interested in using strings, but not in the conventional style of sweetening. They didn't want a 19th-century group playing behind them." Bono found the song so emotional that he was unable to listen to it after it had been recorded. In the song, Bono included the lyric: "Jara sang, his song a weapon in the hands of love / You know his blood still cries from the ground". This refers to the Chilean political activist and folk singer Víctor Jara, who became a symbol of the resistance against the Augusto Pinochet military dictatorship after he was tortured and killed during the 1973 Chilean coup d'état. Bono learned of Jara after meeting René Castro, a Chilean mural artist, while on Amnesty International's A Conspiracy of Hope tour. Castro had been tortured and held in a concentration camp for two years by the military because his artwork criticised the Pinochet-led regime that had seized power in 1973 during the coup. 
While purchasing a silkscreen of Martin Luther King Jr. that Castro had created, Bono noticed a print of Jara. He became more familiar with him after reading Una Canción Truncada (An Unfinished Song), written by Jara's widow Joan Turner. "One Tree Hill" and The Joshua Tree are dedicated to Carroll's memory. The track was recorded by Flood and Pat McCarthy, mixed by Dave Meegan, and produced by Lanois and Eno. ## Composition and theme "One Tree Hill" runs for 5 minutes, 23 seconds. It is played in common time at a tempo of 120 beats per minute. The song begins with a highlife-influenced riff by the Edge on guitar, which repeats in the background throughout the song. Percussion from drummer Larry Mullen Jr. enters after two seconds. At 0:07, a second guitar enters. At 0:15, Clayton's bass and Mullen's drums enter, and at 0:31, the verse chord progression of C–F–B–F–C is introduced. The first verse begins at 0:47. At 1:32, the song moves to the chorus, switching to a C–B–F–C chord progression. The second verse then begins at 1:49, and after the second chorus, a brief musical interlude begins at 2:36, in which the Edge's guitar is replaced by the Raad strings. The third verse begins at 3:07, and the Edge's guitar resumes at 3:38 in the chorus. A guitar solo begins at 4:16 and is played until the instrumentation comes to a close at 4:36. After two seconds of silence, the Raad strings fade in and Bono proceeds to sing the coda. The final lyric and strings fade out over the final six seconds. Clayton called it part of a trilogy of songs on the album, along with "Bullet the Blue Sky" and "Mothers of the Disappeared", that decry the involvement of the United States in the Chilean coup. McGuinness stated that the imagery in the song described the sense of tragedy felt by the band over Carroll's death. Colm O'Hare of Hot Press believed the Edge's guitar riff personified the lyric "run like a river runs to the sea". Thom Duffy of the Orlando Sentinel felt the song reflected the seduction of a lover. Richard Harrington of The Washington Post acknowledged the tribute to Carroll, adding that it demonstrated U2's belief that music could spur change. Like many other U2 songs, "One Tree Hill" can be interpreted in a religious manner. Hot Press editor Niall Stokes called it "a spiritual tour de force", saying "it is a hymn of praise and celebration which described the traditional Māori burial of their friend on One Tree Hill and links it poetically with themes of renewal and redemption." Beth Maynard, a Church rector from Fairhaven, Massachusetts, felt the song "vows faith in the face of loss, combining elegiac lines about a friend ... and the martyred Chilean activist and folk singer Victor Jara, with a subtle evocation of end-time redemption and a wrenching wail to God to send the pentecostal Latter Rain." Matt Soper, Senior Minister of the West Houston Church of Christ, believed the lyrics were an attempt by Bono to understand God's place in the world. Steve Stockman, a chaplain at Queen's University Belfast, felt that the song alluded to "transcendent places beyond the space and time of earth". Music journalist Bill Graham noted "the lyrics, with their reference to traditional Māori burial ceremonies on One Tree Hill, indicated that the band's faith didn't exclude an empathy with others' beliefs and rituals. Their Christianity wouldn't plaster over the universal archetypes of mourning." 
## Release and reception "One Tree Hill" was released on The Joshua Tree on 9 March 1987 as the ninth song on the album. Some CD pressings incorrectly split the tracks, with the song's coda included as part of the track for the following song, "Exit". In New Zealand and Australia, "One Tree Hill" was released as a 7-inch single in March 1988. The cover art (photographed by Anton Corbijn), sleeve (designed by Steve Averill), and B-sides ("Bullet the Blue Sky" and "Running to Stand Still") were identical to those used for U2's 1987 single "In God's Country", released only in North America. A cassette single, available only in New Zealand, was also released. The song reached number one on the New Zealand Singles Chart. "One Tree Hill" was included as a bonus track on the Japanese version of U2's 1998 compilation album, The Best of 1980–1990. The accompanying video compilation included the song's music video, directed by Phil Joanou, which features a live performance taken from a previously unreleased cut of U2's 1988 rockumentary Rattle and Hum. Additional live performances were released on the digital album Live from the Point Depot (2004) and the U2.com member-exclusive album U22 (2012). Select editions of the 30th anniversary release of The Joshua Tree in 2017 featured a remix of "One Tree Hill" by St Francis Hotel and a new mix of the song's reprise by Brian Eno. "One Tree Hill" was received favourably by critics. Hot Press editor Niall Stokes described it as one of U2's best tracks, calling it a "fitting tribute" to Carroll. The Toronto Star felt it was one of the best songs on the album. Steve Morse of The Boston Globe compared Bono's vocals at the song's conclusion to the passion of American soul singer Otis Redding, also noting that the coda was reminiscent of the hymn "Amazing Grace". Steve Pond of Rolling Stone called it "a soft, haunting benediction". Bill Graham of Hot Press said the song was "hopeful, not grim", describing the lyric "We run like a river to the sea" as "[musician Mike Scott's] metaphor recast in terms of eternal life and the Maori's own belief." He described the Edge's playing as "a loose-limbed guitar melody with both an African and a Hawaiian tinge", concluding by saying "despite its moving vocal coda, 'One Tree Hill' isn't sombre. It celebrates the life of the spirit not its extinction." Writing for The New York Times, John Rockwell felt that it was an example of U2 stretching their range, saying "the inclusion of musical idioms [is] never so overtly explored before on a U2 record, especially the gospel chorus of 'One Tree Hill'". Colin Hogg of The New Zealand Herald called it "a remarkable musical centrepiece", believing it to be the best song on the album. Colm O'Hare of Hot Press said it was "arguably the most poignant, emotionally-charged song U2 have ever recorded." He added that it was the "least instrumentally adorned song on the album, resplendent in a feeling of space and openness." McGuinness called it one of his favourite U2 songs. The American television drama One Tree Hill was named for the song after series creator Mark Schwahn was listening to The Joshua Tree when writing the idea for the show. "One Tree Hill" was also the name of the series finale, which featured the song in the episode's final scene. ## Live performances "One Tree Hill" made its live debut on 10 September 1987 in Uniondale, New York, the opening night of the third leg of the Joshua Tree Tour, where it opened the encore. 
The song had been left out of the set up to this point because Bono feared he would be unable to overcome his emotions in the live setting. Despite his fears, the song received an enthusiastic reaction from the audience. It was performed a further six times and then dropped from the show for a period of two months. It was revived in the main set on 17 November 1987 in Los Angeles, California, and played a further nine times on the tour. "One Tree Hill" was played occasionally on the Lovetown Tour, appearing at 19 of 47 concerts. The penultimate performance, on 31 December 1989, was broadcast live on radio to 21 countries throughout Europe as a New Year's Eve present from the band. "One Tree Hill" was absent during the majority of the Zoo TV Tour, only appearing as an extended snippet at the end of "One" at both concerts in New Zealand in 1993. It did not appear again until 24 November 2006 in Auckland, New Zealand, on the final leg of the Vertigo Tour. The band considered using it to close the concert, but tour designer Willie Williams voiced concern because it had not been performed in full since 1990. The song was performed before "Sometimes You Can't Make It on Your Own" in the main set instead. U2 performed it an additional three times on the tour. "One Tree Hill" was absent for the majority of the U2 360° Tour but was revived in November 2010 for two concerts in New Zealand, where it was dedicated to the miners who died in the Pike River Mine disaster; their names were displayed on the video screen during the song. Dedicating the song, Bono said, "we wrote it for Greg Carroll, whose family are with us tonight. But tonight it belongs to the miners of the West Coast Pike River." U2 played "One Tree Hill" on 25 March 2011, in Santiago, Chile, in a duet with Francisca Valenzuela, and they dedicated it to Victor Jara. It was also played during the encore at a show in Chicago on 5 July 2011 to mark the 25th anniversary of Carroll's death. In 2009, when asked about the likelihood of U2 performing the song, the Edge said, "it's one we kind of keep for special occasions, like playing New Zealand." Bono added, "it's a very special song that holds inside of it a lot of strong feelings, and I don't know if we're afraid of it or something, but we should be playing it more." McGuinness said that U2 found it difficult to play live. "One Tree Hill" returned to live performances during the Joshua Tree Tours 2017 and 2019, which featured 51 concerts in mid-2017 and 15 concerts in late 2019; each show included a full performance of the entire Joshua Tree album in running order. Each song from the album was accompanied by a video shown on the set's LED video screen that served as a backdrop to the band's performance. The video played during "One Tree Hill" featured images of a blood-red moon that faded into footage of Native American people. It was directed by Anton Corbijn and filmed in Lancaster, California, during a 14-hour shoot. Performances in 2017 were dedicated to singer Chris Cornell (who died in May 2017), to singer Chester Bennington (who died in July 2017), to the victims of the Manchester Arena bombing, and to the victims of the Orlando nightclub shooting. The opening concerts of the 2019 tour took place in Auckland, where the band paid tribute to Greg Carroll prior to performing "One Tree Hill", and an image of Carroll was shown on the screen at the end of the song. 
## Track listing ## Credits and personnel U2 - Bono – vocals - The Edge – guitar, backing vocals - Adam Clayton – bass guitar - Larry Mullen Jr. – drums, percussion Additional performers - Dick Armin – Raad cello - Paul Armin – Raad viola - Adele Armin – Raad violin Technical - Production – Brian Eno, Daniel Lanois - Recording – Flood - Recording assistance – Pat McCarthy - Mixing – Dave Meegan ## Charts ### Weekly charts ### Year-end charts ## Certifications ## See also - List of cover versions of U2 songs – One Tree Hill
17,970,061
1920–21 Gillingham F.C. season
1,155,096,673
null
[ "English football clubs 1920–21 season", "Gillingham F.C. seasons" ]
During the 1920–21 English football season, Gillingham F.C. competed in the Football League for the first time. The team had previously played in Division One of the Southern League, but in 1920 the Football League added the Third Division to its existing set-up by absorbing the entire Southern League Division One. The club appointed Robert Brown as manager, but the arrangement turned out to be only a casual one and he accepted another job before the season started. Under his replacement, John McMillan, Gillingham's results were poor, including a spell of over three months without a league victory, and at the end of the season they finished bottom of the league table. Gillingham also competed in the FA Cup, being eliminated in the sixth qualifying round. The team played 45 competitive matches, winning 10, drawing 12 and losing 23. Tommy Hall was the team's top goalscorer; he scored nine goals in league matches and two in the FA Cup. He was one of three players who tied for the most appearances made during the season: Hall, Jack Branfield and Jock Robertson each missed only one game. The highest attendance recorded at the club's home ground, Priestfield Road, was approximately 12,000 for league games against Southampton on 28 August and Millwall on 30 October. ## Background and pre-season Gillingham, founded in 1893, had played in the Southern League since the competition's formation in 1894, apart from when the league was suspended due to the First World War, but had achieved minimal success and had finished bottom of Division One in the 1919–20 season. At the annual general meeting (AGM) of the national Football League on 31 May 1920, the clubs in the existing two divisions voted to admit those in the Southern League's top division en masse to form the new Third Division. Initially it was unclear if Gillingham, by virtue of their last-place finish, would be relegated to the Southern League Division Two before this took effect and thus miss out on a place in the Football League; at the club's own AGM on 3 June, angry supporters demanded to know what the club's status would be for the coming season, but the board of directors was unable to give an answer. Shortly afterwards it was confirmed that Gillingham would indeed be entering the Football League. On 12 May, the club had appointed Robert Brown as its new manager, replacing George Collins. At the time, the impression was given that Brown had been appointed on a permanent basis, but at the AGM the directors admitted that he was only working on a temporary basis, although they hoped to persuade him to stay. Less than a week later, he resigned to take the manager's position at Sheffield Wednesday, leaving Gillingham without ever taking charge of a match. He was replaced by John McMillan, who was paid a weekly wage and assisted by Jim Kennedy as trainer. In preparation for the new season the club signed Wally Battiste from Grimsby Town, Tommy Hall from Newcastle United, Tom Thompson from Sunderland, Tom Gilbey from Darlington, Tom Baxter from Chelsea, Tom Sisson from Hucknall Byron, Clive Wigmore from Aston Villa, George Needham from Derby County, Andy Holt from Chesterfield Municipal and Archie Roe from Birmingham. Hall was signed for a four-figure transfer fee, the first time that Gillingham had paid such a sum to sign a player.
Only 6 out of the 39 players who had represented the club in the final Southern League season went on to make appearances in the Football League: Jock Robertson, Jack Branfield, Syd Gore, Joseph Griffiths, Donald McCormick and Arthur Wood, and of these only Robertson, Branfield and Wood remained regular starters. The team wore their usual black and white-striped shirts with white shorts and black socks. ## Third Division ### August–December The club's first Football League match was against Southampton at Gillingham's home ground, Priestfield Road; Sisson, Battiste, Baxter, Wigmore, Holt, Hall, Gilbey and Roe all made their debuts. In front of a crowd of approximately 12,000 fans, Gilbey scored the club's first Third Division goal, but the game ended in a 1–1 draw. Gilbey scored again four days later as Gillingham beat Reading 2–1 away from home to register their first victory. Gillingham lost 3–0 away to Southampton on 4 September, a game in which the correspondent for the Daily Herald described them as "hopelessly outclassed", but then again beat Reading, after which they were 7th out of 22 teams in the league table, only one point behind leaders Portsmouth. The team drew their next two games but were then heavily defeated away to Merthyr Town, losing 6–1. The Daily Telegraph's reporter stated that the size of the defeat was more down to Gillingham's poor defence than the quality of their opponents' attack. Needham made his debut in the defeat and would play in every game for the remainder of the season. Gillingham played Plymouth Argyle in both the last match of September and first of October and lost both games. Wood, Gillingham's top goalscorer in the previous season, played his first game of the season in the second of the two matches. He failed to score as Gillingham lost 1–0 to a goal in the last few minutes which the Weekly Dispatch's reporter described as "unexpected". Wood scored his first Football League goal for Gillingham a week later against Exeter City but his team lost for the fourth consecutive game, after which they had slipped to 19th in the table. Gillingham beat Exeter 2–1 at Priestfield Road on 16 October, but it would prove to be their last Third Division win for more than three months. They lost 4–0 away to Millwall on 23 October, after which the Dispatch's reporter noted that Millwall were considerably better than Gillingham, and ended the month with a goalless draw against the same opponents. The crowd for the latter game was reported as 12,000, tying with the game against Southampton in August for the largest recorded attendance of the season at Priestfield Road. For the third consecutive match, Gillingham failed to score a goal when they lost 1–0 away to Newport County in the first match of November. A 4–1 defeat at home to the same opponents a week later left Gillingham bottom of the table. Hall was absent from the team for the only time during the season on 27 November; his replacement Thomas Robinson scored Gillingham's second goal as they came back from two goals down to draw 2–2 against Portsmouth, but it would prove to be the last of his three appearances for the team. Gillingham drew 1–1 away to Swindon Town on 11 December but then began a run of five consecutive league defeats. A week after losing to Northampton Town in the FA Cup, Gillingham were defeated by the same opponents in the league, losing 5–2 at home on Christmas Day. They then lost away to Northampton two days later. 
Gillingham's final match of 1920 resulted in another heavy defeat as the team lost 5–0 away to Luton Town; the correspondent for the Daily Telegraph stated that, had it not been for the performance of goalkeeper Branfield, "Gillingham would have suffered a more severe reverse". At the end of December Gillingham remained bottom of the Third Division. ### January–April Gillingham's first match of 1921 was away to Watford on 1 January and resulted in a fourth consecutive league defeat. Roe, brought into the team for the first time since November, gave his team the lead but Watford scored three times to secure what the Dispatch's correspondent described as an easy victory. Two weeks later, the team lost by the same score at home to Brentford; both Branfield and Robertson were missing from the team, the only time that either was absent during the season. The losing run came to an end with a 3–3 draw against Brentford on 22 January; Needham scored two goals and Wood one. Seven days later the team won their first Third Division match for more than three months when a goal from Needham gave them a 1–0 win at home to Bristol Rovers. Despite the victory, Gillingham remained bottom of the division at the end of January. In February, Gillingham played four matches, which resulted in a draw and three defeats; the team only scored one goal in the four games. A week after beating Bristol Rovers at home, they lost 2–0 away to the same opponents, and then drew 0–0 at home to Norwich City. Gillingham played Norwich again on 19 February and lost 2–1, and ended the month with a 1–0 defeat at home to league leaders Crystal Palace. In a tactical change, Needham, who had played several games as a forward, moved back to a half-back position against Crystal Palace; Baxter, normally a half-back, was included in the team for the first time in over a month and played as centre-forward. After three games in which he did not score any goals he was dropped once more. Gillingham again played Crystal Palace in the first game of March; the score was 1–1 with approximately ten minutes remaining but the league leaders then scored three times to win the game. After three consecutive defeats, Gillingham beat Brighton & Hove Albion 1–0 on 12 March, but then failed to win any of the next five games. Over the Easter period the team played three games in four days, which resulted in a draw and two defeats. Gillingham began April with a 2–1 victory at home to Grimsby Town and followed it up with a 1–0 victory away to Queens Park Rangers a week later; it was the first time during the season that they had won two consecutive league games. Wood scored a goal in both games and did so again as the team drew 1–1 at home to Swindon Town on 13 April. The run of three league games without defeat was the longest the team had managed since the previous September, but they then lost consecutive matches to Queens Park Rangers and Swansea Town. Gillingham ended April with a 2–1 win at home to Swansea Town, Wood scoring the winning goal, his fourth in six games. The team's final match of the season was at home to Luton Town and resulted in a goalless draw. Gillingham finished the season bottom of the Third Division, two points below 21st-placed Brentford. Having finished bottom of the Southern League Division One in the seasons immediately before and after the First World War, the team had now finished last in their division for three consecutive seasons. 
### Match details
Key
- In result column, Gillingham's score shown first
- H = Home match
- A = Away match
- pen. = Penalty kick
- o.g. = Own goal

Results

### Partial league table

## FA Cup
Gillingham entered the 1920–21 FA Cup at the fourth qualifying round stage, where they were drawn to play near-neighbours Maidstone United of the Kent League. Maidstone's goalkeeper saved two penalty kicks by Battiste but a goal from Gilbey meant that Gillingham beat their non-League opponents 1–0. In the fifth qualifying round, Gillingham played another non-League team, Dulwich Hamlet of the Isthmian League, and won 2–1. In the sixth and final qualifying round, Gillingham's opponents were fellow Third Division team Northampton Town, who won 3–1 to eliminate Gillingham from the competition.

### Match details
Key
- In result column, Gillingham's score shown first
- H = Home match
- A = Away match
- pen. = Penalty kick
- o.g. = Own goal

Results

## Players
During the season, 26 players made at least one appearance for Gillingham. Robertson, Branfield and Hall made the most; each missed only one match. Three players made only one appearance each: McCormick, Alfred Milton and Ernest Ollerenshaw. It was each player's only appearance for Gillingham at Football League level, and in the case of Milton and Ollerenshaw the only game each played during their entire time with the club. Hall finished the season as the team's top scorer, with nine goals in the Third Division and two in the FA Cup; despite playing in fewer than a third of the team's games, Gilbey was the second highest-scoring player with eight goals in total.

## Aftermath
As a result of finishing bottom of the Third Division, Gillingham were required to apply for re-election to the Football League, but retained their place. They remained one of the strugglers in the division, which was renamed the Third Division South when a parallel Third Division North was introduced for the 1921–22 season, and it was not until the 1925–26 season that they managed to finish in the top half of the table. In 1938, Gillingham finished in the bottom two for the fifth time; on this occasion their application for re-election was unsuccessful and they were voted out of the league, returning to the Southern League.
530,420
Carnivàle
1,164,850,760
2003–2005 American television series
[ "2000s American drama television series", "2003 American television series debuts", "2005 American television series endings", "American fantasy television series", "Carnivàle", "Dark fantasy television series", "English-language television shows", "Fiction set in 1934", "Fiction set in 1935", "Fictional rivalries", "Great Depression television series", "HBO original programming", "Religious drama television series", "Serial drama television series", "Tarot in fiction", "Television series by 3 Arts Entertainment", "Television series by Home Box Office", "Television series set in the 1930s", "Television shows filmed in Santa Clarita, California", "Television shows involved in plagiarism controversies", "Television shows set in Oklahoma", "Television shows set in circuses", "Television shows set in the United States" ]
Carnivàle (/ˌkɑːrnɪˈvæl/) is an American television series set in the United States Dust Bowl during the Great Depression of the 1930s. The series, created by Daniel Knauf, ran for two seasons between 2003 and 2005. In tracing the lives of disparate groups of people in a traveling carnival, Knauf's story combined a bleak atmosphere with elements of the surreal in portraying struggles between good and evil and between free will and destiny. The show's mythology drew upon themes and motifs from traditional Christianity and gnosticism together with Masonic lore, particularly that of the Knights Templar order. Carnivàle was produced by HBO and aired between September 14, 2003, and March 27, 2005. Its creator, Daniel Knauf, also served as executive producer along with Ronald D. Moore and Howard Klein. Jeff Beal composed the original incidental music. Nick Stahl and Clancy Brown starred as Ben Hawkins and Brother Justin Crowe, respectively. The show was filmed in Santa Clarita, California, and nearby Southern California locations. Early reviews praised Carnivàle for style and originality but questioned the approach and execution of the story. The first episode set an audience record for an HBO original series, and the show drew stable ratings through the first season. When the series proved unable to sustain these ratings in its second season, it was cancelled. An intended six-season run was thus cut short by four seasons. In all, 24 episodes of Carnivàle were broadcast. The series won five Emmys in 2004 and received fifteen Emmy nominations across its run. The show received numerous other nominations and awards between 2004 and 2006. ## Episodes The two seasons of Carnivàle take place in the Depression-era Dust Bowl between 1934 and 1935, and consist of two main plotlines that slowly converge. The first involves a young man with strange healing powers named Ben Hawkins (Nick Stahl), who joins a traveling carnival when it passes near his home in Milfay, Oklahoma. Soon thereafter, Ben begins having surrealistic dreams and visions, which set him on the trail of a man named Henry Scudder, a drifter who crossed paths with the carnival many years before, and who apparently possessed unusual abilities similar to Ben's own. The second plotline revolves around a Father Coughlin-esque Methodist preacher, Brother Justin Crowe (Clancy Brown), who lives with his sister Iris (Amy Madigan) in California. He shares Ben's prophetic dreams and slowly discovers the extent of his own unearthly powers, which include bending human beings to his will and making their sins and greatest evils manifest as terrifying visions. Certain that he is doing God's work, Brother Justin fully devotes himself to his religious duties, not realizing that his ultimate nemesis Ben Hawkins and the carnival are inexorably drawing closer. ## Production ### Conception Daniel Knauf conceived the initial script for the show between 1990 and 1992 when he was unsatisfied with his job as a Californian health insurance broker and hoped to become a screenwriter. He had always been interested in carnivals and noted that this subject had rarely been dramatized on film. Knauf's experiences of growing up with a disabled father who was not commonly accepted as a normal human strongly informed the story and its treatment of freaks. Knauf named the intended feature film script Carnivàle, using an unusual spelling for a more outlandish look. Knauf had plotted the story's broad strokes as well as several plot details from early on, and knew the story's destination right up to the final scene.
However, the resulting 180-page long script was twice the length of a typical feature film script, and Knauf still felt that it was too short to do his story justice. He therefore shelved the screenplay as a learning experience. In the meantime, Hollywood studios rejected all but one of Knauf's other scripts, often for being "too weird." In the mid-1990s, Knauf met a few Writers Guild TV writers who encouraged him to revise Carnivàle as a TV series. Knauf turned the script's first act into a pilot episode, but, having no contacts in the television business, he was forced to shelve the project again and return to his regular job. A few years later, after realizing that his insurance career was not working out, he decided to give his screenwriting efforts a last chance by offering the Carnivàle pilot on his website. The script was subsequently forwarded to Howard Klein by Scott Winant, a mutual friend of the two men. After several meetings and conversations, Klein felt confident that Carnivàle would make a good episodic television series that could last for many years. Klein brought it to the attention of Chris Albrecht and Carolyn Strauss of HBO, who were immediately receptive. The network deemed Knauf too inexperienced in the television business to give him full control over the budget, and appointed Ronald D. Moore as showrunner. (Knauf replaced Moore after one season when Moore left for the reimagined Battlestar Galactica.) The pilot episode, which was filmed over a period of 21 days, served as the basis for additional tweaking of intended story lines. Long creative discussions took place among the writers and the network, leading to the postponement of the filming of the second episode for fourteen months. One major change was the addition of extra material for Brother Justin's side of the story. Brother Justin was originally conceived as a well-established preacher, and as a recurring character rather than a regular one. However, after perusing the preliminary version of the pilot, Knauf and the producers realized that there was no room for Justin to grow in a television series. Hence, it was decided to make Brother Justin an ordinary Methodist minister in a small town, setting him back in his career by about one or two years. Expanding Brother Justin's role opened new possibilities, and his sister Iris was created as a supporting character. Little was changed on Ben Hawkins' side except for the addition of the cootch (striptease) family; a Carnivàle consultant had elated the producers by calling attention to his research about families managing cootch shows in the 1930s. ### Format The Carnivàle story was originally intended to be a trilogy of "books", consisting of two seasons each. This plan did not come to fruition, as HBO canceled the show after the first two seasons. Each season consists of twelve episodes. Airing on HBO benefited Carnivàle in several ways. Because HBO does not rely on commercial breaks, Carnivàle had the artistic freedom to vary in episode length. Although the episodes averaged a runtime of 54 minutes, the episodes "Insomnia" and "Old Cherry Blossom Road" were 46 minutes and 59 minutes, respectively. HBO budgeted approximately US\$4 million for each episode, considerably more than most television series receive. ### Historical production design Carnivàle's 1930s' Dust Bowl setting required significant research and historical consultants to be convincing, which was made possible with HBO's strong financial backing. 
As a result, reviews praised the look and production design of the show as "impeccable," "spectacular" and as "an absolute visual stunner." In 2004, Carnivàle won four Emmys for art direction, cinematography, costumes, and hairstyling. To give a sense of the dry and dusty environment of the Dust Bowl, smoke and dirt were constantly blown through tubes onto the set. The actors' clothes were ragged and drenched in dirt, and Carnivàle had approximately 5,000 people costumed in the show's first season alone. The creative team listened to 1930s music and radio and read old Hollywood magazines to get the period's sound, language, and slang right. The art department had an extensive research library of old catalogs, among them an original 1934 Sears Catalog, which were purchased at flea markets and antique stores. The East European background of some characters and Asian themes in Brother Justin's story were incorporated into the show. Aside from the show's supernatural elements, a historical consultant deemed Carnivàle's historical accuracy to be excellent regarding the characters' lives and clothes, their food and accommodations, their cars and all the material culture. ### Filming locations Carnivàle's interiors were filmed at Santa Clarita Studios in Santa Clarita, California, while the show's many exterior scenes were filmed on location in Southern California. Scenes set in the fictional California town of Mintern, where the Season 1 stories about Brother Justin and Iris took place, were shot at Paramount Ranch in Agoura Hills. The carnival set itself was moved around the greater Southern California area, to movie ranches and to Lancaster, which stood in for the states of Oklahoma, Texas, and New Mexico. The permanent filming location of the carnival in Season 2 was Big Sky Ranch, which was also used for Brother Justin's new home in fictional New Canaan. ### Opening title sequence Carnivàle's opening title sequence was created by A52, a visual effects and design company based in Los Angeles, and featured music composed by Wendy Melvoin and Lisa Coleman. The opening title sequence won an Emmy for "Outstanding Main Title Design" in 2004. The production team of A52 had intended to "create a title sequence that grounded viewers in the mid-1930s, but that also allowed people to feel a larger presence of good and evil over all of time." In early 2003, A52 pitched their idea to Carnivàle executives, who felt that the company's proposal was the most creative for the series' concept. The actual production included scanned transparencies of famous pieces of artwork, each scanned transparency being up to 300 MB in size. The resulting images were photoshopped and digitally rendered. A last step involved stock footage clips being compiled and digitally incorporated into the sequence. The opening title sequence itself begins with a deck of Tarot cards falling into the sand, while the camera moves in and passes through one card into a separate world presenting layers of artwork and footage from iconic moments of the American Depression era; the camera then moves back out of a different card and repeats the procedure several times. The sequence ends with the camera shifting from the "Judgement" Tarot card to the "Moon" and the "Sun", identifying the Devil and God respectively, until the wind blows away all cards and the underlying sand to reveal the Carnivàle title artwork.
### Music Carnivàle features instrumental music composed by Jeff Beal, as well as many popular or obscure songs from the 1920s and 1930s, the time when Carnivàle's story takes place. However, "After the Ball," which was a major hit in the 1890s, is used to prominent effect at the close of season 1, episode 2. The main title was written by The Revolution members Wendy Melvoin and Lisa Coleman, and was released with selected themes by Jeff Beal on a Carnivàle television soundtrack by the record label Varèse Sarabande on December 7, 2004. Beal released tracks of Season 2 on his personal website. A complete list of music credits is available on the official HBO website. Jeff Beal's score is primarily acoustic-sounding electronics, but mixes bluegrass themes with atmospheric, rhythmic sounds. Bigger groups of strings support smaller ensembles of guitars, pianos, violins, cellos, and trumpets. The music sometimes uses ethnic instruments such as banjos, harmonicas, ukuleles, and duduks. Because HBO does not break individual episodes with commercials, Carnivàle's music is paced similarly to a movie, with character-specific leitmotifs from as early as the first episode. Characters are musically identified by solo instruments chosen for the character's ethnic background or nature. Some characters whose connections are only disclosed later in the series have intentionally similar themes. Different music is consciously used to represent the two different worlds of the story. Brother Justin's world features a more constructed orchestral sound, with religious music and instruments. On the other hand, the score of the carnival side is more deconstructed and mystical, especially when the carnival travels through the Dust Bowl and remote towns. For carnival scenes taking place in the cootch (striptease) show or in cities, however, contemporary pop music, blues, folk, and ethnic music is played. One of the most defining songs of Carnivàle is the 1920s song "Love Me or Leave Me" sung by Ruth Etting, which is used in several episodes to tie together characters in the two worlds thematically. ## Cast The plot of Carnivàle takes place in the 1930s Dust Bowl and revolves around the slowly converging storylines of a traveling carnival and a Californian preacher. Out of the 17 actors receiving star billing in the first season, 15 were part of the carnival storyline. The second season featured 13 main cast members, supplemented by several actors in recurring roles. Although such large casts make shows more expensive to produce, they give the writers more flexibility in story decisions. The backgrounds of most characters were fully developed before the filming of Carnivàle began but were not part of the show's visible structure. The audience therefore learned more about the characters only as a natural part of the story. Season 1's first storyline is led by Nick Stahl portraying the protagonist Ben Hawkins, a young Okie farmer who joins a traveling carnival. Michael J. Anderson played Samson, the diminutive manager of the carnival. Tim DeKay portrayed Clayton "Jonesy" Jones, the crippled chief roustabout. Patrick Bauchau acted as the carnival's blind mentalist Lodz, while Debra Christofferson played his lover, Lila the Bearded Lady. Diane Salinger portrayed the catatonic fortune teller Apollonia, and Clea DuVall acted as her tarot-card-reading daughter, Sofie. Adrienne Barbeau portrayed the snake charmer Ruthie, with Brian Turk as her son Gabriel, a strongman.
John Fleck played Gecko the Lizard Man, and Karyne and Sarah Steben appeared as the conjoined twins Alexandria and Caladonia. The cootch show Dreifuss family was played by Toby Huss and Cynthia Ettinger as Felix "Stumpy" and Rita Sue, and Carla Gallo as their daughter Libby. Amanda Aday portrayed their other daughter, Dora Mae Dreifuss, in a recurring role. John Savage played the mysterious Henry Scudder in several episodes, while Linda Hunt lent her voice to the mysterious Management. The second storyline is led by Clancy Brown portraying the primary antagonist, the Methodist minister Brother Justin Crowe. Amy Madigan played his sister Iris. Robert Knepper supported them as the successful radio host Tommy Dolan later in the first season, while Ralph Waite had a recurring role as Reverend Norman Balthus, Brother Justin's mentor. K Callan performed in a recurring role as Eleanor McGill, a parishioner who became devoted to Brother Justin after seeing his power firsthand. Several cast changes took place in Season 2, some of them planned from the beginning. John Fleck, Karyne Steben and her sister Sarah had made their last appearance in the first season's finale, while Patrick Bauchau's and Diane Salinger's status was reduced to guest-starring. Ralph Waite joined the regular cast. Several new characters were introduced in recurring roles, most notably John Carroll Lynch as the escaped convict Varlyn Stroud and Bree Walker as Sabina the Scorpion Lady. ### Casting The casting approach for Carnivàle was to cast the best available actors and to show the characters' realness as opposed to depending on freak illusions too much. Carnivàle's casting directors John Papsodera and Wendy O'Brien already had experience in casting freaks from previous projects. The producers generally preferred actors who were not strongly identified with other projects, but were willing to make exceptions such as for Adrienne Barbeau as Ruthie. The script for the pilot episode was the basis for the casting procedure, with little indication where the show went afterwards. This resulted in some preliminary casting disagreements between the creators and producers, especially for leading characters such as Ben, Brother Justin and Sofie. The character of Ben was always intended to be the leading man and hero of the series, yet he was also desired to display a youthful, innocent and anti-hero quality; Nick Stahl had the strongest consensus among the producers. The character of Sofie was originally written as more of an exotic gypsy girl, but Clea DuVall, a movie actor like Stahl, got the part after four auditions. Tim DeKay was cast as Jonesy because the producers felt he best portrayed a "very American-looking" baseball player of that period. One of the few actors who never had any real competition was Michael J. Anderson as Samson, whom Daniel Knauf had wanted as early as the initial meeting. ## Mythology Although almost every Carnivàle episode has a distinctive story with a new carnival setting, all episodes are part of an overarching good-versus-evil story that culminates and is resolved only very late in Season 2. The pilot episode begins with a prologue talking of "a creature of light and a creature of darkness" (also known as Avatars) being born "to each generation" preparing for a final battle. Carnivàle does not reveal its characters as Avatars beyond insinuation, and makes the nature of suggested Avatars a central question. Reviewers believed Ben to be a Creature of Light and Brother Justin a Creature of Darkness. 
Beyond the characters, the show's good-and-evil theme manifests in the religion of the series' period, the Christian military order of the Knights Templar, tarot divination, and in historical events like the Dust Bowl and humankind's first nuclear test. The writers had established a groundwork for story arcs, character biographies and genealogical character links before filming of the seasons began, but many of the intended clues remained unnoticed by viewers. While Ronald D. Moore was confident that Carnivàle was one of the most complicated shows on television, Daniel Knauf reassured critics that Carnivàle was intended to be a demanding show with a lot of subtext and admitted that "you may not understand everything that goes on but it does make a certain sense". Knauf provided hints about the show's mythological structure to online fandom both during and after the two-season run of Carnivàle, and left fans a production summary of Carnivàle's first season two years after cancellation. Matt Roush of TV Guide called Carnivàle "the perfect show for those who thought Twin Peaks was too accessible". The Australian said Carnivàle "seems to have been conceived in essentially literary terms" which "can sometimes work on the page but is deadly on the large screen, let alone a small one. It's almost like a biblical injunction against pretension on television." A reviewer admitted his temptation to dismiss the first season of Carnivàle as "too artsy and esoteric" because his lack of involvement prevented him from understanding "what the heck was going on, [which] can be a problem for a dramatic television series." TV Zone, however, considered Carnivàle "a series like no other and [...] the fact that it is so open to interpretation surprisingly proves to be one of its greatest strengths." Carnivàle was lauded for bringing "the hopelessness of the Great Depression to life" and for being among the first TV shows to show "unmitigated pain and disappointment", but reviewers were not confident that viewers would find the "slowly unfolding sadness" appealing for long or would have the patience or endurance to find out the meaning of the show. ## Cancellation At the time, HBO made its commitments only one year at a time, and a third season would have meant opening up a new two-season book in Daniel Knauf's six-year plan, including the introduction of new storylines for current and new characters and further clarification and elaboration of the show's mythology. HBO announced the show's cancellation on May 11, 2005. HBO's president Chris Albrecht said the network would have reconsidered if the producers had been willing to lower the cost of an episode to US\$2 million, but the running costs of the sizable cast, the all-on-location shooting and the number of episodes per season were too great. The cancellation left several plot lines unfinished, and outraged loyal viewers organized petitions and mailing drives to get the show renewed. This generated more than 50,000 emails to the network in a single weekend. Show creator Daniel Knauf was unconvinced of the success of such measures, but explained that proposed alternatives like selling Carnivàle to a competing network or spinning off the story were not possible because HBO owned Carnivàle's plot and characters.
At the same time, Knauf was hopeful that, given a strong enough fan base, HBO might reconsider the show's future and allow the continuation of the show in another medium; but because of the amount of unused story material he still had, Knauf did not favor finishing the Carnivàle story with a three-hour movie. Knauf did not release a detailed run-down of intended future plots to fans, explaining that his stories are a collaboration of writers, directors and actors alike. He and the producers did, however, answer a few basic details about the immediate fate of major characters who were left in near-fatal situations in the final episode of Season 2. Knauf additionally provided in-depth information regarding the underlying fictional laws of nature that the writers had not been able to fully explore in the first two seasons. June 2007 however marked the first time that a comprehensive work of detailed character backgrounds was made public. Following a fundraising auction, Knauf offered fans a so-called "Pitch Document," a summary of Carnivàle's first season. This document was originally written in 2002 and 2003 to give the writers and the studio an idea about the series' intended plot, and answered many of the show's mysteries. ## Marketing and merchandise ### Pre-broadcast marketing HBO reportedly invested in Carnivàle's promotion as much as for any of its primetime series launches. The series' unconventional and complex narrative made the network deviate from its traditional marketing strategies. Teaser trailers were inserted on CD-ROMs into Entertainment Weekly issues to draw attention to the show's visual quality. 30-second TV spots were aired in national syndication, cable and local avails for four weeks before the show's premiere instead of the usual seven days. The historical context of Carnivàle was deliberately emphasized in the show's print art, which depicted the 17-member cast surrounding a carnival truck. This image was accompanied by a tagline of the show's good versus evil theme: "Into each generation is born a creature of light and a creature of darkness." These measures were hoped to be backed up by positive critical reviews. To give ratings an initial boost, HBO placed the premiere of Carnivàle directly after the series finale of the successful Sex and the City. The series continued to receive extensive online advertisement for almost its entire run. ### Games Personalized and interactive online games inspired by tarot divination were created for Carnivàle's internet presence. The official HBO website collaborated with RealNetworks to offer FATE: The Carnivàle Game, a downloadable game made available for trial and for purchase. ### DVDs Carnivàle: The Complete First Season was released as a widescreen six-disc Region 1 DVD box set on December 7, 2004, one month before the premiere of the second season. It was distributed by HBO Home Video and contained three audio commentaries and a behind-the-scenes featurette. The outer slipcover of the Region 1 set was made of a thick cardboard to mimic a bound book. The same set was released with less elaborate packaging in Region 2 on March 7, 2005, and in Region 4 on May 11, 2005. Carnivàle: The Complete Second Season was released as a widescreen six-disc Region 1 DVD box set on July 18, 2006, in Region 2 on August 7, 2006, and in Region 4 on October 4, 2006. 
Each of these releases was distributed by HBO Home Video and contained three audio commentaries, on-stage interviews of the cast and producers, a featurette about the mythology of the series, and four short "Creating the Scene" segments about the concept, inspiration and execution process. ## Reception ### Ratings Carnivàle aired on HBO on a Sunday 9:00 pm timeslot during its two-season run between 2003 and 2005. "Milfay", Carnivàle's pilot episode, drew 5.3 million viewers for its premiere on September 14, 2003. This marked the best ever debut for an HBO original series at the time, caused in part by the established HBO series Sex and the City being Carnivàle's lead-in. This record was broken on March 21, 2004, by HBO series Deadwood, which debuted with 5.8 million viewers as the lead-out of The Sopranos. Viewership dropped to 3.49 million for Carnivàle's second episode but remained stable for the remainder of the season. The final episode of season one finished with 3.5 million viewers on November 30, 2003. Season one averaged 3.54 million viewers and a household rating of 2.41. Viewership for the second-season premiere on January 9, 2005, was down by two-thirds to 1.81 million. The ratings never recovered to their first-season highs, although the season two finale experienced an upswing with 2.40 million viewers on March 27, 2005. Season 2 averaged 1.7 million viewers, not enough to avert an imminent cancellation. ### Critical reviews Many early reviews gave Carnivàle good marks but also said its unique characters and story might prevent it from becoming a huge mainstream audience success. Daily Variety TV editor Joseph Adalian predicted that "it will get mostly positive reviews but some people will be put off by the general weirdness of the show." Phil Gallo of Variety described Carnivàle as "an absolute visual stunner with compelling freak show characters—but the series unfortunately takes a leisurely approach toward getting to a point," and Eric Deggans of the St. Petersburg Times suggested that "it's as if executives at the premium cable network want to see how far they can slow a narrative before viewers start tossing their remotes through the screen". James Poniewozik of Time called the first three episodes "frustrating" as well as "spellbinding." Amanda Murray of BBC said "With so little revealed, it's almost impossible to pass judgment on the show—it's hard to tell if this is just good, or going to be great." Later reviews were able to judge the series based on full seasons. While the acting, set design, costuming, art direction and cinematography continued to be praised, some reviewers disfavored the writing, especially of Season 1, saying "the plot momentum is often virtually non-existent" or as "sometimes gripping but mostly boring." Other reviewers pointed out that Carnivàle may "demand more from its audience than many are willing to invest. [...] Without paying close attention, it's tempting to assume that the show is unnecessarily cryptic and misleading." Carnivàle's story was surveyed as long and complex, "and if you don't start from the beginning, you'll be completely lost." IGN DVD's Matt Casamassina, however, praised the show in two reviews, writing that the "gorgeously surreal" first season "dazzles with unpredictable plot twists and scares", and the "extraordinary" second season was "better fantasy – better entertainment, period – than any show that dares to call itself a competitor." 
A significant portion of reviews drew parallels between Carnivàle and David Lynch's 1990s mystery TV series Twin Peaks, a show in which Carnivàle actor Michael J. Anderson had previously appeared. Knauf did not deny a stylistic link and made comparisons to John Steinbeck's novel The Grapes of Wrath. When Lost began to receive major critical attention, Carnivàle and its type of mythological storytelling were compared to Lost's story approach in several instances. Critical opinion remained divided about Carnivàle in the years after the show's cancellation. Alessandra Stanley of the Australian newspaper The Age remembers Carnivàle as a "smart, ambitious series that move[s] unusual characters around an unfamiliar setting imaginatively and even with grace, but that never quite quit the surly bonds of serial drama." Variety's Brian Lowry remembers the show as "largely a macabre fantasy" that eventually suffered from "its own bleakness and eccentricities". The A.V. Club dwelled on Carnivàle's cliffhanger ending in a piece on unanswered TV questions and called the show "a fantastically rich series with a frustratingly dense mythology". ### Fandom Like other cult television shows, Carnivàle gained a respectable following of dedicated viewers. Carnivàle fans referred to themselves as "Carnies" or "Rousties" (roustabouts), terms adopted from the show. Carnivàle's complexity and subliminal mythology spawned dedicated fansites, although most discussion took place on independent internet forums. Show creator Daniel Knauf actively participated in online fandom and offered story- and mythology-related clues. He also gave insight into reasons for Carnivàle's cancellation on a messageboard before speaking to the press. One year after Carnivàle's cancellation, a major Carnivàle convention called CarnyCon 2006 Live! was organized by fans. It took place in Woodland Hills, California on August 21–23, 2006. Many of the show's cast and crew attended the event and participated in discussion panels, which were recorded and made available on DVD afterwards. ### Awards Despite its short two-season run, Carnivàle received numerous awards and nominations. The show's inaugural season received nominations for seven Emmy Awards in 2004, winning five including "Outstanding Art Direction For A Single-camera Series" and "Outstanding Costumes For A Series" for the pilot episode "Milfay", "Outstanding Cinematography For A Single-Camera Series" for the episode "Pick A Number", "Outstanding Hairstyling For A Series" for the episode "After the Ball Is Over", and "Outstanding Main Title Design". In 2005, the second season received eight further Emmy nominations without a win. Other awards include but are not limited to:
- Win – Artios Award: "Best Casting for TV, Dramatic Pilot", 2004
- Win – VES Award: "Outstanding Special Effects in Service to Visual Effects in a Televised Program, Music Video or Commercial", 2004
- Win – Costume Designers Guild Award: "Excellence in Costume Design for Television – Period/Fantasy", 2005
- Nominated – two Golden Reel Awards, 2003
- Nominated – two Saturn Awards, 2004
- Nominated – two VES Awards, 2004
- Nominated – Costume Designers Guild Award, 2005

### International reception and broadcasters HBO president Chris Albrecht said Carnivàle was "not a big show for foreign [distribution]," but did not go into more detail. Reviews however indicate that the show's cryptic mythology and inaccessibility to the casual viewer were major factors.
Nevertheless, Carnivàle was sold to several foreign networks and was distributed to HBO channels abroad. The DVD releases of Carnivàle extended the availability of the show further. ## Lawsuit On June 9, 2005, a lawsuit was filed in the United States District Court for the Northern District of California by Los Angeles writer Jeff Bergquist. He claimed that the creators of Carnivàle did not originate the idea for the show, but rather stole it from his unpublished novel Beulah, a quirky drama set amid a traveling carnival during the Depression that Bergquist had been working on since the 1980s. Bergquist sought both monetary damages and an injunction preventing HBO from distributing or airing Carnivàle any further. HBO and Daniel Knauf denied the claims of copyright infringement as having "absolutely no merit."
27,900,561
Livyatan
1,162,099,221
Extinct genus of sperm whale from the Miocene epoch
[ "Fossil taxa described in 2010", "Fossils of Peru", "Herman Melville", "Miocene cetaceans", "Miocene mammals of Africa", "Miocene mammals of South America", "Neogene Peru", "Pisco Formation", "Prehistoric cetacean genera", "Prehistoric toothed whales", "Sperm whales" ]
Livyatan is an extinct genus of macroraptorial sperm whale containing one known species: L. melvillei. The genus name was inspired by the biblical sea monster Leviathan, and the species name by Herman Melville, the author of the famous novel Moby-Dick about a white bull sperm whale. It is mainly known from the Pisco Formation of Peru during the Tortonian stage of the Miocene epoch, about 9.9–8.9 million years ago (mya); however, finds of isolated teeth from other locations such as Chile, Argentina, the United States (California), South Africa and Australia imply that either it or a close relative survived into the Pliocene, around 5 mya, and may have had a global presence. It was a member of a group of macroraptorial sperm whales (or "raptorial sperm whales") and was probably an apex predator, preying on whales, seals and so forth. As is characteristic of raptorial sperm whales, Livyatan had functional, enamel-coated teeth on the upper and lower jaws, as well as several features suitable for hunting large prey. Livyatan's total length has been estimated to be about 13.5–17.5 m (44–57 ft), comparable to that of the modern sperm whale (Physeter macrocephalus), making it one of the largest predators known to have existed. The largest teeth of Livyatan measured 36.2 cm (1.19 ft), and are the largest biting teeth of any known animal, excluding tusks. It is distinguished from the other raptorial sperm whales by the basin on the skull spanning the length of the snout. The spermaceti organ contained in that basin is thought to have been used in echolocation and communication, or for ramming prey and other sperm whales. The whale may have interacted with the large extinct shark megalodon (Otodus megalodon), competing with it for a similar food source. Its extinction was probably caused by a cooling event at the end of the Miocene that reduced its food populations. The geological formation where the whale has been found has also preserved a large assemblage of marine life, such as sharks and marine mammals. ## Taxonomy ### Research history In November 2008, the holotype specimen of L. melvillei, MUSM 1676, consisting of a partially preserved skull along with teeth and the lower jaw, was discovered in the coastal desert of Peru in the sediments of the Pisco Formation, 35 km (22 mi) southwest of the city of Ica. Klaas Post, a researcher for the Natural History Museum Rotterdam in the Netherlands, stumbled across the fossils on the final day of a field trip. The fossils were prepared in Lima, and are now part of the collection of the Museum of Natural History, Lima, of the National University of San Marcos. The discoverers originally assigned, in July 2010, the English name of the biblical monster, Leviathan, to the whale as Leviathan melvillei. However, the scientific name Leviathan was already a junior synonym of the mastodon (Mammut), so, in August 2010, the authors rectified this situation by coining a new genus name for the whale, Livyatan, from the original Hebrew name of the monster. The species name melvillei is a reference to Herman Melville, author of the book Moby-Dick, which features a gigantic sperm whale as the main antagonist. The first Livyatan fossils from Peru were initially dated to around 13–12 million years ago (mya) in the Serravallian Age of the Miocene, but this was revised to 9.9–8.9 mya in the Tortonian Age of the Miocene.
#### Additional specimens During the late 2010s and 2020s, fossils of large isolated sperm whale teeth were reported from various Miocene and Pliocene localities, mostly in the Southern Hemisphere. These teeth have been identified as being of similar size and shape to those of the L. melvillei holotype and may belong to species of Livyatan. However, authors commonly do not identify such teeth conclusively as a species of Livyatan, instead opting to assign an open nomenclature in which the biological classifications of the specimens are restricted to comparisons or affinities with Livyatan. This is mostly because isolated teeth tend not to be informative enough to be identified at the species level, meaning that there is some undeterminable possibility that they belong to an undescribed close relative of Livyatan rather than Livyatan itself. In 2016 in Beaumaris Bay, Australia, a large sperm whale tooth measuring 30 cm (1 ft), specimen NMV P16205, was discovered in Pliocene strata by a local named Murray Orr, and was nicknamed the "Beaumaris sperm whale" or the "giant sperm whale". The tooth was donated to Museums Victoria at Melbourne. Though it has not been given a species designation, the tooth looks similar to those of L. melvillei, indicating that its owner was a close relative. The tooth is dated to around 5 mya, and so is younger than the L. melvillei holotype by around 4 or 5 million years. In 2018, palaeontologists led by David Sebastian Piazza, while revising the collections of the Bariloche Paleontological Museum and the Municipal Paleontological Museum of Lamarque, uncovered two incomplete sperm whale teeth cataloged as MML 882 and BAR-2601 that were recovered from the Saladar Member of the Gran Bajo del Gualicho Formation in the Río Negro Province of Argentina, a deposit that dates to between around 20 and 14 mya. The partial teeth measure 142 millimetres (6 in) and 178 millimetres (7 in) in height, respectively. Anatomical analyses of the specimens found that most of their characteristics are identical to those of L. melvillei except for width, with both teeth being smaller in diameter. Because of this, along with only isolated teeth being available, the palaeontologists chose to assign an open nomenclature, identifying both specimens as aff. Livyatan sp. In 2019, palaeontologist Romala Govender reported the discovery of two large sperm whale teeth from Pliocene deposits near the Hondeklip Bay village of Namaqualand in South Africa. The pair of teeth, which are stored in the Iziko South African Museum and cataloged as SAM-PQHB-433 and SAM-PQHB-1519, measure 325.12 millimetres (13 in) and 301.2 millimetres (12 in) in height, respectively, the latter having its crown missing. Both teeth have open pulp cavities, indicating that both whales were young. The teeth are very similar in shape and size to the mandibular teeth of the L. melvillei holotype, and were identified as cf. Livyatan. Like the Beaumaris specimen, the South African teeth are dated to around 5 mya. In 2023, graduate student Kristin Watmore and paleontologist Donald Prothero reported in a preprint a giant sperm whale tooth identified as cf. Livyatan, discovered in Mission Viejo, California, during housing development in the 1980s and '90s. The tooth, held in the Orange County Paleontological Collection and cataloged as OCPC 3125/66099, was incomplete but nevertheless measured at least 250 millimetres (10 in) in length and 86 millimetres (3 in) in diameter.
Due to poor geographic recording at the time of its discovery, the exact stratigraphic locality was unknown, but it was reported to have come from a zone that contains both the mid-Miocene Monterey Formation and the younger Capistrano Formation, the latter dating between 6.6 and 5.8 mya. The authors found the preservation of the tooth to be more consistent with Capistrano Formation fossils. The area where part of the tooth had broken off revealed layers of cementum and dentin with thicknesses within the known range of L. melvillei teeth. OCPC 3125/66099 represented the first evidence that Livyatan or Livyatan-like whales were not restricted to the Southern Hemisphere, and suggested a possibly global distribution for these cetaceans. ### Phylogeny Livyatan was part of a fossil stem group of hyper-predatory sperm whales commonly known as macroraptorial sperm whales, or raptorial sperm whales, alongside the extinct whales Brygmophyseter, Acrophyseter and Zygophyseter. This group is known for having large, functional, enamel-coated teeth in both the upper and lower jaws, which were used in capturing large prey. Conversely, the modern sperm whale (Physeter macrocephalus) lacks teeth in the upper jaw, and the ability to use its teeth to catch prey. Livyatan belongs to a different lineage from the other raptorial sperm whales, and its increase in size and the development of the spermaceti organ, an organ that is characteristic of sperm whales, are thought to have evolved independently of the other raptorial sperm whales. The large teeth of the raptorial sperm whales either evolved once in the group, from a basilosaurid-like common ancestor, or independently in Livyatan. The large temporal fossa in the skull of raptorial sperm whales is thought to be a plesiomorphic feature, that is, a trait inherited from a common ancestor. Since the teeth of foetal modern sperm whales (Physeter macrocephalus) have enamel on them before being coated with cementum, it is thought that the enamel is also an ancient characteristic (basal). The appearance of raptorial sperm whales in the fossil record coincides with the diversification of baleen whales in the Miocene, implying that they evolved specifically to exploit baleen whales. It has also been suggested that the raptorial sperm whales should be placed into the subfamily Hoplocetinae, alongside the genera Diaphorocetus, Idiorophus, Scaldicetus and Hoplocetus, which are known from the Miocene to the lower Pliocene. However, most of these taxa remain too fragmentary or have been used as wastebasket taxa for non-diagnostic material of stem physeteroids. This subfamily is characterized by robust, enamel-coated teeth. The cladogram below is modified from Lambert et al. (2017) and represents the phylogenetic relationships between Livyatan and other sperm whales, with genera identified as macroraptorial sperm whales in bold. ## Description The body length of Livyatan is unknown since only the holotype skull is preserved. Lambert and colleagues estimated the body length of Livyatan using Zygophyseter and modern sperm whales as a guide. The authors opted to use the relationship between the bizygomatic width (the distance between the opposite zygomatic processes) of the skull and body length because rostrum length is variable in modern sperm whales and the rostrum of Livyatan is proportionally shorter. Doing so produced length estimates of 13.5 m (44 ft) when using the modern sperm whale and 16.2–17.5 m (53–57 ft) when using Zygophyseter, as illustrated by the scaling sketch below.
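The scaling logic behind such figures can be shown with a minimal Python sketch: an unknown animal's length is taken as a reference animal's length multiplied by the ratio of their bizygomatic skull widths. This is a simplified version that assumes straightforward isometric scaling, and the reference measurements below are hypothetical placeholders rather than values reported in the text or by Lambert and colleagues, whose published estimates come from their own scaling relationships.

```python
# Illustrative sketch only: simple isometric scaling from a reference animal.
# The measurements below are placeholder assumptions for demonstration, not
# data from the study described in the text.

def scale_length(ref_length_m: float, ref_bizygomatic_m: float,
                 target_bizygomatic_m: float) -> float:
    """Estimate body length by assuming it scales linearly with bizygomatic skull width."""
    return ref_length_m * (target_bizygomatic_m / ref_bizygomatic_m)

# Hypothetical reference values (NOT figures from the paper):
references = {
    "modern sperm whale": {"length_m": 16.0, "bizygomatic_m": 2.2},
    "Zygophyseter": {"length_m": 6.5, "bizygomatic_m": 0.7},
}
livyatan_bizygomatic_m = 1.9  # placeholder width for the holotype skull

for name, ref in references.items():
    estimate = scale_length(ref["length_m"], ref["bizygomatic_m"], livyatan_bizygomatic_m)
    print(f"Length estimate using {name} as reference: {estimate:.1f} m")
```

Using two different reference taxa, as here, also illustrates why the published estimate is a range rather than a single value: the body proportions of the chosen reference strongly influence the result.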
It has been estimated to weigh 57 tonnes (62.8 short tons) based on the length estimate of 17.5 m (57 ft). By comparison, the modern sperm whale measures on average 11 m (36 ft) for females and 16 m (52 ft) for males, with some males reaching up to 20.7 m (68 ft) long. The large size was probably an anti-predator adaptation, and allowed it to feed on larger prey. Livyatan is the largest fossil sperm whale discovered, and was also one of the biggest-known predators, having the largest bite of any tetrapod. ### Skull The holotype skull of Livyatan was about 3 m (9.8 ft) long. Like other raptorial sperm whales, Livyatan had a wide gap in between the temporal fossae on the sides of the skull and the zygomatic processes on the front of the skull, indicating a large space for holding strong temporal muscles, which are the most powerful muscles between the skull and the jaw. The snout was robust, thick and relatively short, which allowed it to clamp down harder and better handle struggling prey. The left and right premaxillae on the snout probably did not intersect at the tip of the snout, though the premaxillae took up most of the front end of the snout. Unlike in the modern sperm whale, the premaxillae reached the sides of the snout. The upper jaw was thick, especially midway through the snout. The snout was asymmetrical, with the right maxilla in the upper jaw becoming slightly convex towards the back of the snout, and the left maxilla becoming slightly concave towards the back of the snout. The vomer reached the tip of the snout, and was slightly concave, decreasing in thickness from the back to the front. A sudden thickening in the middle-left side of the vomer may indicate the location of the nose plug muscles. Each mandible in the lower jaw was higher than it was wide, with a larger gap in between the two than in the modern sperm whale. The mandibular symphysis, which connects the two halves of the mandible in the middle of the lower jaw, was unfused. The condyloid process, which connects the lower jaw to the skull, was located near the bottom of the mandible, as in other sperm whales. #### Teeth Unlike the modern sperm whale, Livyatan had functional teeth in both jaws. The wear on the teeth indicates that the teeth sheared past each other while biting down, meaning it could bite off large portions of flesh from its prey. Also, the teeth were deeply embedded into the gums and could interlock, which were adaptations for holding struggling prey. None of the teeth of the holotype were complete, and none of the back teeth were well-preserved. The lower jaw contained 22 teeth, and the upper jaw contained 18 teeth. Unlike other sperm whales with functional teeth in the upper jaw, none of the tooth roots were entirely present in the premaxilla portion of the snout, being at least partially in the maxilla. Consequently, its tooth count was lower than that of those sperm whales, and, aside from the modern dwarf (Kogia sima) and pygmy (K. breviceps) sperm whales, it had the lowest tooth count in the lower jaw of any sperm whale. The most robust teeth in Livyatan were the fourth, fifth and sixth teeth in each side of the jaw. The well-preserved teeth all had a height greater than 31 cm (1 ft), and the largest teeth of the holotype were the second and third on the left lower jaw, which were calculated to be around 36.2 cm (1.2 ft) high. The first right tooth was the smallest at around 31.5 cm (1 ft).
The Beaumaris sperm whale tooth measured around 30 cm (1 ft) in length, and is the largest fossil tooth discovered in Australia. These teeth are thought to be among the largest of any known animal, excluding tusks. Some of the lower teeth have been shown to bear a facet corresponding to where the jaws close, which may have helped the largest teeth fit properly inside the jaw. In the front teeth, the tooth diameter decreased towards the base. This was the opposite for the back teeth, and the biggest diameters for these teeth were around 11.1 cm (4.4 in) in the lower jaw. All teeth featured a rapid decrease in diameter towards the tip of the tooth, which was probably in part due to wear throughout their lifetimes. The curvature of the teeth decreased from front to back, and the lower teeth were more curved at the tips than the upper teeth. The front teeth projected forward at a 45° angle, and, as in other sperm whales, cementum was probably added onto the teeth throughout the animal's lifetime. All tooth sockets were cylindrical and single-rooted. The tooth sockets increased in size from the first to the fourth and then decreased, the fourth being the largest at around 197 mm (7.8 in) in diameter in the upper jaws, which is the largest of any known whale species. The tooth sockets were smaller in the lower jaw than they were in the upper jaw, and they were circular in shape, except for the front sockets which were more oval. #### Basin The fossil skull of Livyatan had a curved basin, known as the supracranial basin, which was deep and wide. Unlike other raptorial sperm whales, but much like in the modern sperm whale, the basin spanned the entire length of the snout, causing the entire skull to be concave on the top rather than creating a snout as seen in Zygophyseter and Acrophyseter. The supracranial basin was the deepest and widest over the braincase, and, unlike other raptorial sperm whales, it did not overhang the eye socket. It was defined by high walls on the sides. The antorbital notches, which are usually slit-like notches on the sides of the skull right before the snout, were inside the basin. A slanting crest on the temporal fossa directed towards the back of the skull separated the snout from the rest of the skull, and was defined by a groove starting at the antorbital processes on the cheekbones. The basin had two foramina in the front, as opposed to the modern sperm whale which has one foramen on the maxilla, and to the modern dwarf and pygmy sperm whales which have several in the basin. The suture in the basin between the maxilla and the forehead had an interlocking pattern. ## Palaeobiology ### Hunting Livyatan was an apex predator, and probably had a profound impact on the structuring of Miocene marine communities. Using its large and deeply rooted teeth, it is likely to have hunted large prey near the surface, its diet probably consisting mainly of medium-sized baleen whales ranging from 7 to 10 m (23.0–32.8 ft) in length. It probably also preyed upon sharks, seals, dolphins and other large marine vertebrates, occupying a niche similar to that of the modern killer whale (Orcinus orca). It was contemporaneous with and occupied the same region as the otodontid shark O. megalodon, which was likely also an apex predator, implying competition over their similar food sources. It is assumed that Livyatan's tactics for hunting whales were similar to those of the modern killer whale, pursuing prey to wear it out and then drowning it.
Modern killer whales work in groups to isolate and kill whales, but, given its size, Livyatan may have been able to hunt alone. Isotopic analysis of enamel from a tooth from Chile revealed that this individual likely operated at latitudes south of 40°S. Isotopic analyses of contemporary baleen whales in the same formation show that this Livyatan was not commonly feeding on them, indicating it probably did not exclusively eat large prey, though it may have targeted baleen whales from higher latitudes. ### Spermaceti organ The supracranial basin in its head suggests that Livyatan had a large spermaceti organ, a series of oil and wax reservoirs separated by connective tissue. The uses for the spermaceti organ in Livyatan are unknown. Much like in the modern sperm whale, it could have been used in the process of biosonar to generate sound for locating prey. It is possible that it was also used for acoustic displays, such as communication between individuals. It may have been used for acoustic stunning, which would have caused the bodily functions of a target animal to shut down from exposure to the intense sounds. Another theory holds that the enlarged forehead created by the spermaceti organ was used in sperm whales, including Livyatan and the modern sperm whale, by males head-butting each other while fighting for females during the mating season. It may also have been used to ram prey; in support of this, there have been two reports of modern sperm whales attacking whaling vessels by ramming into them, and the organ is disproportionately larger in male modern sperm whales. An alternate theory is that sperm whales, including Livyatan, can alter the temperature of the wax in the organ to aid in buoyancy. Lowering the temperature increases the density of the wax so that it acts as a weight for deep-sea diving, and raising the temperature decreases the density so that it pulls the whale towards the surface. ## Palaeoecology Fossils conclusively identified as L. melvillei have been found in Peru and Chile. However, additional isolated large sperm whale teeth from other locations including California, Australia, Argentina and South Africa have been identified as a species or possible close relative of Livyatan. On the basis of these fossils, the distribution of Livyatan was likely widespread. Prior to 2023, paleontologists believed that the genus was restricted to the Southern Hemisphere. The warmer waters around the equator have been known to be a climatic barrier for numerous cetaceans since Neogene times, and it was then hypothesized that Livyatan may have been among the cetaceans unable to cross the equatorial barrier. However, collecting bias was another explanation, given the apparent rarity and poor fossil record of Livyatan; this is now supported by the Northern Hemisphere occurrence in California. The holotype of L. melvillei is from the Tortonian stage of the Upper Miocene, 9.9–8.9 mya, in the Pisco Formation of Peru, which is known for its well-preserved assemblage of marine vertebrates. Among the baleen whales found, the most common was an undescribed species of cetotheriid whale measuring around 5 to 8 m (16 to 26 ft), and most of the other baleen whales found were roughly the same size. Toothed whale remains found consist of beaked whales (such as Messapicetus gregarius), ancient pontoporiids (such as Brachydelphis mazeasi), oceanic dolphins and the raptorial sperm whale Acrophyseter.
All seal remains found represent the earless seals. Also found were large sea turtles such as Pacifichelys urbinai, which points to the development of seagrasses in this area. Partial bones of crocodiles were discovered. Of the seabirds, fragmentary bones of cormorants and petrels were discovered, as well as two species of boobies. The remains of many cartilaginous fish were discovered in this formation, including more than 3,500 shark teeth, which mainly belonged to the ground sharks, such as requiem sharks and hammerhead sharks. To a lesser extent, mackerel sharks were also found, such as white sharks, sand sharks and otodontids. Many shark teeth were associated with the extinct broad-toothed mako (Cosmopolitodus/Carcharodon hastalis) and megalodon, and the teeth of these two sharks were found near whale and seal remains. Eagle rays, sawfish and angelsharks were other cartilaginous fish found. Most of the bony fish finds belonged to tunas and croakers. Livyatan and megalodon were likely the apex predators of this area during this time. L. melvillei is also known from the Bahía Inglesa Formation of Chile, whose fossiliferous beds are dated between the Tortonian and Messinian, 9.03–6.45 mya. Like the Pisco Formation, the Bahía Inglesa Formation holds one of the richest known marine vertebrate assemblages. Baleen whale remains include ancient minke whales, grey whales, bowhead whales and cetotheriids. Of the toothed whales, five species of pontoporiids have been recovered, as well as beaked whales, porpoises, three other species of sperm whales such as cf. Scaldicetus, and Odobenocetops. Other marine mammals include the marine sloth Thalassocnus and pinnipeds like Acrophoca. At least 28 different species of sharks have been described, including many extant ground sharks and white sharks as well as extinct species such as the false mako (Parotodus sp.), broad-toothed mako, megalodon and the transitional great white Carcharodon hubbelli. Other marine vertebrates include penguins and other seabirds, and species of crocodiles and gharials. The Beaumaris sperm whale was found in the Beaumaris Bay Black Rock Sandstone Formation in Australia near the city of Melbourne, which dates to 5 mya in the Pliocene. Beaumaris Bay is one of the most productive marine fossil sites in Australia for marine megafauna. Shark teeth belonging to twenty different species have been discovered there, including those of the whale shark (Rhincodon typus), the Port Jackson shark (Heterodontus portusjacksoni), the broad-toothed mako and megalodon. Some examples of whales found include the ancient humpback whale Megaptera miocaena, the dolphin Steno cudmorei and the sperm whale Physetodon baileyi. Other large marine animals found include ancient elephant seals, dugongs, sea turtles, ancient penguins such as Pseudaptenodytes, the extinct albatross Diomedea thyridata and the extinct toothed seabirds of the genus Pelagornis. The South African teeth attributed to cf. Livyatan are from the Avontuur Member of the Alexander Bay Formation near the village of Hondeklip Bay, Namaqualand, which is also dated to around 5 mya in the Pliocene. The Hondeklip Bay locality has a rich record of marine fossils, whose diversity may have resulted from the initiation of the Benguela Upwelling during the late Miocene, which likely supported large populations of phytoplankton in its cold, nutrient-rich waters. Cetaceans are the most abundant fauna in the bay, although remains tend to be difficult to conclusively identify.
Included are three species of balaenopterids (two undetermined species and one identified as cf. Plesiobalaenoptera), an ancient grey whale (cf. Eschrichtius sp.), an undetermined balaenid, an unidentified dolphin, and another undetermined species of macroraptorial sperm whale. Other localities of similar age on the South African west coast have also yielded many additional species of balaenopterids and sperm whales as well as ten species of beaked whales. Large sperm whale teeth up to around 20 cm (8 in) in length are common in Hondeklip Bay, indicating a high presence of large sperm whales like Livyatan in the area. The locality also has a strong presence of sharks, indicated by a large abundance of shark teeth; however, most of these teeth have not been identified. Megalodon teeth have been found in the bay, and evidence from bite marks in whale bones indicates the additional presence of the great white shark, shortfin mako and broad-toothed mako. Other marine fauna known in Hondeklip Bay include pinnipeds such as Homiphoca capensis, bony fish and rays. ### Extinction Livyatan-like sperm whales became extinct by the early Pliocene, likely due to a cooling trend that caused baleen whales to increase in size and decrease in diversity; the sperm whales became coextinct with the smaller whales they fed on. Their extinction also coincides with the emergence of the killer whale as well as large predatory globicephaline dolphins, which possibly acted as an additional stressor to their already collapsing niche.
911,378
Giants: Citizen Kabuto
1,162,068,128
2000 video game
[ "2000 video games", "Interplay Entertainment games", "MacOS games", "Multiplayer and single-player video games", "Planet Moon Studios games", "PlayStation 2 games", "Real-time strategy video games", "Science fantasy video games", "Third-person shooters", "Video games about extraterrestrial life", "Video games developed in the United States", "Video games featuring female protagonists", "Video games scored by Jeremy Soule", "Video games scored by Mark Morgan", "Video games set on fictional planets", "Windows games" ]
Giants: Citizen Kabuto is a third-person shooter video game with real-time strategy elements. It was the first project for Planet Moon Studios, which consisted of former Shiny Entertainment employees who had worked on the game MDK in 1997. Giants went through four years of development before Interplay Entertainment published it on December 7, 2000, for Microsoft Windows; a Mac OS X port was published by MacPlay in 2001, and the game was also ported to the PlayStation 2 later that year. In the game, players take control of a single character from one of three humanoid races to either complete the story in single-player mode or to challenge other players in online multiplayer matches. They can select heavily armed Meccaryns equipped with jet packs, or amphibious spell-casting Sea Reapers; the game's subtitle, "Citizen Kabuto", refers to the last selectable race, a thundering behemoth who can execute earthshaking wrestling attacks to pulverize its enemies. The single-player mode is framed as a sequential story, putting the player through a series of missions, several of which test the player's reflexes in action game-like puzzles. Game critics praised Giants for its state-of-the-art graphics on Windows computers, a humorous story, and successfully blending different genres. Criticisms focused on crippling software bugs and the lack of an in-game save feature. The console version rectified some of the flaws found in the PC versions, at the cost of removing several features. The game initially sold poorly for Windows and PlayStation 2; however, it sold well afterwards, and gained a cult following. ## Gameplay In Giants: Citizen Kabuto, players take on the roles of three humanoid races: gun-toting Meccaryns, magic-wielding Sea Reapers, and the gigantic Kabuto. Each player is assigned direct control of a single character. The game's developers, Planet Moon Studios, created this design to encourage players to focus on the action and not to be burdened with micromanagement. Players can customize the controls, which are largely the same for each race, with slight differences for abilities. The single-player mode consists of a sequence of missions set as an overarching story. Each mission requires the completion of certain objectives to progress to the next mission. The objectives are usually the elimination of enemies or a certain structure, but several of them test the player's eye–hand coordination or require the player to rescue and protect certain units. Players control their characters from a default third person perspective; a first person view is optional. Each race has its own offensive style, and a special mode of fast movement. Killing a creature releases a power-up, which heals or awards weapons to its collector. The real-time strategy elements of Giants consist of base building and resource gathering, wherein the resources are small humanoids called Smarties. There are a limited number of Smarties in a mission, and players must rush to gather them, or kidnap them from each other to gain an advantage. Players also gather sustenance for the Smarties to make them work; Meccaryn and Reaper players hunt the cattle-like Vimps for meat and souls respectively. The options in building a base are limited; players can neither choose the locations for the structures nor manage their workforce in detail. Players in control of Kabuto need not build a base; instead, the character gains strength and produces subordinate characters by hunting for food. 
Kabuto consumes Smarties to increase his size and power; at maximum size, he can produce smaller Tyrannosaurus-like units as subordinates. To restore his health, Kabuto eats Vimps and other units (player- and computer-controlled). Multiplayer mode allows a maximum of five Meccaryn, three Sea Reaper, and one Kabuto player(s) to play in each session. Due to the lack of a game server browser, players connect through the online services MPlayer or GameSpy Arcade for the Windows version, and GameRanger for the Mac OS X version. Besides the standard "destroy all enemy bases and units" missions, the multiplayer mode includes deathmatches and "Capture the Smartie (flag)"-type games. Players are permitted either to start with a full base or to build one from foundations. ## Plot The game world of Giants is set on a fictional "Island" traveling through space. Its surface comprises grasslands, deserts, and forests, surrounded by azure seas. Players have an unobstructed view of the game world to its horizon, while distant objects are slightly blurred to convey a sense of distance. Missions provide cover for Meccaryns to hide behind, large expanses of water for Reapers, and creatures for Kabuto to eat. ### Characters Planet Moon intended for the player characters to provide a varied gameplay experience, laying down requirements to make the characters distinct with unique advantages and disadvantages. - Meccaryns use high technology and attack as a pack led by the player. Meccaryn players sport guns, explosives, and backpacks that provide special abilities: jet packs allow players to fly over obstacles and outmaneuver opponents, and the "Bush"-pack camouflages the character as a shrub. In single-player mode, players assume the role of Baz, leader of a group of Meccaryns comprising Gordon, Bennett, Tel, and Reg. Several scenarios in the game show the responsible Baz frustrated with the laxity of Gordon and Bennett, and the inquisitive Tel and Reg. - Sea Reapers are amphibious, humanoid swimmers; they regain health in contact with water, and the game's Piranhas do not attack them. To travel fast over land, players can "turbo boost" their Reapers to targeted areas. The Reapers can use swords, bows, and spells, such as summoning firestorms or tornadoes, in combat. Planet Moon Studios initially conceived the Sea Reaper single-player character, Delphi, as evil, but later gave her a conscience. - Kabuto is the title creature of the game, and the only one of his race. In his back-story, the Reapers created him as their guardian, but found him beyond control. Creative Director Tim Williams gave the "Citizen" title to Kabuto for its allusion to the character's wish for a sense of belonging to the Island. The game developer modeled Kabuto's attacks after those of giant monsters in classic monster movies, allowing him to use professional wrestling attacks and aerial techniques such as elbow drops, foot stomps, and the "butt flop", described as "like the body slam, but with less dignity". To balance his strength, Kabuto has a weak point at his waist that suffers heavy damage when struck. Players playing the giant monster can assume a perspective through his mouth to target prey. For non-playable races, the team designed Smarties to have oversized heads, bulging eyes, and idiotic personalities for comedic effect. Players labor for the Smarties while witnessing their hedonistic indulgences. The payoff, however, is a "giant gun".
Standard enemies include Reaper Guards (male Reapers with no magical ability, who serve as common soldiers), as well as fauna such as the insectoid Rippers, beast-of-burden Sonaks, and bat-like Verms. ### Story Originally featuring each race in its own distinct story, the single-player mode now depicts a single sequential story wherein the player begins as Baz and must complete a sequence of missions before assuming the role of Delphi. On completion of Delphi's story, the player takes control of a Kabuto character. Williams used cut scenes to introduce and conclude each mission. As Baz, the player searches for Reg and Tel. Timmy, a Smartie rescued in the first mission, functions as a guide for the player, introducing other Smartie characters and providing exposition of the scenario. The plot portrays the Smarties as suffering under the reign of the Sea Reapers and their Queen Sappho. Alluding to the film The Magnificent Seven, Baz gathers the separated Meccaryns and takes on a quest to solve the Smarties' predicaments. In a climactic cut scene, Sappho sacrifices Timmy to Kabuto, and the young Smartie's grandfather, Borjoyzee, becomes the player's guide. Baz leads an escape from the area and sets up a base to lead a counterattack. Thereafter Delphi becomes the player's character. Yan, the Samurai Smartie, serves as the guide for this story segment, giving instructions on Delphi's abilities. After completing the training missions under Yan, Delphi attacks Sappho's base and the Reapers, eventually confronting the queen in a boss fight. When defeated, Sappho summons Kabuto to destroy the Smarties, but Kabuto eats her instead. In the final story, Delphi has transformed herself into a Kabuto-like creature to challenge the original. The player wanders around the islands as the Delphi-Kabuto character, searching for prey to increase her size. After Delphi-Kabuto achieves her maximum size, she proceeds to a boss fight with the original Kabuto. Despite her victory, Kabuto revives in a triggered cut scene and restores her Reaper form, whereupon the player takes the role of Baz against the revived monster. After defeating Kabuto, Baz is shown in the final cut scene, flying off to Planet Majorca with Delphi, Borjoyzee, and his fellow Meccaryns. ## Development When five members of Shiny Entertainment's MDK development team broke off to set up Planet Moon Studios in 1997 with software engineer, Scott Guest, they decided to make their first project fun and original, a game with graphics and gameplay unseen at that time. Nick Bruty, Bob Stevenson, and Tim Williams initially conceived the idea of pitting players as spacemen, pirates, and giants against each other and having fun. Initially projected for release in late 1999, the game suffered delays to its development largely due to the illness of their chief programmer, Andy Astor; he was suffering from stage IV mantle cell lymphoma in late 1999. The team realized they needed more resources and by 2000, they had hired two more programmers and an artist. Producing a next-generation game required them to keep up with 1998–2000's rapid advancement of technology, which resulted in further delays. The team upsized the graphic textures as they changed the graphical software to support NVIDIA graphics cards. Within a year after development started in 1999, the initial minimum graphics specification climbed from requiring Voodoo 1 graphics cards to those of the GeForce-series. 
Planet Moon deemed game engines available during development too restrictive and inappropriate for their requirements, and built their own. Called Amityville, it could support Glide, OpenGL, and Direct3D. The team used it to create the required "lush and vibrant" outdoor environments, and terrain deformation effects. Planet Moon designed the structure of the single-player mode to be a gradual learning process for the players; the game would introduce new command sets to players as they progressed, and encourage them to use the new commands repeatedly during that mission. From the start of the project, the team intended the controls to be simple, and mapped commonly used commands to a few keys. Focus groups consisting of more than 25 testers went through this design to verify its ease of use. Planet Moon aimed for a complex artificial intelligence (AI); computer-controlled characters would evade shots and take cover. The enemy AI would plot its actions according to long-term goals. The development team consulted Mark Frohnmayer, lead programmer of the multiplayer game Tribes 2, for advice on implementing the multiplayer portion. To balance the characters in combat, Planet Moon focused on characteristics that could affect the fighting capabilities, instead of tweaking the damage output. The team faced a tight schedule, and abandoned several features that were initially part of the game. Early designs allowed players to change the landscape; they could gouge out water channels and isolate segments of the land by playing as Reapers. The Kabuto character initially could bake mud into "mud shepherd" units and use them to defend its herd of food. Interplay Entertainment released the Windows version of the game on December 7, 2000. Planet Moon later created a special version of the game optimized for the GeForce 3 graphics card to display water reflections, soft-edged shadows, and weather effects. This version was not sold as a standalone commercial product but as a part of certain GeForce 3 graphics card package deals. MacPlay announced on November 1, 2000, that it was publishing the Mac OS X version of the game. The Omni Group was responsible for the porting of the game; they rewrote the game's software to take advantage of the symmetric multi-processing capability of Mac OS X. MacPlay released the port on October 25, 2001. Multiplayer mode was initially disabled in the retail release but was restored in a later patch. Giants was also ported to the PlayStation 2 (PS2), a process overseen by Interplay's division, Digital Mayhem, who posted updates of their progress on IGN. Their greatest challenge for the PS2 port was converting and storing the special effects of the Windows version within the smaller storage space of the PS2. LightWave 3D was used by the team to convert the graphic resources. Although they had to reduce the image resolution, Digital Mayhem increased the number of polygons that composed the player character models, making them smoother and more detailed in shape. Due to the limited capabilities of the PS2 as compared to the Windows platform and the addition of a save feature, the team focused on enhancing the action gameplay, streamlining the interfaces, and tweaking the Reaper ski races, level designs, and game balance. They redesigned the controls for the PS2's controller, and after finding the analog sticks less easy to aim with than a mouse, implemented a feature to help the player aim.
Digital Mayhem originally intended to retain the multiplayer mode, but discarded it, believing the PS2 environment could not generate the same multiplayer atmosphere as the Windows platform. Interplay released the PS2 port on December 21, 2001. They also announced plans for an Xbox port but nothing resulted from this. Near the release of the United States (US) Windows version of the game, Planet Moon failed to obtain a "Teen" rating from the ESRB despite changing the original red blood to green and covering Delphi's toplessness with a bikini top. They made the changes to broaden retail opportunities because many large retailers in the US refused to sell "Mature"-rated games; Wal-Mart reiterated in October 2002 that it would never stock its shelves with software that contained vulgarity or nudity. Planet Moon Studios later released a patch that reverted the color of the blood to red, and computer gamers found they could restore Delphi's toplessness by deleting a file. Interplay offered a bonus disc containing extra multiplayer levels to those who pre-ordered the Windows version of the game. On October 5, 2003, they offered the game's soundtrack to those who purchased Giants from their online store. Composers Mark Snow (noted for his The X-Files musical scores), Mark Morgan, and Jeremy Soule (the latter two known for the music of several video games) were involved in the music for Giants. Interplay hired Morgan to compose the scores, although reports indicated they had initially hired Snow for the task. Morgan, however, could not fully concentrate on the task due to personal reasons and handed it over to Soule. The game's closing credits listed only Morgan and Soule, and Soule compiled their works onto the original soundtrack of the game. Soule originally offered to autograph the soundtrack on its release in the United States; however, he withdrew the offer when email feedback revealed that many were intending to pirate his work through the peer-to-peer file-sharing software Napster instead of buying it. ## Reception Planet Moon Studios' blending of two genres in Giants has earned the acclaim of reviewers. Game Revolution and GameSpot found the simplified real-time strategy task of resource gathering in Giants more interesting than tedious, and Troy Dunniway, Microsoft's Head of Game Design in 2002, commented that the real-time strategy elements enhanced the game's shooter aspect rather than making it a hybrid of two genres. Sci Fi Weekly was impressed that both styles of play never interfered with each other, which was complemented by the unique gameplay of each race. The Entertainment Depot, however, found the base building in several missions tedious; they said the player had to rebuild the base several times due to being forced to leave the base defenseless, which allowed the enemy to destroy the structures. Reviewers commented that the imaginative character designs and use of advanced graphics technology, such as hardware transform and lighting, and bump mapping, made the graphics of the game unrivaled in its time; ActionTrip was so impressed by the game's visuals that they thought their graphics card was rendering complex hardware environmental bump mapping that it was actually incapable of. The animation of Kabuto's antics such as elbow dropping onto tiny enemies, and tossing up and catching food with his mouth, in particular, won the praise of reviewers. Many critics, however, were disappointed that the computer versions of the game could not run smoothly at full detail on the recommended system specifications.
The AI in the game was also the subject of much commentary. Reviewers said they needed to prompt the allied non-player characters to perform actions on several occasions, although the allied AI performed well most of the time. FiringSquad disagreed, calling their computer-controlled teammates worthless and finding joy in leaving them to their deaths. The game review site thought the same of the enemy AI, a view echoed by IGN; enemies were unaware of the deaths of nearby teammates, and kept running into obstacles. ActionTrip, however, stated the enemy AI did well enough to take cover or flee when hurt, and constantly attack the player's base. Many reviewers found the best part of Giants to be its bawdy humor; the scenes were "bizarre and funny without ever letting the silliness distract or annoy the player". FiringSquad claimed the humor kept them plowing through the game regardless of the issues they encountered and were disappointed when the game steadily lost this approach in the later stages. Mac Guild and Macworld UK, however, considered the humor crude on a childlike level and its delivery forced. In spite of the humor, many reviewers found themselves bored by the monotony and slow pace of certain segments. According to ActionTrip, Giants lacked a unique quality to capture attention compared to its contemporaries such as American McGee's Alice, MechWarrior 4: Vengeance, and Sea Dogs. The frequent crashes of the retail Windows versions infuriated many reviewers; Game Revolution censured Interplay for focusing on censoring the game for marketing purposes instead of testing for and fixing the software bugs before release. Several reviewers could not connect to multiplayer games due to failed connections or bugs. The reviewers who managed to play online commented that the games were fun, although they were occasionally disconnected or experienced lag. GamesFirst lamented the lack of dedicated low-ping servers, and several reviewers declared that the computer versions of the game were flawed for not implementing an in-game save feature. Reviewers appreciated the PS2 version for including the requested save feature, but complained the ported game retained the AI and level design issues associated with the Windows version. IGN remarked that it looked less impressive than the computer versions. The lower resolution, flat textures, washed-out colors, and sparser environments made the game look average. The PS2 version also exhibited clipping issues; character models and projectiles would pass through objects on occasion. The game reviewer, however, praised the console version for its smooth animation, which rarely dropped frames. On the contrary, other reviewers stated the frame rate dropped when there were several objects on the screen, placing a heavy load on the graphics engine. The lack of replay value for the console version after completing the single-player mode was a common complaint among the reviewers. Daniel Erickson reviewed the PC version of the game for Next Generation, rating it four stars out of five, and called it "A brilliantly conceived, beautiful epic of giant proportions." Scott Steinberg reviewed the PlayStation 2 version of the game for Next Generation, rating it four stars out of five, and stated that "It's the Monty Python meets Godzilla of computer games, suspiciously well converted to PlayStation 2." Review aggregators Metacritic and GameRankings calculated scores of 85 and 86.7% from their selected reviews for Giants as of 2007.
Although most critics had awarded high scores to the game, GamesRadar and GSoundtracks reported the Windows version sold poorly. In contrast, the Mac OS X version sold out within months of its release, in spite of its smaller market base. According to the quarterly sales reports by NPDFunWorld, the PS2 version sold 11,272 copies in the US for the six months since its release. This is a poor sales figure compared to the 51,726 copies of Shadow Hearts and 753,251 copies of Max Payne sold in the same period for the PS2. Despite the poor overall sales, reviewers have nominated Giants as a game deserving a sequel, and have kept it on PC Gamer UK's Top 100 as of 2007. In 2009, Andrew Groen of GameZone ran a retrospective on Giants and suggested that the game's mix of humor and action inspired later games such as Ratchet & Clank and Jak and Daxter. He further commented that games of 2004–09 were influenced by Giants in one way or another. ## Possible sequel On September 25, 2015, the independent studio Rogue Rocket Games, co-founded by Nick Bruty, former Planet Moon Studios founder, started a Kickstarter campaign for developing a new independent crowd-funded game said to be "the spiritual successor of Giants: Citizen Kabuto", titled First Wonder. As of February 2016, the Kickstarter did not reach its goal and the spiritual successor was cancelled, despite being Greenlit on Steam.
2,070,418
North by North Quahog
1,172,256,635
null
[ "2005 American television episodes", "Cultural depictions of Mel Gibson", "Family Guy (season 4) episodes", "Television episodes about vacationing", "Television episodes set in hotels", "Television episodes written by Seth MacFarlane", "The Passion of the Christ" ]
"North by North Quahog" is the fourth season premiere of the animated television series Family Guy. It originally aired on the Fox network in the United States on May 1, 2005, though it had premiered three days earlier at a special screening at the University of Vermont, Burlington. In the episode, Peter and Lois go on a second honeymoon to rekindle their marriage, but are chased by Mel Gibson after Peter steals the sequel to The Passion of the Christ from Gibson's private hotel room. Meanwhile, Brian and Stewie take care of Chris and Meg at home. Family Guy had been canceled in 2002 due to low ratings, but was revived by Fox after reruns on Adult Swim became the cable network's most watched program, and more than three million DVDs of the show were sold. Written by series creator Seth MacFarlane and directed by Peter Shin, much of the plot and many of the technical aspects of the episode, as well as the title, are direct parodies of the 1959 Alfred Hitchcock classic movie North by Northwest; in addition, the episode makes use of Bernard Herrmann's theme music from that film. The episode contains many cultural references; in the cold opening Peter lists 29 shows that were canceled by Fox after Family Guy was canceled and says that if all of those shows were to be canceled, they might have a chance at returning. Critical responses to "North by North Quahog" were mostly positive, with the opening sequence being praised in particular. The episode was watched by nearly 12 million viewers and received a Primetime Emmy Award nomination for Outstanding Animated Program (for Programming Less Than One Hour). Shin won an Annie Award for Directing in an Animated Television Production for this episode. ## Plot In the cold open, Peter tells his family that they have "been canceled". He then lists all 29 shows that were canceled by Fox between the show's cancellation and revival and says that if all of those shows were to be canceled, they might have a chance at returning. As Peter and Lois are having sex, she yells out George Clooney's name, so Peter realizes that she is imagining him as Clooney to maintain her libido. Lois and Peter decide to take a second honeymoon to enliven their marriage, and leave their dog Brian to take care of their children Stewie, Chris, and Meg. Brian is unable to control the children, but Stewie offers to help (in exchange for Brian changing his diaper) and together they manage the home. The pair chaperone a dance at Chris's school, during which the school principal catches Chris in the boys' restroom with vodka that belongs to his classmate Jake Tucker. Although Brian and Stewie punish Chris by grounding him, they try to clear his name. Jake's father Tom refuses to believe Brian and Stewie, so they resort to planting cocaine in Jake's locker, and Jake is sentenced to community service. On the way to their vacation spot, Lois falls asleep. Unfortunately, Peter doesn't pay attention to the road, deciding instead to read a comic book while driving, and crashes the car into a tree. They are forced to spend their entire honeymoon money on car repairs and are about to return home when Peter discovers that actor/director Mel Gibson has a private suite at a luxurious hotel nearby, "which he barely uses". He and Lois then go to the hotel, where Peter poses as Gibson to gain access to his room. When Lois yells out Gibson's name during intercourse, Peter, again, decides to return home. 
As the two are about to leave, Peter accidentally stumbles upon Gibson's private screening room and discovers a sequel to The Passion of the Christ entitled The Passion of the Christ 2: Crucify This. To spare the world from "another two hours of Mel Gibson Jesus mumbo-jumbo," Peter steals the film. However, when they leave the hotel, they are noticed by two priests, Gibson's associates, who were there to collect the film. Pursued by the priests in a car chase that leads them through a shopping mall, Lois and Peter escape from the priests and drive to a cornfield where Peter buries the film. While he is doing so, the priests fly down in a crop-duster and kidnap Lois. Peter is then given a message telling him that if he does not return the film to Gibson at his estate on top of Mount Rushmore, his wife will be killed. Peter arrives at the house and gives Gibson a film can. As Peter and Lois are about to leave, Gibson discovers that the film has been replaced with dog feces, leading to a chase on the face of the mountain. While being chased, Lois slips but hangs on to George Washington's lips. Peter grabs her and, while being held at gunpoint, he tells Gibson that the film "is in President Rushmore's mouth" and points to the other side of the monument. Gibson follows Peter's direction and falls off the edge (Peter claims that Christians don't believe in gravity) as Peter pulls Lois to safety. Upon climbing back to the top of the mountain, the two have sexual intercourse there, improving their marriage. ## Production and development In 2002, Family Guy was canceled after three seasons due to low ratings. The show was first canceled after the 1999–2000 season, but following a last-minute reprieve, it returned for a third season in 2001. Fox tried to sell rights for reruns of the show, but it was hard to find networks that were interested; Cartoon Network eventually bought the rights, "basically for free", according to the president of 20th Century Fox Television Production. When the reruns were shown on Cartoon Network's Adult Swim in 2003, Family Guy became Adult Swim's most-watched show with an average 1.9 million viewers an episode. Following Family Guy's high ratings on Adult Swim, the first season was released on DVD in April 2003. Sales of the DVD set reached 2.2 million copies, becoming the best-selling television DVD of 2003 and the second highest-selling television DVD ever, behind the first season of Comedy Central's Chappelle's Show. The second season DVD release also sold more than a million copies. The show's popularity in both DVD sales and reruns rekindled Fox's interest in it. They ordered 35 new episodes in 2004, marking the first revival of a television show based on DVD sales. Fox president Gail Berman said that it was one of her most difficult decisions to cancel the show, and was therefore happy it would return. The network also began production of a film based on the series. "North by North Quahog" was the first episode to be broadcast after the show's cancellation. It was written by MacFarlane and directed by Peter Shin, both of whom also wrote and directed the pilot together. MacFarlane believed the show's three-year hiatus was beneficial because animated shows do not normally have hiatuses, and towards the end of their seasons "you see a lot more sex jokes and (bodily function) jokes and signs of a fatigued staff that their brains are just fried". 
With "North by North Quahog", the writing staff tried to keep the show "exactly as it was" before its cancellation, and did not "have the desire to make it any slicker" than it already was. Walter Murphy, who had composed music for the show before its cancellation, returned to compose the music for "North by North Quahog". Murphy and the orchestra recorded an arrangement of Bernard Herrmann's score from North by Northwest, a film referenced multiple times in the episode. Fox had ordered five episode scripts at the end of the third season; these episodes had been written but not produced. One of these scripts was adapted into "North by North Quahog". The original script featured Star Wars character Boba Fett, and later actor, writer and producer Aaron Spelling, but the release of the iconic film The Passion of the Christ inspired the writers to incorporate Mel Gibson into the episode. Multiple endings were written, including one in which Death comes for Gibson. During production, an episode of South Park was released entitled "The Passion of the Jew" that also featured Gibson as a prominent character. This gave the Family Guy writers pause, fearing accusations "that we had ripped them off." Three days before the episode debuted on television, it was screened at the University of Vermont (UVM) in Burlington, accompanied by an hour-long question-and-answer session with MacFarlane. The UVM's special screening of the episode was attended by 1,700 people. As promotion for the show, and to, as Newman described, "expand interest in the show beyond its die hard fans", Fox organized four Family Guy Live! performances, which featured cast members reading old episodes aloud; "North by North Quahog" was also previewed. In addition, the cast performed musical numbers from the Family Guy Live in Vegas comedy album. The stage shows were an extension of a performance by the cast during the 2004 Montreal Comedy Festival. The Family Guy Live! performances, which took place in Los Angeles and New York, sold out and were attended by around 1,200 people each. ## Cultural references The episode opens with Peter telling the rest of the family that Family Guy has been canceled. He lists the following 29 shows (in chronological order), that he says Fox has to make room for: Dark Angel (lasted for 2 seasons and cult following), Titus (though Titus was facing cancellation the same year Family Guy was), Undeclared, Action, That '80s Show, Wonderfalls, Fastlane, Andy Richter Controls the Universe, Skin, Girls Club, Cracking Up, The Pitts (the show that Seth MacFarlane worked on after Family Guy's cancellation), Firefly, Get Real, FreakyLinks, Wanda at Large, Costello (premiered before Family Guy hit the airwaves), The Lone Gunmen, A Minute with Stan Hooper, Normal, Ohio, Pasadena, Harsh Realm, Keen Eddie, The \$treet, The American Embassy, Cedric the Entertainer Presents, The Tick, Luis, and Greg the Bunny. Lois asks whether there is any hope, to which Peter replies that if all these shows are canceled they might have a chance, the joke being all these shows had indeed already been canceled by Fox. The New York Times reported that, during the first Family Guy Live! performance, "the longer [the list] went, the louder the laughs from the Town Hall crowd [became]". Australian-American actor Mel Gibson is prominently featured in the episode; his voice was impersonated by André Sogliuzzo. 
Gibson directed the film The Passion of the Christ (2004) and, in the episode, is seen making a sequel entitled The Passion of the Christ 2: Crucify This. The fictional sequel is a combination of The Passion of the Christ and Rush Hour (1998), and stars Chris Tucker, who starred in Rush Hour, and Jim Caviezel, who portrayed Jesus in The Passion of the Christ. Besides the title, the episode contains several references to Alfred Hitchcock's 1959 film North by Northwest, including the scene where Lois is kidnapped by Gibson's associates, the two priests flying a crop-duster who chase Peter through a cornfield, the final face-off between Peter, Lois and Gibson that takes place on Mount Rushmore, and even its theme music as originally composed by Bernard Herrmann. As Peter and Lois are driving to Cape Cod for their second honeymoon, Peter is reading a Jughead comic book and their car crashes. The fictional Park Barrington Hotel, where Peter and Lois steal Gibson's film, is located in Manhattan. The car chase scene through a shopping mall is a recreation of a scene from the 1980 comedy film The Blues Brothers. To stop Meg and Chris from fighting, Brian reads to them from one of the few books Peter owns, a novelization of the 1980 film Caddyshack, and quotes a line by Chevy Chase's character, Ty Webb. The episode contains a number of other cultural references. When Peter and Lois enter their motel room and find a hooker on the bed, Peter warns Lois to stay perfectly still, as the prostitute's vision is based on movement. This is a reference to a scene in the movie Jurassic Park (1993) in which Dr. Grant gives this warning in reference to a Tyrannosaurus rex. Pinocchio appears in a cutaway gag, in which Geppetto bends over and deliberately sets Pinocchio up to tell a lie in an attempt to emulate anal sex. This was based on a joke MacFarlane's mother had told her friends when he was a child. Lois yells out George Clooney's name when she and Peter are having sex. The 1950s sitcom The Honeymooners is also referenced when a fictional episode of the sitcom is shown in which Ralph Kramden, the show's main character, hits his wife, Alice, something he would only threaten to do on the show. Meg watches an episode of the CBS sitcom Two and a Half Men, which shows three men in a living room, one of whom is cut in half at the waist and screaming in agony, the other two standing over him and screaming in horror. Fictional army soldier Flint of G.I. Joe: A Real American Hero appears briefly after Chris is caught drinking vodka, and educates the children on drinking and informs them that "knowing is half the battle". Flint's voice was provided by Bill Ratner, the actor who had voiced the character in the G.I. Joe television series. According to Seth Green, who voices Chris, the reason the Family Guy cast members did not voice Flint themselves is that having the original actor provide the voice means "you take it with a little bit more gravitas". ## Reception "North by North Quahog" was broadcast on May 1, 2005, as part of an animated television night on Fox, was preceded by two episodes of The Simpsons (including the show's 350th episode), and was followed by the premiere of MacFarlane's new show, American Dad!. It ranked \#25 for the week, and was watched by 11.85 million viewers, higher than both The Simpsons and American Dad. The episode's ratings were Family Guy's highest since the airing of the season one episode "Brian: Portrait of a Dog".
Family Guy was the week's highest-rated show among teens and men in the 18 to 34 demographic, and more than doubled Fox's average in its timeslot. The episode's first broadcast in Canada on Global was watched by 1.27 million viewers, making it the fourth most-watched show that week, behind CSI: Crime Scene Investigation, CSI: Miami and Canadian Idol. The reactions of television critics to "North by North Quahog" were mostly positive. In a simultaneous review of the two episodes of The Simpsons that preceded this episode and the American Dad! pilot, Chase Squires of the St. Petersburg Times stated that "North by North Quahog" "score[d] the highest". Multimedia news and reviews website IGN was pleased to see Stewie and Brian get more screen time as a duo, something they thought had always been one of the show's biggest strengths. IGN ranked Peter's idea to pose as Mel Gibson and steal Passion of the Christ 2 9th on their list of "Peter Griffin's Top 10 Craziest Ideas". Matthew Gilbert of The Boston Globe commented that the episode's material "would wear thin after a while if the characters weren't as distinct and endearing as they are, most notably Stewie, the wrathful infant." Critics reacted positively to the opening sequence; in his review of the episode, Mark McGuire of The Times Union wrote: "the first minute or so of the resurrected Family Guy ranks among the funniest 60 seconds I've seen so far this season." Variety critic Brian Lowry considered the opening sequence to be the best part of the episode. M. Keith Booker, author of the book Drawn to Television: Primetime Television from The Flintstones to Family Guy, called the opening sequence an "in-your-face, I-told-you-so rejoinder to the Fox brass followed by one of the most outrageous Family Guy episodes ever". However, the episode also garnered negative responses. Melanie McFarland of the Seattle Post-Intelligencer stated that "Three years off the air has not made the 'Family Guy' team that much more creative". Kevin Wong of PopMatters thought the episode made fun of easy targets such as Gibson and The Passion of The Christ, although he felt Family Guy regained "its admirable mix of niche nostalgia and hysterical characterizations" after the first two episodes of the new season. Though Alex Strachan, critic for The Montreal Gazette, praised the opening sequence, he felt "it's all downhill from there". Bill Brioux of the Toronto Star considered the show to be similar to The Simpsons. Media watchdog group the Parents Television Council, a frequent critic of the show, branded the episode the "worst show of the week". "North by North Quahog" was nominated for a Primetime Emmy Award for Outstanding Animated Program (for Programming Less Than One Hour), the eventual recipient of the award being the South Park episode "Best Friends Forever". Peter Shin, director of the episode, won the Annie Award for Best Directing in an Animated Television Production. Fellow Family Guy director Dan Povenmire was nominated for the same award for directing "PTV".
68,704,648
Clonmacnoise Crozier
1,122,337,763
11th-century Irish crozier
[ "Collection of the National Museum of Ireland", "Insular croziers" ]
The Clonmacnoise Crozier is a late-11th-century Insular crozier that would have been used as a ceremonial staff for bishops and mitred abbots. Its origins and medieval provenance are unknown. It was likely discovered in the late 18th or early 19th century in the monastery of Clonmacnoise in County Offaly, Ireland. The crozier has two main parts: a long shaft and a curved crook. Its style reflects elements of Viking art, especially the snake-like animals in figure-of-eight patterns running on the sides of the body of the crook, and the ribbon of dog-like animals in openwork (ornamentation with openings or holes) that form the crest at its top. Apart from a shortening to the staff length and the loss of some inserted gems, it is largely intact and is one of the best-preserved surviving pieces of Insular metalwork. The crozier may have been associated with Saint Ciarán of Clonmacnoise (died c. 549 CE), and was perhaps commissioned by Tigernach Ua Braín (died 1088), Abbot of Clonmacnoise, but little is known of its origin or rediscovery. It was built in two phases: the original 11th-century structure received an addition sometime around the early 15th century. The staff is made from a wooden core wrapped in copper-alloy (bronze) tubes, fixed in place by binding strips, and three barrel-shaped knops (protruding decorative metal fittings). The hook was concurrently but separately constructed before it was placed on top of the staff. The crozier's decorative attachments include the crest and terminal (or "drop") on the crook, and the knops and ferrule on the staff; these components are made from silver, niello, glass and enamel. The hook is further embellished with round blue glass studs and white and red millefiori (glassware) insets. The antiquarian and collector Henry Charles Sirr, Lord Mayor of Dublin, held the crozier until his collection was acquired by the Royal Irish Academy on his death in 1841. It was transferred to the archaeology branch of the National Museum of Ireland on Kildare Street on the branch's foundation in 1890. The archaeologist and art historian Griffin Murray has described the crozier as "one of [the] finest examples of early medieval metalwork from Ireland". ## Function Like all Insular croziers produced between CE, the Clonmacnoise crozier is in the shape of an open shepherd's crook, a symbol of Jesus as the Good Shepherd leading his flock. Psalm 23 mentions a "rod" and a "staff", and from the 3rd century onwards Christian art often shows the shepherd holding a staff, including the 4th-century Sarcophagus of the Three Shepherds in the Vatican Museums in Rome, and the 6th-century Throne of Maximian at the Archiepiscopal Museum, Ravenna. The distinctive shape of Irish croziers evokes the function of shepherds' crooks in restraining wayward sheep, and according to the art historian Rachel Moss is similar to the crook-headed sticks used by cherubs to grasp vine branches in Bacchic iconography. Croziers became symbols of status for bishops and abbots when Pope Celestine I linked them to the episcopal office in a 431 letter to bishops in Gaul. By tradition the first Irish example (lost since 1538) was the "Bachal Isu" (Staff of Jesus) given by God to Saint Patrick. According to the archaeologist A. T. Lucas, the croziers thus acted as "the principal vehicle of [Patrick's] power, a kind of spiritual electrode through which he conveyed the holy energy by which he wrought the innumerable miracles attributed to him". 
In a 2004 survey, the Clonmacnoise Crozier was one of an estimated twenty (or fewer) largely intact Insular croziers in addition to some sixty fragments. ## Origin and dating The Irish antiquarian George Petrie (d. 1866) was the first to write about the crozier's discovery, and based on his sources placed the find-spot in the "Temple Ciarán", a now ruined oratory on the grounds of Clonmacnoise monastery, County Offaly. The oratory is said to contain the tomb of the monastery's founder Saint Ciarán of Clonmacnoise (d. c. 549), and he is recorded as having appeared centuries after his death "to smite a would-be raider with his crozier". Petrie recorded that it was found alongside a hoard including a silver chalice dated to 1647, a wine vessel and an arm-shrine or relic of Ciarán's hand, all now lost except for the chalice. The objects would have been deposited individually at the burial site during the centuries after Ciarán's death. However, there is no surviving documentary evidence to support Petrie's account of the find-spot. The claim seems based on accounts from 1684 and 1739 which mention that a relic of Ciarán's hand had been found there, while the crozier's style and production technique closely resemble two other contemporary fragmentary croziers sometimes associated with Clonmacnoise: the very similar, so-called Frazer Crozier-head (catalog number NMI 1899:28) and a crozier-knop in the British Museum. The antiquarian William Frazer wrote in 1891 that the Clonmacnoise Crozier was probably revered as holding a relic of Saint Ciarán. Clonmacnoise monastery was founded in 544 by Saint Ciarán in the territory of Uí Maine, where an ancient major east–west land route (the Slighe Mhor) met the River Shannon, itself an early medieval political division. This strategic location helped it become a thriving centre of religion, learning, craftsmanship and trade by the 9th century, and many of the high kings of Tara (Ard Rí) and of Connacht were buried there. Clonmacnoise was largely abandoned by the end of the 13th century. Today the site includes nine ruined churches, a castle, two round towers and many carved stone crosses. The crozier's late 11th-century dating is based in part on its stylistic resemblance to the Bell Shrine of St. Cuileáin and the early 12th-century Shrine of Saint Lachtin's Arm, as well as the Romanesque elements sometimes found on Insular art of the period. Lucas places it shortly after 1125. Some historians suggest that the crozier was produced in Dublin, based on the so-called "Dublin school" Hiberno-Ringerike patterns on the crook. It also has zoomorphic designs similar to those on the Dublin-manufactured Prosperous Crozier and on the shrine of the Cathach of Saint Columba, which also shows stylistic resemblances to Dublin metalwork, in particular to objects found during excavations at High Street, Dublin, in 1962 and 1963. None of these links is definitive or widely accepted. A significant metal workshop is known to have been in operation at Clonmacnoise in the 11th century, and the crozier contains design elements and motifs unique to contemporary objects found on or near the monastery's grounds. These include the confronted lions with intertwined legs on the collar below the top-most knop, which are also present on a high cross in Temple Ciarán. ## Description The crozier is 97 cm (38 in) long (about the length of a walking stick) and the crook 13.5 cm (5.3 in) wide.
It was probably once 20 cm longer and had four knops, as with most other intact examples; the losses seem to result from its having been broken apart so that it could be more easily hidden from Viking and later Norman invaders. The staff is formed from a wooden core overlaid by metal tubes, and comprises two main sections: the long shaft and the crook. The crook ends in a vertical section called the drop, with a drop-plate on the outward-facing side. The casing on the shaft is attached by binding strips connected to each other by three knops, while a protective copper alloy ferrule forms the tip of the shaft's base. The shaft and crook cores are made from separate pieces of timber but date from the same period. The crook is fitted with an inner binding strip, crest and drop-plate, each of which was independently made and, having no structural function, is purely decorative. It was built in two phases: the original 11th-century structure was added to and refurbished in the 14th or 15th century, the later additions including the bishop and dragon on the drop-plate and some of the ornamentation on the upper knop. The first phase is designed in the Insular style, and contains animal ornament, interlace and Celtic art patterns. Several of the decorations are influenced by the late 10th-century Ringerike and 11th-century Urnes styles of Viking art, both of which are characterised by band-shaped animals (often snakes, dogs and birds), acanthus-leaf foliage, crosses and spirals, and which were adapted in Ireland through direct contact and via contemporary Anglo-Saxon art from southern England. Moss describes the crozier as among the finest of the Irish Ringerike-influenced objects, along with the Shrine of Miosach and the Cathach (both 11th-century cumdaigh). Although it has suffered some losses, damage and detrimental repair-work, it is in excellent condition overall. The original drop-plate was replaced in the late medieval period. The wood at the end of the crest is decayed, likely because one of the rivets was left exposed, which in turn led to further damage to the structure. ### Crook The crook is 13.5 cm (5.3 in) high, 15.5 cm (6.1 in) wide and has a maximum circumference of 3.7 cm (1.5 in). It is composed of a single piece of wood, encased in copper alloy, with an inner binding and plates for the crest and drop. Each side of the crook is decorated with four or five cast silver zoomorphic snake-like animals in rows of tightly bound figure-of-eight knots, their ribbon-shaped, pale-coloured bodies intertwining and looping over each other. Designed in an Irish adaptation of the Ringerike style, they are outlined with thin strips of niello that appear as decorative flaps that, according to the archaeologist and art historian Griffin Murray, "spring from their heads and bodies forming knotted vegetal-like designs around them" before terminating in spiral patterns. The crest is attached to the top of the crook by rivets and nails. Around half of it has broken away, but what remains is an openwork row of five crouching dog-like animals that extends from above the join with the staff to just before the top of the crook – presumably the row once extended to the top of the drop, especially since the lead animal is the most badly damaged and is missing its head, while those nearest to it are also damaged and have missing parts. The animals are forward-looking and positioned end-to-end, and rendered in the Oseberg style of Viking art. 
They each appear, in the words of the art historian Máire de Paor, as "grasping with [their] jaws the buttocks of the preceding animal". Similarly, the Frazer Crozier-head contains dog-tooth patterns on the upper part of the crook, but these are thought to be 16th-century additions. ### Drop The original drop was presumably as highly decorated as the knops, but is lost and was replaced sometime during the 14th or 15th century. The current plate, like the original, forms a hollow box-like extension that was fixed to the end of the crook. It is made from copper alloy and consists of a cast figurative insert attached to a plain metal strip. At its top is a looming, grotesque human head in champlevé enamel. Set into the cavity below is a figure added in the 14th or 15th century, who appears to be a bishop or cleric wearing a mitre (a type of bishop's headgear). He has one hand raised in blessing while the other holds a long crozier with a spiral crook, which he uses to impale an animal, probably a dragon, at his feet. De Paor describes the cleric as a generic late-period Insular figure with "pierced eyes, small ears, a large nose, and [a] heavy mustache and beard". The positioning of the human figure is similar to that on the late 9th-century Prosperous Crozier. The only other surviving example of such a figure is in the drop of the River Laune Crozier; presumably other croziers once held similar figures but the components were damaged or removed. It seems likely that the cleric is intended to represent the commemorated saint, thus "making the body of the founder saint visible and active", and conferring the saint's authority on the crozier's current bearer. The copper plate underneath the drop contains enamel double-spiral designs rendered in blue, green and yellow. As the most visible portion of a crozier, the drop was the obvious focal point for figure art, an element that is, apart from zoomorphism, otherwise almost entirely absent in Insular metalwork. This led to theories in the 19th century that the drops acted as containers for smaller relics of saints, while the metal casing held the saint's original wooden staff; these claims have been in doubt since the mid-20th century, and there is no evidence to support the theories. An exception is the Lismore Crozier, where two small relics and a linen cloth were found inside the crook during a 1966 refurbishment. ### Shaft The shaft is formed from a wooden core plated with two copper alloy tubes and narrows after the lowest knop. The tubing was originally sealed by two binding strips on the front which were probably of leather but are now lost, although a portion of a leather membrane between the wood and metal still exists. The shaft carries three large, ornately decorated, barrel-shaped and individually cast knops, each of which fully wraps around the staff. They are positioned at equal distances along the staff, separated by lengths of bare tubing. Each contains openwork patterns and chased or repoussé (i.e. relief hammered from the back) copper-alloy plates, a feature otherwise found only on the Prosperous Crozier. The largest and uppermost knop is 7.5 cm (3.0 in) high and has a diameter of 4.8 cm (1.9 in). At its centre is a horizontal band of interlace and champlevé enamelling containing geometric and foliage patterns. It is lined with inserted triangular and rectangular plaques (some of which are missing), between which are blue glass studs. 
The plaques are of copper, decorated with interlace, and have borders lined with strips of twisted copper and silver wire. It contains a 4.2 cm (1.7 in) collar which has been trimmed to hold the base of the crook. The collar below the upper knop is made of copper alloy and contains two pairs of large cat-like animals facing or confronting each other. The animals are rendered in relief and decorated with niello and inlaid silver. They have lion-like manes, upright ears, long necks and taloned tails. Their intertwined legs begin from spirals which develop or knot into triquetra arcs before merging with the corresponding animal on the opposite side. Although usually identified as lions, the figures also bear a resemblance to the griffins on an 8th-century Insular knop from Setnes in Norway. The central knop is 8.8 cm (3.5 in) in height and less decorated than the other two, but has bands of open Ringerike-style interlace made of inlaid silver that form a series of knotted patterns. The lower knop measures 6.8 cm (2.7 in) in height and, like the upper knop, is biconical (i.e. formed of two cone-like sections) and contains copper plaques separated by glass studs. After the lower knop the shaft passes through a free ring and tapers (narrows) into the spiked ferrule (a protective metal-cast foot, here of copper alloy) that forms the crozier's basal point. Unlike the other two Insular examples with surviving ferrules (Lismore and River Laune, both of which have more elaborate and complex endings), it is not cast together with the lower knop, but is a separate piece. ## Modern provenance The location and year of the crozier's rediscovery are uncertain. Writing in 1821 in his Notes on the history of Clonmacnoise, Petrie said that it had been found "some 30 years ago [...] [in] the tomb of St. Ciaran", placing its finding around 1790. He continued that other objects discovered in the tomb included a chalice and wine vessel which, according to Petrie, "fell into ignorant hands, and were probably deemed unworthy of preservation", indicating that their precious metal was melted and sold for its intrinsic value. The "St Ciaran's tomb" referred to by Petrie is most likely Clonmacnoise's Temple Ciarán, a shrine-chapel on the site. The crozier was for a period in the collection of the Lord Mayor of Dublin and collector Henry Charles Sirr (1764–1841), although the circumstances of its acquisition are unknown. In 1970, the archaeologist Françoise Henry speculated that Sirr "might have obtained it directly or indirectly from the family of its hereditary keepers" (a local family who would have looked after and protected the object over centuries), but there is no documentary evidence for this. In 1826, a lithograph representation appeared in Picturesque Views of the Antiquities of Ireland, compiled in 1830 by the architect and draughtsman Robert O'Callaghan Newenham, where it was described as having been "dug up 100 years ago". An 1841 catalogue for an exhibition of Sirr's collection at the Rotunda Hospital in Dublin, held shortly after his death, describes it as an "ancient" and ornamental crozier that once belonged to the Abbots of Clonmacnoise. It was acquired at that exhibition by the Royal Irish Academy, and transferred to the National Museum of Ireland, Kildare Street, Dublin, on its founding in 1890. Today it is on permanent display in the Treasury Room, next to the Lismore and River Laune Croziers, where it is catalogued as R 2988. An early 20th-century replica is in the Met Cloisters in New York. 
Widely considered the most lavish and ornate of the surviving early medieval croziers, it appeared in 2011 in The Irish Times and Royal Irish Academy's list of "A History of Ireland in 100 Objects".
10,158,504
Northern rosella
1,171,004,348
Parrot native to northern Australia
[ "Birds described in 1820", "Birds of the Northern Territory", "Endemic birds of Australia", "Platycercus", "Taxa named by Heinrich Kuhl" ]
The northern rosella (Platycercus venustus), formerly known as Brown's rosella or the smutty rosella, is a species of parrot native to northern Australia, ranging from the Gulf of Carpentaria and Arnhem Land to the Kimberley. It was described by Heinrich Kuhl in 1820, and two subspecies are recognised. The species is unusually coloured for a rosella, having a dark head and neck with pale cheeks—predominantly white in the subspecies from the Northern Territory and blue in the Western Australian subspecies hillii. The northern rosella's mantle and scapulars are black with fine yellow scallops, while its back, rump and underparts are pale yellow with fine black scallops. The long tail is blue-green, and the wings are black and blue-violet. The sexes have similar plumage, though females and younger birds are generally duller with occasional spots of red. Found in woodland and open savanna country, the northern rosella is predominantly herbivorous, consuming seeds, particularly of grasses and eucalypts, as well as flowers and berries, but it may also eat insects. Nesting takes place in tree hollows. Although uncommon, the northern rosella is rated as least concern on the International Union for Conservation of Nature (IUCN)'s Red List of Threatened Species. ## Taxonomy and naming The northern rosella was first described as Psittacus venustus by German naturalist Heinrich Kuhl in 1820. The description was based on an illustration by Ferdinand Bauer from a specimen collected by Robert Brown in February 1803, during Matthew Flinders' voyage around the Australian coastline. The specific epithet is derived from the Latin venustus, meaning "charming, lovely or graceful". Dutch zoologist Coenraad Jacob Temminck published the name Psittacus brownii in honour of Brown in 1821, and Irish zoologist Nicholas Aylward Vigors transferred it (as P. brownii) to the genus Platycercus in 1827, describing it as the "most beautiful of the family". However, John Gould wrote in his 1865 work Handbook to the Birds of Australia that "Hitherto this bird has been known to ornithologists as Platycercus brownii, a specific appellation in honour of the celebrated botanist; but which, I regret to say, must give place to the prior one of venustus." Gregory Mathews described the subspecies P. venustus hillii in 1910 from a specimen collected by G.F. Hill at Napier Broome Bay in Western Australia. He noted that its cheeks had more blue and less white than those of the nominate subspecies. The Victoria River marks the border between this and the nominate subspecies. Animal taxonomist Arthur Cain treated the subspecies as synonymous with the nominate, as the only difference he knew of was the colour of the cheeks, but he conceded that further evidence could prove them distinct. As well as the differences in cheek plumage, the two differ in that subspecies hillii has brighter yellow feathers on the breast and belly with thinner black edges, and a consistently longer and wider bill. Mathews described another subspecies, P. venustus melvillensis, from Melville Island in 1912, noting that it had blacker plumage on its back. It is now thought to be indistinguishable from the nominate subspecies. "Northern rosella" has been designated the official English name by the International Ornithologists' Union (IOC). Early names used include Brown's rosella, parrot or parakeet, after its collector, with Brown's parakeet remaining a name used in aviculture in Europe and the United Kingdom, and smutty rosella, parrot or parakeet, from its dark plumage. 
Gould reported in 1848 that the latter was the local name used, and it was the most common name at the end of the 19th century. It was changed—possibly through bowdlerisation—to sooty parrot by the Royal Australasian Ornithologists Union (RAOU) in 1913. Bulawirdwird and Djaddokorddokord are two names from the Kunwinjku language of western Arnhem Land. One of six species of rosella in the genus Platycercus, the northern rosella, together with the related eastern (P. eximius) and pale-headed (P. adscitus) rosellas, makes up a "white-cheeked" lineage. A 1987 genetic study on mitochondrial DNA by Ovenden and colleagues found that the northern rosella was the earliest offshoot (basal) of a lineage that gave rise to the other white-cheeked forms. However, a study of nuclear DNA by Ashlee Shipham and colleagues, published in 2017, found that the eastern rosella was basal to the lineage that split into the pale-headed and northern rosellas, and hence that non-sister taxa were able to hybridise among the rosellas. ## Description Smaller than every other rosella species except the western rosella, the adult northern rosella weighs 90 to 110 g (3.2 to 3.9 oz) and is 29 to 32 cm (11 to 13 in) long. It has broad wings with a wingspan of around 44 cm (17 in), and a long tail with twelve feathers. The sexes are almost indistinguishable, though some adult females have duller plumage and are more likely to have some red feathers on the head and breast. The adult bird has a black forehead, crown, lores, ear coverts, upper neck and nape, a whitish throat and large cheek-patches, which are mainly white with violet lower borders in the nominate subspecies, and more blue with a narrow white upper segment in subspecies hillii. The feathers of the lower neck, mantle and scapulars are black narrowly fringed with yellow, giving a scalloped appearance, while the feathers of the back, rump, upper tail coverts and underparts are pale yellow with black borders and concealed grey bases. Those of the breast have very dark grey bases, occasionally tinged with red. The undertail covert feathers are red with black fringes. The feathers on the upper leg are pale yellow tinged with blue. The central rectrices of the long tail are dark green changing to dark blue at the tips, while the other feathers are dark blue with two bands of pale blue and white tips. The undertail is pale blue with a white tip. The wings have a wide purplish blue shoulder patch at rest, with the secondary feathers edged darker blue and the primaries black edged with blue. The beak is off-white with a grey cere, the legs and feet are grey, and the iris is dark brown. Immature birds resemble adults but are duller overall, with less well-defined cheek patches. The black plumage in particular is more greyish, and there are more likely to be scattered red feathers on the head, neck and underparts. ## Distribution and habitat The northern rosella is found across northern Australia. In Western Australia, it is found across the Kimberley south to the 18th parallel, around Derby, Windjana Gorge National Park, the northern Wunaamin Miliwundi Ranges, Springvale Station and Warmun, with vagrants reported at Halls Creek and Fitzroy Crossing. In the Northern Territory it is found from Victoria River north to the Tiwi Islands and east into western Arnhem Land, and across northern Arnhem Land through Milingimbi Island and the Wessel Islands to the Gove Peninsula. 
It is absent from central Arnhem Land, but is found further east around the western and southern coastline of the Gulf of Carpentaria, south to Borroloola and across the border into western Queensland as far as the Nicholson River. The northern rosella lives in grassy open forests and woodlands, including deciduous eucalypt savanna woodlands. Typical trees include species of Eucalyptus, such as Darwin stringybark (Eucalyptus tetrodonta), as well as species of Melaleuca, Callitris and Acacia. More specific habitats include vegetation along small creeks and gorges, sandstone outcrops and escarpments, as well as some forested offshore islands. The northern rosella is occasionally found in mangroves or public green spaces in suburban Darwin. It avoids dense forest. ## Behaviour Not a gregarious bird, the northern rosella is generally found alone or in pairs, although several birds may perch together in the same tree. It is sometimes encountered in larger groups—usually of 6 to 8 birds, but in rare instances up to 15 individuals. It is shyer than other rosellas, and flees to the upper tree canopy if disturbed. It is a quieter and less vocal species than other rosellas, and its call repertoire has been little studied. Its contact call, given in flight, is a sharp, short chit-chut chit-chut; while perched it makes a three-note whistle on an ascending scale, or metallic piping sounds. Soft chattering can be heard while feeding, and sometimes when squabbling at the beginning of the breeding season. ### Breeding Nesting occurs in tree hollows in the Southern Hemisphere winter, often in eucalypts located near water. The clutch is anywhere from two to five matte or slightly glossy white eggs, measuring roughly 26 x 21 mm (1 x 0.8 in). The female incubates the eggs alone, over a period of 19 or 20 days. Newly hatched chicks are covered with long white down and are largely helpless (nidicolous). They may remain in the nest for seven weeks after hatching and are fed by both parents. Fledglings remain with their parents for a year or more, often feeding together in small family groups. ### Feeding The northern rosella feeds on the ground in grassy glades in woodlands and on roadsides and riverbanks, as well as in the canopy of trees. It eats seeds, particularly those of eucalypts, wattles, cypress (Callitris intratropica) and grasses. It eats both the seeds and nectar of white gum (Eucalyptus alba), Darwin stringybark, long-fruited bloodwood (Corymbia polycarpa), fibrebark (Melaleuca nervosa) and fern-leaved grevillea (Grevillea pteridifolia). It also eats flowers, such as those of Darwin woollybutt (Eucalyptus miniata), and fruit, as well as larval and adult insects. ### Predation and parasites The northern rosella is a prey item of the rufous owl (Ninox rufa). The bird louse Forficuloecus wilsoni has been recovered from the northern rosella. ## Conservation status The northern rosella is listed as a species of least concern by the International Union for Conservation of Nature (IUCN), on account of its large range and stable population, with no evidence of any significant decline. Despite this, the northern rosella is an uncommon bird. Grazing by livestock and frequent burning of grassy woodland may have a negative impact on northern rosella numbers. 
Like most species of parrots, the northern rosella is protected by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) through its placement on Appendix II, which regulates the import, export and trade of listed wild-caught animals. ## Aviculture Most northern rosellas in captivity in Australia are of the nominate subspecies, but there are blue-cheeked specimens that are either subspecies hillii or intermediates. Its attractive colours make it a desirable species to keep. In captivity in the Northern Hemisphere, the northern rosella has been reported to breed in the same calendar months as it does in its Southern Hemisphere native range. As it breeds early in the season, clutches laid during the cooler months in the cooler Australian states may fail. Breeders have attempted to use sprinklers in enclosures to induce pairs to breed at other times.
351,717
Burke and Hare murders
1,173,498,563
1828 series of killings in Edinburgh, Scotland
[ "1792 births", "1820s in Edinburgh", "1827 in Scotland", "1827 murders in the United Kingdom", "1828 in Scotland", "1828 murders in the United Kingdom", "1829 deaths", "1859 deaths", "19th-century executions by Scotland", "Criminal duos", "Executed British serial killers", "History of anatomy", "Male serial killers", "Medical serial killers", "Murder in Edinburgh", "Murder in Scotland", "Old Town, Edinburgh", "People from County Tyrone", "Scottish serial killers", "Serial murders in the United Kingdom" ]
The Burke and Hare murders were a series of sixteen killings committed over a period of about ten months in 1828 in Edinburgh, Scotland. They were undertaken by William Burke and William Hare, who sold the corpses to Robert Knox for dissection at his anatomy lectures. Edinburgh was a leading European centre of anatomical study in the early 19th century, at a time when the demand for cadavers exceeded the legal supply. Scottish law required that corpses used for medical research should come only from those who had died in prison, from suicide victims, or from foundlings and orphans. The shortage of corpses led to an increase in body snatching by what were known as "resurrection men". Measures to ensure graves were left undisturbed—such as the use of mortsafes—exacerbated the shortage. When a lodger in Hare's house died, he turned to his friend Burke for advice and they decided to sell the body to Knox. They received what was, for them, the generous sum of £7 10s. A little over two months later, when Hare was concerned that a lodger with a fever would deter others from staying in the house, he and Burke murdered the lodger and sold the body to Knox. The men continued their murder spree, probably with the knowledge of their wives. Burke and Hare's actions were uncovered after other lodgers discovered their last victim, Margaret Docherty, and contacted the police. A forensic examination of Docherty's body indicated she had probably been suffocated, but this could not be proven. Although the police suspected Burke and Hare of other murders, there was no evidence on which they could take action. An offer was put to Hare granting immunity from prosecution if he turned king's evidence. He provided the details of Docherty's murder and confessed to all sixteen deaths; formal charges were made against Burke and his wife for three murders. At the subsequent trial Burke was found guilty of one murder and sentenced to death. The case against his wife was found not proven—a Scottish legal verdict to acquit an individual but not declare them innocent. Burke was hanged shortly afterwards; his corpse was dissected and his skeleton displayed at the Anatomical Museum of Edinburgh Medical School where, as at 2022, it remains. The murders raised public awareness of the need for bodies for medical research and contributed to the passing of the Anatomy Act 1832. The events have made appearances in literature and been portrayed on screen, either in heavily fictionalised accounts or as the inspiration for fictional works. ## Background ### Anatomy in 19th-century Edinburgh In the early 19th century Edinburgh had several pioneering anatomy teachers, including Alexander Monro, his son (also called Alexander), John Bell, John Goodsir and Robert Knox, all of whom developed the subject into a modern science. Because of their efforts, Edinburgh became one of the leading European centres of anatomical study, alongside Leiden in the Netherlands and the Italian city of Padua. The teaching of anatomy—crucial in the study of surgery—required a sufficient supply of cadavers, the demand for which increased as the science developed. Scottish law determined that suitable corpses for dissection were those of people who had died in prison, suicide victims, and foundlings and orphans. 
With the rise in prestige and popularity of medical training in Edinburgh, the legal supply of corpses failed to keep pace with the demand; students, lecturers and grave robbers—also known as resurrection men—began an illicit trade in exhumed cadavers. The situation was confused by the legal position. Disturbing a grave was a criminal offence, as was the taking of property from the deceased. Stealing the body itself was not an offence, as it did not legally belong to anyone. The price per corpse changed depending on the season. It was £8 during the summer, when the warmer temperatures brought on quicker decomposition, and £10 in the winter months, when the demand by anatomists was greater, because the lower temperatures meant corpses could be stored for longer, allowing more dissections to be undertaken. By the 1820s the residents of Edinburgh had taken to the streets to protest at the increase in grave robbing. To avoid corpses being disinterred, bereaved families used several techniques in order to deter the thieves: guards were hired to watch the graves, and watchtowers were built in several cemeteries; some families hired a large stone slab that could be placed over a grave for a short period—until the body had begun to decay past the point of being useful for an anatomist. Other families used a mortsafe, an iron cage that surrounded the coffin. The high levels of vigilance from the public, and the techniques used to deter the grave robbers, led to what the historian Ruth Richardson describes as "a growing atmosphere of crisis" among anatomists because of the shortage of corpses. The historian Tim Marshall considers that the situation meant "Burke and Hare took graverobbing to its logical conclusion: instead of digging up the dead, they accepted lucrative incentives to destroy the living". ### Robert Knox Knox was an anatomist who had qualified as a doctor in 1814. Having contracted smallpox as a child, he was blind in one eye and badly disfigured. He served as an army physician at the Battle of Waterloo in 1815, followed by a posting in England and then, during the Cape Frontier War (1819), in southern Africa. He eventually settled in his home town of Edinburgh in 1820. In 1825 he became a fellow of the Royal College of Surgeons of Edinburgh, where he lectured on anatomy. He undertook dissections twice a day, and his advertising promised "a full demonstration on fresh anatomical subjects" as part of every course of lectures he delivered; he stated that his lessons drew over 400 pupils. Clare Taylor, his biographer in the Oxford Dictionary of National Biography, observes that he "built up a formidable reputation as a teacher and lecturer and almost single-handedly raised the profile of the study of anatomy in Britain". Another biographer, Isobel Rae, considers that without Knox, the study of anatomy in Britain "might not have progressed as it did". ### William Burke and William Hare William Burke was born in 1792 in Urney, County Tyrone, Ireland, one of two sons born to middle-class parents. Burke, along with his brother, Constantine, had a comfortable upbringing, and both joined the British Army as teenagers. Burke served in the County Donegal militia until he met and married a woman from County Mayo, where they later settled. The marriage was short-lived; in 1818, after an argument with his father-in-law over land ownership, Burke deserted his wife and family. He moved to Scotland and became a labourer, working on the Union Canal. 
He settled in the small village of Maddiston near Falkirk, and set up home with Helen McDougal, whom he affectionately nicknamed Nelly; she became his second wife. After a few years, and when the works on the canal were finished, the couple moved to Tanner's Close, Edinburgh, in November 1827. They became hawkers, selling second-hand clothes to impoverished locals. Burke then became a cobbler, a trade in which he experienced some success, earning upwards of £1 a week. He became known locally as an industrious and good-humoured man who often entertained his clients by singing and dancing for them on their doorsteps while plying his trade. Although raised as a Roman Catholic, Burke became a regular worshipper at Presbyterian religious meetings held in the Grassmarket; he was seldom seen without a Bible. William Hare was probably born in County Armagh, County Londonderry or Newry. His age and year of birth are unknown; when arrested in 1828 he gave his age as 21, but one source states that he was born between 1792 and 1804. Information on his earlier life is scant, although it is possible that he worked in Ireland as an agricultural labourer before travelling to Britain. He worked on the Union Canal for seven years before moving to Edinburgh in the mid-1820s, where he found work as a coal man's assistant. He lodged at Tanner's Close, in the house of a man named Logue and his wife, Margaret Laird, in the nearby West Port area of the town. When Logue died in 1826, Hare may have married Margaret. Based on contemporary accounts, Brian Bailey in his history of the murders describes Hare as "illiterate and uncouth—a lean, quarrelsome, violent and amoral character with the scars from old wounds about his head and brow". Bailey describes Margaret, who was also an Irish immigrant, as a "hard-featured and debauched virago". In 1827 Burke and McDougal went to Penicuik in Midlothian to work on the harvest, where they met Hare. The men became friends; when Burke and McDougal returned to Edinburgh, they moved into Hare's Tanner's Close lodging house, where the two couples soon acquired a reputation for hard drinking and boisterous behaviour. ## Events of November 1827 to November 1828 On 29 November 1827 Donald, a lodger in Hare's house, died of dropsy while owing £4 in back rent, shortly before he was due to receive his quarterly army pension. After Hare bemoaned his financial loss to Burke, the pair decided to sell Donald's body to one of the local anatomists. A carpenter provided a coffin for a burial which was to be paid for by the local parish. After the carpenter left, the pair opened the coffin, removed the body—which they hid under the bed—filled the coffin with bark from a local tannery and resealed it. After dark, on the day the coffin was removed for burial, they took the corpse to Edinburgh University, where they looked for a purchaser. According to Burke's later testimony, they asked for directions to Professor Monro, but a student sent them to Knox's premises in Surgeons' Square. Although the men dealt with juniors when discussing the possibility of selling the body, it was Knox who arrived to fix the price at £7 10s. Hare received £4 5s while Burke took the balance of £3 5s; Hare's larger share was to cover his loss from Donald's unpaid rent. According to Burke's official confession, as he and Hare left the university, one of Knox's assistants told them that the anatomists "would be glad to see them again when they had another to dispose of". There is no agreement as to the order in which the murders took place. 
Burke made two confessions but gave different sequences for the murders in each statement. The first was an official one, given on 3 January 1829 to the sheriff-substitute, the procurator fiscal and the assistant sheriff-clerk. The second was in the form of an interview with the Edinburgh Courant that was published on 7 February 1829. These in turn differed from the order given in Hare's statement, although the pair agreed on many points of the murders. Contemporary reports also differ from the confessions of the two men. More recent sources, including the accounts written by Brian Bailey, Lisa Rosner and Owen Dudley Edwards, either follow one of the historic versions or present their own order of events. Most of the sources agree that the first murder in January or February 1828 was either that of a miller named Joseph, who was lodging in Hare's house, or that of a salt seller named Abigail Simpson. The historian Lisa Rosner considers Joseph the more likely; a pillow was used to smother the victim, while later ones were suffocated by a hand over the nose and mouth. The novelist Sir Walter Scott, who took a keen interest in the case, also thought the miller was the more likely first victim, and highlighted that "there was an additional motive to reconcile them to the deed", as Joseph had developed a fever and had become delirious. Hare and his wife were concerned that having a potentially infectious lodger would be bad for business. Hare again turned to Burke; after they had plied their victim with whisky, Hare suffocated Joseph while Burke lay across his upper torso to restrict movement. They again took the corpse to Knox, who this time paid £10. Rosner considers the method of murder to be ingenious: Burke's weight on the victim stifled movement—and thus the ability to make noise—while it also prevented the chest from expanding should any air get past Hare's suffocating grip. In Rosner's opinion, the method would have been "practically undetectable until the era of modern forensics". The order of the next two victims after Joseph is also unclear; Rosner puts the sequence as Abigail Simpson followed by an English male lodger from Cheshire, while Bailey and Dudley Edwards each have the order as the English male lodger followed by Simpson. The unnamed Englishman was a travelling seller of matches and tinder who fell ill with jaundice at Hare's lodging house. As with Joseph, Hare was concerned about the effect this illness might have on his business, and he and Burke employed the same modus operandi they had with the miller: Hare suffocating their victim while Burke lay over the body to stop movement and noise. Simpson was a pensioner who lived in the nearby village of Gilmerton and visited Edinburgh to supplement her pension by selling salt. On 12 February 1828—the only exact date Burke quoted in his confession—she was invited into the Hares' house and plied with enough alcohol to ensure she was too drunk to return home. After murdering her, Burke and Hare placed the body in a tea-chest and sold it to Knox. They received £10 for each body, and Burke's confession records of Simpson's body that "Dr Knox approved of its being so fresh ... but [he] did not ask any questions". In either February or March that year an old woman was invited into the house by Margaret Hare. Margaret gave her enough whisky to make her fall asleep, and when Hare returned that afternoon, he covered the sleeping woman's mouth and nose with the bed tick (a stiff mattress cover) and left her. 
She was dead by nightfall and Burke joined his companion to transport the corpse to Knox, who paid another £10. In early April Burke met two women, Mary Paterson (also known as Mary Mitchell) and Janet Brown, in the Canongate area of Edinburgh. He bought the two women alcohol before inviting them back to his lodging for breakfast. The three left the tavern with two bottles of whisky and went instead to his brother Constantine's house. After his brother left for work, Burke and the women finished the whisky and Paterson fell asleep at the table; Burke and Brown continued talking but were interrupted by McDougal, who accused them of having an affair. A row broke out between Burke and McDougal, during which he threw a glass at her, cutting her over the eye. Brown, saying that she had not known Burke was married, left; McDougal also left, and went to fetch Hare and his wife. They arrived shortly afterwards and the two men locked their wives out of the room, then murdered Paterson in her sleep. That afternoon the pair took the body to Knox in a tea-chest, while McDougal kept Paterson's skirt and petticoats; they were paid £8 for the corpse, which was still warm when they delivered it. Fergusson—one of Knox's assistants—asked where they had obtained the body, as he thought he recognised her. Burke explained that the girl had drunk herself to death, and that they had purchased the body "from an old woman in the Canongate". Knox was delighted with the corpse, and stored it in whisky for three months before dissecting it. When Brown later searched for her friend, she was told that Paterson had left for Glasgow with a travelling salesman. At some point in early-to-mid 1828 a Mrs Haldane, whom Burke described as "a stout old woman", lodged at Hare's premises. After she became drunk, she fell asleep in the stable; she was smothered and sold to Knox. Several months later Haldane's daughter (called either Margaret or Peggy) also lodged at Hare's house. She and Burke drank together heavily and he killed her, without Hare's assistance; her body was put into a tea-chest and taken to Knox, where Burke was paid £8. The next murder occurred in May 1828, when an old woman joined the house as a lodger. One evening, while she was intoxicated, Burke smothered her; Hare was not present in the house at the time. Her body was sold to Knox for £10. Then came the murder of Effy (sometimes spelt Effie), a "cinder gatherer" who scavenged through bins and rubbish tips to sell her findings. Effy was known to Burke and had previously sold him scraps of leather for his cobbling business. Burke tempted her into the stable with whisky, and when she was drunk enough he and Hare killed her; Knox gave £10 for the body. Burke found another victim too drunk to stand; she was being helped back to her lodgings by a local constable when Burke offered to take her there himself. The policeman obliged, and Burke took her back to Hare's house, where she was killed. Her corpse raised a further £10 from Knox. Burke and Hare murdered two lodgers in June, "an old woman and a dumb boy, her grandson", as Burke later recalled in his confession. While the boy sat by the fire in the kitchen, his grandmother was murdered in the bedroom by the usual method. Burke and Hare then picked up the boy and carried him to the same room, where he was also killed. Burke later said that this was the murder that disturbed him the most, as he was haunted by his recollection of the boy's expression. 
The tea-chest that was usually used by the duo to transport the bodies was found to be too small, so the bodies were forced into a herring barrel and taken to Surgeons' Square, where they fetched £8 each. According to Burke's confession, the barrel was loaded onto a cart which Hare's horse refused to pull further than the Grassmarket. Hare called a porter with a handcart to help him transport the container. Once back in Tanner's Close, Hare took his anger out on the horse by shooting it dead in the yard. On 24 June Burke and McDougal departed for Falkirk to visit the latter's father. Burke knew that Hare was short of cash and had even pawned some of his clothes. When the couple returned, they found that Hare was wearing new clothes and had surplus money. When asked, Hare denied that he had sold another body. Burke checked with Knox, who confirmed Hare had sold a woman's body for £8. This led to an argument between the two men, and they came to blows. Burke and his wife moved into the home of his cousin, John Broggan (or Brogan), two streets away from Tanner's Close. The breach between the two men did not last long. In late September or early October Hare was visiting Burke when Mrs Ostler (also given as Hostler), a washerwoman, came to the property to do the laundry. The men got her drunk and killed her; the corpse was with Knox that afternoon, for which the men received £8. A week or two later one of McDougal's relatives, Ann Dougal (also given as McDougal), was visiting from Falkirk; after a few days the men killed her by their usual technique and received £10 for the body. Burke later claimed that about this time Hare's wife suggested killing Helen McDougal on the grounds that "they could not trust her, as she was a Scotch woman", but he refused. Burke and Hare's next victim was a familiar figure in the streets of Edinburgh: James Wilson, an 18-year-old man with a limp caused by deformed feet. He was mentally disabled and, according to Alanna Knight in her history of the murders, was inoffensive; he was known locally as Daft Jamie. Wilson lived on the streets and supported himself by begging. In October Hare lured Wilson to his lodgings with the promise of whisky, and sent his wife to fetch Burke. The two murderers led Wilson into a bedroom, which Margaret locked before pushing the key back under the door. As Wilson did not like excess whisky—he preferred snuff—he was not as drunk as most of the duo's victims; he was also strong and fought back against the two attackers, but was overpowered and killed in the normal way. His body was stripped and his few possessions stolen: Burke kept a snuff box and Hare a snuff spoon. When the body was examined the following day by Knox and his students, several of them recognised it to be Wilson, but Knox denied it could be anyone the students knew. When word started circulating that Wilson was missing, Knox dissected the body ahead of the others that were being held in storage; the head and feet were removed before the main dissection. The final victim, killed on 31 October 1828, was Margaret Docherty, a middle-aged Irish woman. Burke lured her into the Broggan lodging house by claiming that his mother was also a Docherty from the same area of Ireland, and the pair began drinking. At one point Burke left Docherty in the company of McDougal while he went out, ostensibly to buy more whisky, but actually to get Hare. 
Two other lodgers—Ann and James Gray—were an inconvenience to the men, who paid the couple to stay at Hare's lodging house for the night, claiming that Docherty was a relative. The drinking continued into the evening, by which time Margaret Hare had joined in. At around 9:00 pm the Grays returned briefly to collect some clothing for their children, and saw Burke, Hare, their wives and Docherty all drunk, singing and dancing. Although Burke and Hare came to blows at some point in the evening, they subsequently murdered Docherty, and put her body in a pile of straw at the end of the bed. The next day the Grays returned, and Ann became suspicious when Burke would not let her approach a bed where she had left her stockings. When they were left alone in the house in the early evening, the Grays searched the straw and found Docherty's body, which had blood and saliva on the face. On their way to alert the police, they ran into McDougal, who tried to bribe them with an offer of £10 a week; they refused. While the Grays reported the murder to the police, Burke and Hare removed the body and took it to Knox's surgery. The police search located Docherty's bloodstained clothing hidden under the bed. When questioned, Burke and his wife claimed that Docherty had left the house, but gave different times for her departure. This raised enough suspicion for the police to take them in for further questioning. Early the following morning the police went to Knox's dissecting-rooms where they found Docherty's body; James Gray identified her as the woman he had seen with Burke and Hare. Hare and his wife were arrested that day, as was Broggan; all denied any knowledge of the events. In total sixteen people were murdered by Burke and Hare. Burke stated later that he and Hare were "generally in a state of intoxication" when the murders were carried out, and that he "could not sleep at night without a bottle of whisky by his bedside, and a twopenny candle to burn all night beside him; when he awoke he would take a drink from the bottle—sometimes half a bottle at a draught—and that would make him sleep." He also took opium to ease his conscience. ## Developments: investigation and the path to court On 3 November 1828 a warrant was issued for the detention of Burke, Hare and their wives; Broggan was released without further action. The four suspects were kept apart and statements taken; these conflicted with the initial answers given on the day of their arrests. After Alexander Black, a police surgeon, examined Docherty's body, two forensic specialists were appointed, Robert Christison and William Newbigging; they reported that it was probable the victim had been murdered by suffocation, but this could not be medically proven. On the basis of the report from the two doctors, the Burkes and Hares were charged with murder. As part of his investigation Christison interviewed Knox, who asserted that Burke and Hare had watched poor lodging houses in Edinburgh and purchased bodies before anyone claimed them for burial. Christison thought Knox was "deficient in principle and heart", but did not think he had broken the law. Although the police were sure murder had taken place, and that at least one of the four was guilty, they were uncertain whether they could secure a conviction. The police also suspected that other murders had been committed, but the lack of bodies hampered this line of enquiry. 
As news of the possibility of other murders came to the public's attention, newspapers began to publish lurid and inaccurate stories of the crimes; speculative reports led members of the public to assume that any missing person had been a victim. Janet Brown went to the police and identified her friend Mary Paterson's clothing, while a local baker informed them that Jamie Wilson's trousers were being worn by Burke's nephew. On 19 November a warrant was issued against the four suspects for the murder of Jamie Wilson. Sir William Rae, the Lord Advocate, followed a common technique: he focused on one individual to extract a confession on which the others could be convicted. Hare was chosen and, on 1 December, he was offered immunity from prosecution if he turned king's evidence and provided full details of the murder of Docherty and of any others; because he could not be brought to testify against his wife, she was also exempt from prosecution. Hare made a full confession of all the deaths and Rae decided sufficient evidence existed to secure a prosecution. On 4 December formal charges were laid against Burke and McDougal for the murders of Mary Paterson, James Wilson and Mrs Docherty. Knox faced no charges for the murders because Burke's statement to the police exonerated the surgeon. Public awareness of the news grew as newspapers and broadsides began releasing further details. Opinion was against Knox and, according to Bailey, many in Edinburgh thought he was "a sinister ringmaster who got Burke and Hare dancing to his tune". Several broadsides were published with editorials stating that he should have been in the dock alongside the murderers, which influenced public opinion. A new word, burking, was coined from the murders, meaning to smother a victim or to commit an anatomy murder, and a rhyme began circulating around the streets of Edinburgh: > Up the close and doon the stair, But and ben' wi' Burke and Hare. Burke's the butcher, Hare's the thief, Knox the boy that buys the beef. ## Trial The trial began at 10:00 am on Christmas Eve 1828 before the High Court of Justiciary in Edinburgh's Parliament House. The case was heard by the Lord Justice-Clerk, David Boyle, supported by the Lords Meadowbank, Pitmilly and Mackenzie. The court was full shortly after the doors were opened at 9:00 am, and a large crowd gathered outside Parliament House; 300 constables were on duty to prevent disturbances, while infantry and cavalry were on standby as a further precaution. The case ran through the day and night to the following morning; Rosner notes that even a formal postponement for dinner could have raised questions about the validity of the trial. When the charges were read out, the two defence counsels objected to Burke and McDougal being tried together. James Moncreiff, Burke's defence lawyer, protested that his client was charged "with three unconnected murders, committed each at a different time, and at a different place" in a trial with another defendant "who is not even alleged to have had any concern with two of the offences of which he is accused". Several hours were spent on legal arguments about the objection. The judge decided that to ensure a fair trial, the indictment should be split into separate charges for the three murders. He gave Rae the choice as to which should be heard first; Rae opted for the murder of Docherty, given that they had the corpse and the strongest evidence. In the early afternoon Burke and McDougal pleaded not guilty to the murder of Docherty. 
The first witnesses were then called from a list of 55 that included Hare and Knox; not all the witnesses on the list were called, and Knox, along with three of his assistants, avoided being questioned in court. One of Knox's assistants, David Paterson—who had been the main person Burke and Hare had dealt with at Knox's surgery—was called and confirmed that the pair had supplied the doctor with several corpses. In the early evening Hare took the stand to give evidence. Under cross-examination about the murder of Docherty, Hare claimed that Burke had been the sole murderer and that McDougal had twice been involved in bringing Docherty back to the house after she had run out; Hare stated that he had assisted Burke in the delivery of the body to Knox. Although he was asked about other murders, he was not obliged to answer the questions, as the charge related only to the death of Docherty. After Hare's questioning, his wife entered the witness box, carrying their baby daughter who had developed whooping cough. Margaret used the child's coughing fits to give herself time to think over some of the questions, and told the court that she had a very poor memory and could not remember many of the events. The final prosecution witnesses were the two doctors, Black and Christison; both said they suspected foul play, but that there was no forensic evidence to support the suggestion of murder. No witnesses were called for the defence, although the pre-trial declarations by Burke and McDougal were read out in their place. The prosecution summed up their case, after which, at 3:00 am, Burke's defence lawyer began his final statement, which lasted for two hours; McDougal's defence lawyer began his address to the jury on his client's behalf at 5:00 am. Boyle then gave his summing up, directing the jury to accept the arguments of the prosecution. The jury retired to consider its verdict at 8:30 am on Christmas Day and returned fifty minutes later. It delivered a guilty verdict against Burke for the murder of Docherty; the same charge against McDougal was found not proven. As he passed the death sentence on Burke, Boyle told him: > Your body should be publicly dissected and anatomized. And I trust, that if it is ever customary to preserve skeletons, yours will be preserved, in order that posterity may keep in remembrance your atrocious crimes. ## Aftermath, including execution and dissection McDougal was released at the end of the trial and returned home. The following day she went to buy whisky and was confronted by a mob who were angry at the not proven verdict. She was taken to a police building in nearby Fountainbridge for her own protection, but after the mob laid siege to it she escaped through a back window to the main police station off Edinburgh's High Street. She tried to see Burke, but permission was refused; she left Edinburgh the next day, and there are no clear accounts of her later life. On 3 January 1829, on the advice of both Catholic priests and Presbyterian clergy, Burke made his official confession. This was more detailed than the declarations he had provided prior to his trial; he placed much of the blame for the murders on Hare. On 16 January 1829 a petition on behalf of James Wilson's mother and sister, protesting against Hare's immunity and intended release from prison, was given lengthy consideration by the High Court of Justiciary and rejected by a vote of 4 to 2. Margaret was released on 19 January and travelled to Glasgow to find a passage back to Ireland. 
While waiting for a ship she was recognised and attacked by a mob. She was sheltered in a police station before being given a police escort onto a Belfast-bound vessel; no clear accounts exist of what became of her after she landed in Ireland. Burke was hanged on the morning of 28 January 1829 in front of a crowd possibly as large as 25,000; views from windows in the tenements overlooking the scaffold were hired at prices ranging from 5s to 20s. On 1 February Burke's corpse was publicly dissected by Professor Monro in the anatomy theatre of the university's Old College. Police had to be called when large numbers of students gathered demanding access to the lecture, for which a limited number of tickets had been issued. A minor riot ensued; calm was restored only after one of the university professors negotiated an agreement with the crowd that they would be allowed to pass through the theatre in batches of fifty after the dissection. During the procedure, which lasted for two hours, Monro dipped his quill pen into Burke's blood and wrote, "This is written with the blood of Wm Burke, who was hanged at Edinburgh. This blood was taken from his head". Burke's skeleton was given to the Anatomical Museum of the Edinburgh Medical School where, as at 2022, it remains. His death mask and a book said to be bound with his tanned skin can be seen at Surgeons' Hall Museum. Hare was released on 5 February 1829—his extended stay in custody had been undertaken for his own protection—and was helped to leave Edinburgh in disguise on the mailcoach to Dumfries. At one of its stops he was recognised by a fellow passenger, Erskine Douglas Sandford, a junior counsel who had represented Wilson's family; Sandford informed his fellow passengers of Hare's identity. On arrival in Dumfries the news of Hare's presence spread and a large crowd gathered at the hostelry where he was due to stay the night. Police arrived and arranged for a decoy coach to draw off the crowd while Hare escaped through a back window and into a carriage which took him to the town's prison for safekeeping. A crowd surrounded the building; stones were thrown at the door and windows and street lamps were smashed before 100 special constables arrived to restore order. In the small hours of the morning, escorted by a sheriff officer and militia guard, Hare was taken out of town, set down on the Annan Road and instructed to make his way to the English border. There were no subsequent reliable sightings of him and his eventual fate is unknown. Knox refused to make any public statements about his dealings with Burke and Hare. The common view in Edinburgh was that he was culpable in the events; he was lampooned in caricature and, in February 1829, a crowd gathered outside his house and burned an effigy of him. A committee of inquiry cleared him of complicity and reported that they had "seen no evidence that Dr Knox or his assistants knew that murder was committed in procuring any of the subjects brought to his rooms". He resigned from his position as curator of the College of Surgeons' museum, and was gradually excluded from university life by his peers. He left Edinburgh in 1842 and lectured in Britain and mainland Europe. While working in London he fell foul of the regulations of the Royal College of Surgeons and was debarred from lecturing; he was removed from the roll of fellows of the Royal Society of Edinburgh in 1848. 
From 1856 he worked as a pathological anatomist at the Brompton Cancer Hospital and had a medical practice in Hackney until his death in 1862. ## Legacy ### Legislation The question of the supply of cadavers for scientific research had been promoted by the English philosopher Jeremy Bentham before the crimes of Burke and Hare took place. A parliamentary select committee had drafted a "Bill for preventing the unlawful disinterment of human bodies, and for regulating Schools of Anatomy" in mid-1828—six months before the murders were detected. The bill was rejected by the House of Lords in 1829. The murders committed by Burke and Hare raised public awareness of the need for bodies for medical purposes, and of the trade that doctors had conducted with grave robbers and murderers. The East London murder of a 14-year-old boy and the subsequent attempt to sell the corpse to the medical school at King's College London led to an investigation of the London Burkers, who had recently turned from grave robbing to murder to obtain corpses; two men were hanged in December 1831 for the crime. A bill was quickly introduced into Parliament, and gained royal assent nine months later to become the Anatomy Act 1832. This Act authorised the dissection of bodies from workhouses that remained unclaimed after 48 hours, and ended the practice of anatomising as part of the death sentence for murder. ### In media portrayals and popular culture The events of the West Port murders have made appearances in fiction. They are referred to in Robert Louis Stevenson's 1884 short story "The Body Snatcher" and Marcel Schwob told their story in the last chapter of Imaginary Lives (1896), while the Edinburgh-based author Elizabeth Byrd used the events in her novels Rest Without Peace (1974) and The Search for Maggie Hare (1976). The murders have also been portrayed on stage and screen, usually in heavily fictionalised form. David Paterson, Knox's assistant, contacted Walter Scott to ask the novelist if he would be interested in writing an account of the murders, but Scott declined, despite his long-standing interest in the events. Scott later wrote: > Our Irish importation have made a great discovery of Oeconomicks, namely, that a wretch who is not worth a farthing while alive, becomes a valuable article when knockd on the head & carried to an anatomist; and acting on this principle, have cleard the streets of some of those miserable offcasts of society, whom nobody missd because nobody wishd to see them again. ## See also - Organ trade - London Burkers
12,153,654
Elizabeth II
1,173,695,121
Queen of the United Kingdom from 1952 to 2022
[ "1926 births", "2022 deaths", "20th-century British monarchs", "20th-century British women", "21st-century British monarchs", "21st-century British women", "Auxiliary Territorial Service officers", "British Anglicans", "British Presbyterians", "British philanthropists", "British princesses", "British racehorse owners and breeders", "British women in World War II", "Burials at St George's Chapel, Windsor Castle", "Daughters of emperors", "Daughters of kings", "Deaths in Scotland", "Dethroned monarchs", "Duchesses of Edinburgh", "Elizabeth II", "Heads of state of Antigua and Barbuda", "Heads of state of Australia", "Heads of state of Barbados", "Heads of state of Belize", "Heads of state of Canada", "Heads of state of Fiji", "Heads of state of Ghana", "Heads of state of Grenada", "Heads of state of Guyana", "Heads of state of Jamaica", "Heads of state of Kenya", "Heads of state of Malawi", "Heads of state of Malta", "Heads of state of Mauritius", "Heads of state of New Zealand", "Heads of state of Nigeria", "Heads of state of Pakistan", "Heads of state of Papua New Guinea", "Heads of state of Saint Kitts and Nevis", "Heads of state of Saint Lucia", "Heads of state of Saint Vincent and the Grenadines", "Heads of state of Sierra Leone", "Heads of state of Tanganyika", "Heads of state of Trinidad and Tobago", "Heads of state of Tuvalu", "Heads of state of Uganda", "Heads of state of the Bahamas", "Heads of state of the Gambia", "Heads of state of the Solomon Islands", "Heads of the Commonwealth", "Heirs to the British throne", "Honorary air commodores", "House of Windsor", "Jewellery collectors", "Lord High Admirals of the United Kingdom", "Monarchs of Ceylon", "Monarchs of South Africa", "Monarchs of the Isle of Man", "Monarchs of the United Kingdom", "People from Mayfair", "People named in the Paradise Papers", "Queens regnant in the British Isles", "Time Person of the Year", "Women in the Canadian armed services" ]
Elizabeth II (Elizabeth Alexandra Mary; 21 April 1926 – 8 September 2022) was Queen of the United Kingdom and other Commonwealth realms from 6 February 1952 until her death in 2022. She was queen regnant of 32 sovereign states over the course of her lifetime and was monarch of 15 realms at the time of her death. Her reign of over 70 years is the longest of any British monarch and the longest verified reign of any female head of state in history. Elizabeth was born in Mayfair, London, during the reign of her paternal grandfather, King George V. She was the first child of the Duke and Duchess of York (later King George VI and Queen Elizabeth The Queen Mother). Her father acceded to the throne in 1936 upon the abdication of his brother Edward VIII, making the ten-year-old Princess Elizabeth the heir presumptive. She was educated privately at home and began to undertake public duties during the Second World War, serving in the Auxiliary Territorial Service. In November 1947, she married Philip Mountbatten, a former prince of Greece and Denmark, and their marriage lasted 73 years until his death in 2021. They had four children: Charles, Anne, Andrew, and Edward. When her father died in February 1952, Elizabeth—then 25 years old—became queen of seven independent Commonwealth countries: the United Kingdom, Canada, Australia, New Zealand, South Africa, Pakistan, and Ceylon (known today as Sri Lanka), as well as head of the Commonwealth. Elizabeth reigned as a constitutional monarch through major political changes such as the Troubles in Northern Ireland, devolution in the United Kingdom, the decolonisation of Africa, and the United Kingdom's accession to the European Communities and withdrawal from the European Union. The number of her realms varied over time as territories gained independence and some realms became republics. As queen, Elizabeth was served by more than 170 prime ministers across her realms. Her many historic visits and meetings included state visits to China in 1986, to Russia in 1994, and to the Republic of Ireland in 2011, and meetings with five popes and fourteen US presidents. Significant events included Elizabeth's coronation in 1953 and the celebrations of her Silver, Golden, Diamond, and Platinum jubilees in 1977, 2002, 2012, and 2022, respectively. Although she faced occasional republican sentiment and media criticism of her family—particularly after the breakdowns of her children's marriages, her annus horribilis in 1992, and the death in 1997 of her former daughter-in-law Diana—support for the monarchy in the United Kingdom remained consistently high throughout her lifetime, as did her personal popularity. Elizabeth died aged 96 at Balmoral Castle in September 2022, and was succeeded by her eldest son, Charles III. ## Early life Elizabeth was born on 21 April 1926, the first child of Prince Albert, Duke of York (later King George VI), and his wife, Elizabeth, Duchess of York (later Queen Elizabeth The Queen Mother). Her father was the second son of King George V and Queen Mary, and her mother was the youngest daughter of Scottish aristocrat Claude Bowes-Lyon, 14th Earl of Strathmore and Kinghorne. She was delivered at 02:40 (GMT) by Caesarean section at her maternal grandfather's London home, 17 Bruton Street in Mayfair. 
The Anglican Archbishop of York, Cosmo Gordon Lang, baptised her in the private chapel of Buckingham Palace on 29 May, and she was named Elizabeth after her mother; Alexandra after her paternal great-grandmother, who had died six months earlier; and Mary after her paternal grandmother. She was called "Lilibet" by her close family, based on what she called herself at first. She was cherished by her grandfather George V, whom she affectionately called "Grandpa England", and her regular visits during his serious illness in 1929 were credited in the popular press and by later biographers with raising his spirits and aiding his recovery. Elizabeth's only sibling, Princess Margaret, was born in 1930. The two princesses were educated at home under the supervision of their mother and their governess, Marion Crawford. Lessons concentrated on history, language, literature, and music. Crawford published a biography of Elizabeth and Margaret's childhood years entitled The Little Princesses in 1950, much to the dismay of the royal family. The book describes Elizabeth's love of horses and dogs, her orderliness, and her attitude of responsibility. Others echoed such observations: Winston Churchill described Elizabeth when she was two as "a character. She has an air of authority and reflectiveness astonishing in an infant." Her cousin Margaret Rhodes described her as "a jolly little girl, but fundamentally sensible and well-behaved". Elizabeth's early life was spent primarily at the Yorks' residences at 145 Piccadilly (their town house in London) and Royal Lodge in Windsor. ## Heir presumptive During her grandfather's reign, Elizabeth was third in the line of succession to the British throne, behind her uncle Edward and her father. Although her birth generated public interest, she was not expected to become queen, as Edward was still young and likely to marry and have children of his own, who would precede Elizabeth in the line of succession. When her grandfather died in 1936 and her uncle succeeded as Edward VIII, she became second in line to the throne, after her father. Later that year, Edward abdicated, after his proposed marriage to divorced socialite Wallis Simpson provoked a constitutional crisis. Consequently, Elizabeth's father became king, taking the regnal name George VI. Since Elizabeth had no brothers, she became heir presumptive. If her parents had subsequently had a son, he would have been heir apparent and above her in the line of succession, which was determined by the male-preference primogeniture in effect at the time. Elizabeth received private tuition in constitutional history from Henry Marten, Vice-Provost of Eton College, and learned French from a succession of native-speaking governesses. A Girl Guides company, the 1st Buckingham Palace Company, was formed specifically so she could socialise with girls her age. Later, she was enrolled as a Sea Ranger. In 1939, Elizabeth's parents toured Canada and the United States. As in 1927, when they had toured Australia and New Zealand, Elizabeth remained in Britain since her father thought she was too young to undertake public tours. She "looked tearful" as her parents departed. They corresponded regularly, and she and her parents made the first royal transatlantic telephone call on 18 May. ### Second World War In September 1939, Britain entered the Second World War. Lord Hailsham suggested that Princesses Elizabeth and Margaret should be evacuated to Canada to avoid the frequent aerial bombings of London by the Luftwaffe. 
This was rejected by their mother, who declared, "The children won't go without me. I won't leave without the King. And the King will never leave." The princesses stayed at Balmoral Castle, Scotland, until Christmas 1939, when they moved to Sandringham House, Norfolk. From February to May 1940, they lived at Royal Lodge, Windsor, until moving to Windsor Castle, where they lived for most of the next five years. At Windsor, the princesses staged pantomimes at Christmas in aid of the Queen's Wool Fund, which bought yarn to knit into military garments. In 1940, the 14-year-old Elizabeth made her first radio broadcast during the BBC's Children's Hour, addressing other children who had been evacuated from the cities. She stated: "We are trying to do all we can to help our gallant sailors, soldiers, and airmen, and we are trying, too, to bear our own share of the danger and sadness of war. We know, every one of us, that in the end all will be well." In 1943, Elizabeth undertook her first solo public appearance on a visit to the Grenadier Guards, of which she had been appointed colonel the previous year. As she approached her 18th birthday, Parliament changed the law so that she could act as one of five counsellors of state in the event of her father's incapacity or absence abroad, such as his visit to Italy in July 1944. In February 1945, she was appointed an honorary second subaltern in the Auxiliary Territorial Service with the service number 230873. She trained and worked as a driver and mechanic and was given the rank of honorary junior commander (female equivalent of captain at the time) five months later. At the end of the war in Europe, on Victory in Europe Day, Elizabeth and Margaret mingled incognito with the celebrating crowds in the streets of London. Elizabeth later said in a rare interview, "We asked my parents if we could go out and see for ourselves. I remember we were terrified of being recognised ... I remember lines of unknown people linking arms and walking down Whitehall, all of us just swept along on a tide of happiness and relief." During the war, plans were drawn to quell Welsh nationalism by affiliating Elizabeth more closely with Wales. Proposals, such as appointing her Constable of Caernarfon Castle or a patron of Urdd Gobaith Cymru (the Welsh League of Youth), were abandoned for several reasons, including fear of associating Elizabeth with conscientious objectors in the Urdd at a time when Britain was at war. Welsh politicians suggested she be made Princess of Wales on her 18th birthday. Home Secretary Herbert Morrison supported the idea, but the King rejected it because he felt such a title belonged solely to the wife of a Prince of Wales and the Prince of Wales had always been the heir apparent. In 1946, she was inducted into the Gorsedd of Bards at the National Eisteddfod of Wales. Elizabeth went on her first overseas tour in 1947, accompanying her parents through southern Africa. During the tour, in a broadcast to the British Commonwealth on her 21st birthday, she made the following pledge: "I declare before you all that my whole life, whether it be long or short, shall be devoted to your service and the service of our great imperial family to which we all belong." The oft-quoted speech was written by Dermot Morrah, a journalist for The Times. ### Marriage Elizabeth met her future husband, Prince Philip of Greece and Denmark, in 1934 and again in 1937. 
They were second cousins once removed through King Christian IX of Denmark and third cousins through Queen Victoria. After meeting for the third time at the Royal Naval College in Dartmouth in July 1939, Elizabeth—though only 13 years old—said she fell in love with Philip, who was 18, and they began to exchange letters. She was 21 when their engagement was officially announced on 9 July 1947. The engagement attracted some controversy. Philip had no financial standing, was foreign-born (though a British subject who had served in the Royal Navy throughout the Second World War), and had sisters who had married German noblemen with Nazi links. Marion Crawford wrote, "Some of the King's advisors did not think him good enough for her. He was a prince without a home or kingdom. Some of the papers played long and loud tunes on the string of Philip's foreign origin." Later biographies reported that Elizabeth's mother had reservations about the union initially and teased Philip as "the Hun". In later life, however, she told the biographer Tim Heald that Philip was "an English gentleman". Before the marriage, Philip renounced his Greek and Danish titles, officially converted from Greek Orthodoxy to Anglicanism, and adopted the style Lieutenant Philip Mountbatten, taking the surname of his mother's British family. Shortly before the wedding, he was created Duke of Edinburgh and granted the style His Royal Highness. Elizabeth and Philip were married on 20 November 1947 at Westminster Abbey. They received 2,500 wedding gifts from around the world. Elizabeth required ration coupons to buy the material for her gown (which was designed by Norman Hartnell) because Britain had not yet completely recovered from the devastation of the war. In post-war Britain, it was not acceptable for Philip's German relations, including his three surviving sisters, to be invited to the wedding. Neither was an invitation extended to the Duke of Windsor, formerly King Edward VIII. Elizabeth gave birth to her first child, Charles, in November 1948. One month earlier, the King had issued letters patent allowing her children to use the style and title of a royal prince or princess, to which they otherwise would not have been entitled as their father was no longer a royal prince. A second child, Princess Anne, was born in August 1950. Following their wedding, the couple leased Windlesham Moor, near Windsor Castle, until July 1949, when they took up residence at Clarence House in London. At various times between 1949 and 1951, the Duke of Edinburgh was stationed in the British Crown Colony of Malta as a serving Royal Navy officer. He and Elizabeth lived intermittently in Malta for several months at a time in the hamlet of Gwardamanġa, at Villa Guardamangia, the rented home of Philip's uncle Lord Mountbatten. Their two children remained in Britain. ## Reign ### Accession and coronation As George VI's health declined during 1951, Elizabeth frequently stood in for him at public events. When she visited Canada and Harry S. Truman in Washington, DC, in October 1951, her private secretary Martin Charteris carried a draft accession declaration in case the King died while she was on tour. In early 1952, Elizabeth and Philip set out for a tour of Australia and New Zealand by way of the British colony of Kenya. On 6 February, they had just returned to their Kenyan home, Sagana Lodge, after a night spent at Treetops Hotel, when word arrived of the death of Elizabeth's father. Philip broke the news to the new queen. 
She chose to retain Elizabeth as her regnal name, and was therefore called Elizabeth II. The numeral offended some Scots, as she was the first Elizabeth to rule in Scotland. She was proclaimed queen throughout her realms, and the royal party hastily returned to the United Kingdom. Elizabeth and Philip moved into Buckingham Palace. With Elizabeth's accession, it seemed possible that the royal house would take her husband's name, in line with the custom for married women of the time. Lord Mountbatten advocated for House of Mountbatten, and Philip suggested House of Edinburgh, after his ducal title. The British prime minister, Winston Churchill, and Elizabeth's grandmother Queen Mary favoured the retention of the House of Windsor. Elizabeth issued a declaration on 9 April 1952 that the royal house would continue to be Windsor. Philip complained, "I am the only man in the country not allowed to give his name to his own children." In 1960, the surname Mountbatten-Windsor was adopted for Philip and Elizabeth's male-line descendants who do not carry royal titles. Amid preparations for the coronation, Princess Margaret told her sister she wished to marry Peter Townsend, a divorcé 16 years Margaret's senior with two sons from his previous marriage. Elizabeth asked them to wait for a year; in the words of her private secretary, "the Queen was naturally sympathetic towards the Princess, but I think she thought—she hoped—given time, the affair would peter out." Senior politicians were against the match and the Church of England did not permit remarriage after divorce. If Margaret had contracted a civil marriage, she would have been expected to renounce her right of succession. Margaret decided to abandon her plans with Townsend. In 1960, she married Antony Armstrong-Jones, who was created Earl of Snowdon the following year. They were divorced in 1978. She did not remarry. Despite the death of Queen Mary on 24 March 1953, the coronation went ahead as planned on 2 June, as Mary had requested. The coronation ceremony in Westminster Abbey was televised for the first time, with the exception of the anointing and communion. On Elizabeth's instruction, her coronation gown was embroidered with the floral emblems of Commonwealth countries. ### Continuing evolution of the Commonwealth From Elizabeth's birth onwards, the British Empire continued its transformation into the Commonwealth of Nations. By the time of her accession in 1952, her role as head of multiple independent states was already established. In 1953, Elizabeth and her husband embarked on a seven-month round-the-world tour, visiting 13 countries and covering more than 40,000 miles (64,000 km) by land, sea and air. She became the first reigning monarch of Australia and New Zealand to visit those nations. During the tour, crowds were immense; three-quarters of the population of Australia were estimated to have seen her. Throughout her reign, Elizabeth made hundreds of state visits to other countries and tours of the Commonwealth; she was the most widely travelled head of state. In 1956, the British and French prime ministers, Sir Anthony Eden and Guy Mollet, discussed the possibility of France joining the Commonwealth. The proposal was never accepted, and the following year France signed the Treaty of Rome, which established the European Economic Community, the precursor to the European Union. In November 1956, Britain and France invaded Egypt in an ultimately unsuccessful attempt to capture the Suez Canal. 
Lord Mountbatten said Elizabeth was opposed to the invasion, though Eden denied it. Eden resigned two months later. The governing Conservative Party had no formal mechanism for choosing a leader, meaning that it fell to Elizabeth to decide whom to commission to form a government following Eden's resignation. Eden recommended she consult Lord Salisbury, the lord president of the council. Lord Salisbury and Lord Kilmuir, the lord chancellor, consulted the British Cabinet, Churchill, and the chairman of the backbench 1922 Committee, resulting in Elizabeth appointing their recommended candidate: Harold Macmillan. The Suez crisis and the choice of Eden's successor led, in 1957, to the first major personal criticism of Elizabeth. In a magazine he owned and edited, Lord Altrincham accused her of being "out of touch". Altrincham was denounced by public figures and slapped by a member of the public appalled by his comments. Six years later, in 1963, Macmillan resigned and advised Elizabeth to appoint Alec Douglas-Home as the prime minister, advice she followed. Elizabeth again came under criticism for appointing the prime minister on the advice of a small number of ministers or a single minister. In 1965, the Conservatives adopted a formal mechanism for electing a leader, thus relieving the Queen of her involvement. In 1957, Elizabeth made a state visit to the United States, where she addressed the United Nations General Assembly on behalf of the Commonwealth. On the same tour, she opened the 23rd Canadian Parliament, becoming the first monarch of Canada to open a parliamentary session. Two years later, solely in her capacity as Queen of Canada, she revisited the United States and toured Canada. In 1961, she toured Cyprus, India, Pakistan, Nepal, and Iran. On a visit to Ghana the same year, she dismissed fears for her safety, even though her host, President Kwame Nkrumah, who had replaced her as head of state, was a target for assassins. Harold Macmillan wrote, "The Queen has been absolutely determined all through ... She is impatient of the attitude towards her to treat her as ... a film star ... She has indeed 'the heart and stomach of a man' ... She loves her duty and means to be a Queen." Before her tour through parts of Quebec in 1964, the press reported extremists within the Quebec separatist movement were plotting Elizabeth's assassination. No attempt was made, but a riot did break out while she was in Montreal; Elizabeth's "calmness and courage in the face of the violence" was noted. Elizabeth gave birth to her third child, Prince Andrew, in February 1960, which was the first birth to a reigning British monarch since 1857. Her fourth child, Prince Edward, was born in March 1964. On 21 October 1966, the Aberfan disaster in Wales saw 116 children and 28 adults killed when a colliery spoil tip collapsed, engulfing Pantglas Junior School and the surrounding houses in the village. The Queen was criticised for waiting eight days before visiting the village, a delay she later regretted. ### Acceleration of decolonisation The 1960s and 1970s saw an acceleration in the decolonisation of Africa and the Caribbean. More than 20 countries gained independence from Britain as part of a planned transition to self-government. 
In 1965, however, the Rhodesian prime minister, Ian Smith, in opposition to moves towards majority rule, unilaterally declared independence while expressing "loyalty and devotion" to Elizabeth, declaring her "Queen of Rhodesia". Although Elizabeth formally dismissed him, and the international community applied sanctions against Rhodesia, his regime survived for over a decade. As Britain's ties to its former empire weakened, the British government sought entry to the European Community, a goal it achieved in 1973. Elizabeth toured Yugoslavia in October 1972, becoming the first British monarch to visit a communist country. She was received at the airport by President Josip Broz Tito, and a crowd of thousands greeted her in Belgrade. In February 1974, the British prime minister, Edward Heath, advised Elizabeth to call a general election in the middle of her tour of the Austronesian Pacific Rim, requiring her to fly back to Britain. The election resulted in a hung parliament; Heath's Conservatives were not the largest party but could stay in office if they formed a coalition with the Liberals. When discussions on forming a coalition foundered, Heath resigned, and Elizabeth asked the Leader of the Opposition, Labour's Harold Wilson, to form a government. A year later, at the height of the 1975 Australian constitutional crisis, the Australian prime minister, Gough Whitlam, was dismissed from his post by Governor-General Sir John Kerr, after the Opposition-controlled Senate rejected Whitlam's budget proposals. As Whitlam had a majority in the House of Representatives, Speaker Gordon Scholes appealed to Elizabeth to reverse Kerr's decision. She declined, saying she would not interfere in decisions reserved by the Constitution of Australia for the Governor-General. The crisis fuelled Australian republicanism. ### Silver Jubilee In 1977, Elizabeth marked the Silver Jubilee of her accession. Parties and events took place throughout the Commonwealth, many coinciding with her associated national and Commonwealth tours. The celebrations re-affirmed Elizabeth's popularity, despite virtually coincident negative press coverage of Princess Margaret's separation from her husband, Lord Snowdon. In 1978, Elizabeth endured a state visit to the United Kingdom by Romania's communist leader, Nicolae Ceaușescu, and his wife, Elena, though privately she thought they had "blood on their hands". The following year brought two blows: one was the unmasking of Anthony Blunt, former Surveyor of the Queen's Pictures, as a communist spy; the other was the assassination of her uncle-in-law Lord Mountbatten by the Provisional Irish Republican Army. According to Paul Martin Sr., by the end of the 1970s, Elizabeth was worried the Crown "had little meaning for" Pierre Trudeau, the Canadian prime minister. Tony Benn said Elizabeth found Trudeau "rather disappointing". Trudeau's supposed republicanism seemed to be confirmed by his antics, such as sliding down banisters at Buckingham Palace and pirouetting behind Elizabeth's back in 1977, and the removal of various Canadian royal symbols during his term of office. In 1980, Canadian politicians sent to London to discuss the patriation of the Canadian constitution found Elizabeth "better informed ... than any of the British politicians or bureaucrats". She was particularly interested after the failure of Bill C-60, which would have affected her role as head of state. 
### Press scrutiny and Thatcher premiership During the 1981 Trooping the Colour ceremony, six weeks before the wedding of Prince Charles and Lady Diana Spencer, six shots were fired at Elizabeth from close range as she rode down The Mall, London, on her horse, Burmese. Police later discovered the shots were blanks. The 17-year-old assailant, Marcus Sarjeant, was sentenced to five years in prison and released after three. Elizabeth's composure and skill in controlling her mount were widely praised. That October, Elizabeth was the subject of another attack while on a visit to Dunedin, New Zealand. Christopher John Lewis, who was 17 years old, fired a shot with a .22 rifle from the fifth floor of a building overlooking the parade but missed. Lewis was arrested, but instead of being charged with attempted murder or treason was sentenced to three years in jail for unlawful possession and discharge of a firearm. Two years into his sentence, he attempted to escape a psychiatric hospital with the intention of assassinating Charles, who was visiting the country with Diana and their son Prince William. From April to September 1982, Elizabeth's son Andrew served with British forces in the Falklands War, for which she reportedly felt anxiety and pride. On 9 July, she awoke in her bedroom at Buckingham Palace to find an intruder, Michael Fagan, in the room with her. In a serious lapse of security, assistance only arrived after two calls to the Palace police switchboard. After hosting US president Ronald Reagan at Windsor Castle in 1982 and visiting his California ranch in 1983, Elizabeth was angered when his administration ordered the invasion of Grenada, one of her Caribbean realms, without informing her. Intense media interest in the opinions and private lives of the royal family during the 1980s led to a series of sensational stories in the press, pioneered by The Sun tabloid. As Kelvin MacKenzie, editor of The Sun, told his staff: "Give me a Sunday for Monday splash on the Royals. Don't worry if it's not true—so long as there's not too much of a fuss about it afterwards." Newspaper editor Donald Trelford wrote in The Observer of 21 September 1986: "The royal soap opera has now reached such a pitch of public interest that the boundary between fact and fiction has been lost sight of ... it is not just that some papers don't check their facts or accept denials: they don't care if the stories are true or not." It was reported, most notably in The Sunday Times of 20 July 1986, that Elizabeth was worried that Margaret Thatcher's economic policies fostered social divisions and was alarmed by high unemployment, a series of riots, the violence of a miners' strike, and Thatcher's refusal to apply sanctions against the apartheid regime in South Africa. The sources of the rumours included royal aide Michael Shea and Commonwealth secretary-general Shridath Ramphal, but Shea claimed his remarks were taken out of context and embellished by speculation. Thatcher reputedly said Elizabeth would vote for the Social Democratic Party—Thatcher's political opponents. Thatcher's biographer, John Campbell, claimed "the report was a piece of journalistic mischief-making". Reports of acrimony between them were exaggerated, and Elizabeth gave two honours in her personal gift—membership in the Order of Merit and the Order of the Garter—to Thatcher after her replacement as prime minister by John Major. 
Brian Mulroney, Canadian prime minister between 1984 and 1993, said Elizabeth was a "behind the scenes force" in ending apartheid. In 1986, Elizabeth paid a six-day state visit to the People's Republic of China, becoming the first British monarch to visit the country. The tour included the Forbidden City, the Great Wall of China, and the Terracotta Warriors. At a state banquet, Elizabeth joked about the first British emissary to China being lost at sea with Queen Elizabeth I's letter to the Wanli Emperor, and remarked, "fortunately postal services have improved since 1602". Elizabeth's visit also signified the acceptance of both countries that sovereignty over Hong Kong would be transferred from the United Kingdom to China in 1997. By the end of the 1980s, Elizabeth had become the target of satire. The involvement of younger members of the royal family in the charity game show It's a Royal Knockout in 1987 was ridiculed. In Canada, Elizabeth publicly supported politically divisive constitutional amendments, prompting criticism from opponents of the proposed changes, including Pierre Trudeau. The same year, the elected Fijian government was deposed in a military coup. As monarch of Fiji, Elizabeth supported the attempts of Governor-General Ratu Sir Penaia Ganilau to assert executive power and negotiate a settlement. Coup leader Sitiveni Rabuka deposed Ganilau and declared Fiji a republic. ### Turbulent 1990s and annus horribilis In the wake of coalition victory in the Gulf War, Elizabeth became the first British monarch to address a joint meeting of the United States Congress in May 1991. On 24 November 1992, in a speech to mark the Ruby Jubilee of her accession to the throne, Elizabeth called 1992 her annus horribilis (a Latin phrase, meaning "horrible year"). Republican feeling in Britain had risen because of press estimates of Elizabeth's private wealth—contradicted by the Palace—and reports of affairs and strained marriages among her extended family. In March, her second son, Prince Andrew, separated from his wife, Sarah, and Mauritius removed Elizabeth as head of state; her daughter, Princess Anne, divorced Captain Mark Phillips in April; angry demonstrators in Dresden threw eggs at Elizabeth during a state visit to Germany in October; and a large fire broke out at Windsor Castle, one of her official residences, in November. The monarchy came under increased criticism and public scrutiny. In an unusually personal speech, Elizabeth said that any institution must expect criticism, but suggested it might be done with "a touch of humour, gentleness and understanding". Two days later, John Major announced plans to reform the royal finances, drawn up the previous year, including Elizabeth paying income tax from 1993 onwards, and a reduction in the civil list. In December, Prince Charles and his wife, Diana, formally separated. At the end of the year, Elizabeth sued The Sun newspaper for breach of copyright when it published the text of her annual Christmas message two days before it was broadcast. The newspaper was forced to pay her legal fees and donated £200,000 to charity. Elizabeth's solicitors had taken successful action against The Sun five years earlier for breach of copyright after it published a photograph of her daughter-in-law the Duchess of York and her granddaughter Princess Beatrice. In January 1994, Elizabeth broke the scaphoid bone in her left wrist as the horse she was riding at Sandringham tripped and fell. 
In October 1994, she became the first reigning British monarch to set foot on Russian soil. In October 1995, Elizabeth was tricked into a hoax call by Montreal radio host Pierre Brassard impersonating Canadian prime minister Jean Chrétien. Elizabeth, who believed that she was speaking to Chrétien, said she supported Canadian unity and would try to influence Quebec's referendum on proposals to break away from Canada. In the year that followed, public revelations on the state of Charles and Diana's marriage continued. In consultation with her husband and John Major, as well as the Archbishop of Canterbury (George Carey) and her private secretary (Robert Fellowes), Elizabeth wrote to Charles and Diana at the end of December 1995, suggesting that a divorce would be advisable. In August 1997, a year after the divorce, Diana was killed in a car crash in Paris. Elizabeth was on holiday with her extended family at Balmoral. Diana's two sons, Princes William and Harry, wanted to attend church, so Elizabeth and Philip took them that morning. Afterwards, for five days, the royal couple shielded their grandsons from the intense press interest by keeping them at Balmoral where they could grieve in private, but the royal family's silence and seclusion, and the failure to fly a flag at half-mast over Buckingham Palace, caused public dismay. Pressured by the hostile reaction, Elizabeth agreed to return to London and address the nation in a live television broadcast on 5 September, the day before Diana's funeral. In the broadcast, she expressed admiration for Diana and her feelings "as a grandmother" for the two princes. As a result, much of the public hostility evaporated. In October 1997, Elizabeth and Philip made a state visit to India, which included a controversial visit to the site of the Jallianwala Bagh massacre to pay her respects. Protesters chanted "Killer Queen, go back", and there were demands for her to apologise for the action of British troops 78 years earlier. At the memorial in the park, she and Philip laid a wreath and stood for a 30‐second moment of silence. As a result, much of the fury among the public softened, and the protests were called off. That November, Elizabeth and her husband held a reception at Banqueting House to mark their golden wedding anniversary. Elizabeth made a speech and praised Philip for his role as a consort, referring to him as "my strength and stay". In 1999, as part of the process of devolution within the UK, Elizabeth formally opened newly established legislatures for Wales and Scotland: the National Assembly for Wales at Cardiff in May, and the Scottish Parliament at Edinburgh in July. ### Golden Jubilee On the eve of the new millennium, Elizabeth and Philip boarded a vessel from Southwark, bound for the Millennium Dome. Before passing under Tower Bridge, Elizabeth lit the National Millennium Beacon in the Pool of London using a laser torch. Shortly before midnight, she officially opened the Dome. During the singing of Auld Lang Syne, Elizabeth held hands with Philip and British prime minister Tony Blair. In 2002, Elizabeth marked her Golden Jubilee, the 50th anniversary of her accession. Her sister and mother died in February and March, respectively, and the media speculated on whether the Jubilee would be a success or a failure. 
She again undertook an extensive tour of her realms, beginning in Jamaica in February, where she called the farewell banquet "memorable" after a power cut plunged King's House, the official residence of the governor-general, into darkness. As in 1977, there were street parties and commemorative events, and monuments were named to honour the occasion. One million people attended each day of the three-day main Jubilee celebration in London, and the enthusiasm shown for Elizabeth by the public was greater than many journalists had anticipated. In 2003, Elizabeth sued the Daily Mirror for breach of confidence and obtained an injunction which prevented the outlet from publishing information gathered by a reporter who posed as a footman at Buckingham Palace. The newspaper also paid £25,000 towards her legal costs. Though generally healthy throughout her life, in 2003 she had keyhole surgery on both knees. In October 2006, she missed the opening of the new Emirates Stadium because of a strained back muscle that had been troubling her since the summer. In May 2007, citing unnamed sources, The Daily Telegraph reported that Elizabeth was "exasperated and frustrated" by the policies of Tony Blair, that she was concerned the British Armed Forces were overstretched in Iraq and Afghanistan, and that she had raised concerns over rural and countryside issues with Blair. She was, however, said to admire Blair's efforts to achieve peace in Northern Ireland. She became the first British monarch to celebrate a diamond wedding anniversary in November 2007. On 20 March 2008, at the Church of Ireland St Patrick's Cathedral, Armagh, Elizabeth attended the first Maundy service held outside England and Wales. Elizabeth addressed the UN General Assembly for a second time in 2010, again in her capacity as Queen of all Commonwealth realms and Head of the Commonwealth. The UN secretary-general, Ban Ki-moon, introduced her as "an anchor for our age". During her visit to New York, which followed a tour of Canada, she officially opened a memorial garden for British victims of the 9/11 attacks. Elizabeth's 11-day visit to Australia in October 2011 was her 16th visit to the country since 1954. By invitation of the Irish president, Mary McAleese, she made the first state visit to the Republic of Ireland by a British monarch in May 2011. ### Diamond Jubilee and longevity Elizabeth's 2012 Diamond Jubilee marked 60 years on the throne, and celebrations were held throughout her realms, the wider Commonwealth, and beyond. She and her husband undertook an extensive tour of the United Kingdom, while her children and grandchildren embarked on royal tours of other Commonwealth states on her behalf. On 4 June, Jubilee beacons were lit around the world. On 18 December, she became the first British sovereign to attend a peacetime Cabinet meeting since George III in 1781. Elizabeth, who opened the 1976 Summer Olympics in Montreal, also opened the 2012 Summer Olympics and Paralympics in London, making her the first head of state to open two Olympic Games in two countries. For the London Olympics, she played herself in a short film as part of the opening ceremony, alongside Daniel Craig as James Bond. On 4 April 2013, she received an honorary BAFTA for her patronage of the film industry and was called "the most memorable Bond girl yet" at the award ceremony. On 3 March 2013, Elizabeth stayed overnight at King Edward VII's Hospital as a precaution after developing symptoms of gastroenteritis. 
A week later, she signed the new Charter of the Commonwealth. Because of her age and the need for her to limit travelling, in 2013 she chose not to attend the biennial Commonwealth Heads of Government Meeting for the first time in 40 years. She was represented at the summit in Sri Lanka by Prince Charles. On 20 April 2018, the Commonwealth heads of government announced that Charles would succeed her as Head of the Commonwealth, which she stated was her "sincere wish". She underwent cataract surgery in May 2018. In March 2019, she gave up driving on public roads, largely as a consequence of a car crash involving her husband two months earlier. Elizabeth surpassed her great-great-grandmother, Queen Victoria, to become the longest-lived British monarch on 21 December 2007, and the longest-reigning British monarch and longest-reigning queen regnant and female head of state in the world on 9 September 2015. She became the oldest current monarch after King Abdullah of Saudi Arabia died on 23 January 2015. She later became the longest-reigning current monarch and the longest-serving current head of state following the death of King Bhumibol of Thailand on 13 October 2016, and the oldest current head of state on the resignation of Robert Mugabe of Zimbabwe on 21 November 2017. On 6 February 2017, she became the first British monarch to commemorate a sapphire jubilee, and on 20 November, she was the first British monarch to celebrate a platinum wedding anniversary. Philip had retired from his official duties as the Queen's consort in August 2017. ### COVID-19 pandemic On 19 March 2020, as the COVID-19 pandemic hit the United Kingdom, Elizabeth moved to Windsor Castle and sequestered there as a precaution. Public engagements were cancelled and Windsor Castle followed a strict sanitary protocol nicknamed "HMS Bubble". On 5 April, in a televised broadcast watched by an estimated 24 million viewers in the UK, she asked people to "take comfort that while we may have more still to endure, better days will return: we will be with our friends again; we will be with our families again; we will meet again." On 8 May, the 75th anniversary of VE Day, in a television broadcast at 9 pm—the exact time at which her father George VI had broadcast to the nation on the same day in 1945—she asked people to "never give up, never despair". In October, she visited the UK's Defence Science and Technology Laboratory in Wiltshire, her first public engagement since the start of the pandemic. On 4 November, she appeared masked for the first time in public, during a private pilgrimage to the Tomb of the Unknown Warrior at Westminster Abbey, to mark the centenary of his burial. In 2021, she received her first and second COVID-19 vaccinations in January and April respectively. Prince Philip died on 9 April 2021, after 73 years of marriage, making Elizabeth the first British monarch to reign as a widow or widower since Queen Victoria. She was reportedly at her husband's bedside when he died, and remarked in private that his death had "left a huge void". Due to the COVID-19 restrictions in place in England at the time, Elizabeth sat alone at Philip's funeral service, which evoked sympathy from people around the world. In her Christmas broadcast that year, which was ultimately her last, she paid a personal tribute to her "beloved Philip", saying, "That mischievous, inquiring twinkle was as bright at the end as when I first set eyes on him". 
Despite the pandemic, Elizabeth attended the 2021 State Opening of Parliament in May, and the 47th G7 summit in June. On 5 July, the 73rd anniversary of the founding of the UK's National Health Service, she announced that the NHS would be awarded the George Cross to "recognise all NHS staff, past and present, across all disciplines and all four nations". In October 2021, she began using a walking stick during public engagements for the first time since her operation in 2004. Following an overnight stay in hospital on 20 October, her previously scheduled visits to Northern Ireland, the COP26 summit in Glasgow, and the 2021 National Service of Remembrance were cancelled on health grounds. On Christmas Day 2021, while she was staying at Windsor Castle, 19-year-old Jaswant Singh Chail broke into the gardens using a rope ladder and carrying a crossbow with the aim of assassinating Elizabeth in revenge for the Amritsar massacre. Before he could enter any buildings, he was arrested and detained under the Mental Health Act. In 2023, he pleaded guilty to attempting to injure or alarm the sovereign. ### Platinum Jubilee Elizabeth's Platinum Jubilee began on 6 February 2022, marking 70 years since she acceded to the throne on her father's death. On the eve of the date, she held a reception at Sandringham House for pensioners, local Women's Institute members and charity volunteers. In her accession day message, Elizabeth renewed her commitment to a lifetime of public service, which she had originally made in 1947. Later that month, Elizabeth had "mild cold-like symptoms" and tested positive for COVID-19, along with some staff and family members. She cancelled two virtual audiences on 22 February, but held a phone conversation with British prime minister Boris Johnson the following day amid a crisis on the Russo-Ukrainian border, following which she made a donation to the Disasters Emergency Committee (DEC) Ukraine Humanitarian Appeal. On 28 February, she was reported to have recovered and spent time with her family at Frogmore. On 7 March, Elizabeth met Canadian prime minister Justin Trudeau at Windsor Castle, in her first in-person engagement since her COVID diagnosis. She later remarked that COVID infection "leave[s] one very tired and exhausted ... It's not a nice result". Elizabeth was present at the service of thanksgiving for Prince Philip at Westminster Abbey on 29 March, but was unable to attend the annual Commonwealth Day service that month or the Royal Maundy service in April, due to "episodic mobility problems". She missed the State Opening of Parliament in May for the first time in 59 years. (She did not attend the 1959 and 1963 state openings as she was pregnant with Prince Andrew and Prince Edward, respectively.) In her absence, Parliament was opened by the Prince of Wales and the Duke of Cambridge as counsellors of state. During the Platinum Jubilee celebrations, Elizabeth was largely confined to balcony appearances and missed the National Service of Thanksgiving. For the Jubilee concert, she took part in a sketch with Paddington Bear that opened the event outside Buckingham Palace. On 13 June, she became the second-longest reigning monarch in history among those whose exact dates of reign are known, with 70 years, 127 days reigned—surpassing King Bhumibol Adulyadej of Thailand. On 6 September, she appointed her 15th British prime minister, Liz Truss, at Balmoral Castle in Scotland. This marked the only time she did not receive a new prime minister at Buckingham Palace during her reign. 
No other British reign had seen so many prime ministers. The Queen's last public message, issued on 7 September, expressed condolences to Canadians in the aftermath of the Saskatchewan stabbings. Elizabeth never planned to abdicate, though she took on fewer public engagements as she grew older and Prince Charles took on more of her duties. The Queen told Canadian governor-general Adrienne Clarkson in a meeting in 2002 that she would never abdicate, saying "It is not our tradition. Although, I suppose if I became completely gaga, one would have to do something". In June 2022, Elizabeth met the Archbishop of Canterbury, Justin Welby, who "came away thinking there is someone who has no fear of death, has hope in the future, knows the rock on which she stands and that gives her strength." ## Death On 8 September 2022, Buckingham Palace released a statement which read: "Following further evaluation this morning, the Queen's doctors are concerned for Her Majesty's health and have recommended she remain under medical supervision. The Queen remains comfortable and at Balmoral." Her immediate family rushed to Balmoral to be by her side. She died peacefully at 15:10 BST at the age of 96, with two of her children, Charles and Anne, by her side; Charles immediately succeeded as monarch. Her death was announced to the public at 18:30, setting in motion Operation London Bridge and, because she died in Scotland, Operation Unicorn. Elizabeth was the first monarch to die in Scotland since James V in 1542. Her death certificate recorded her cause of death as "old age". On 12 September, Elizabeth's coffin was carried up the Royal Mile in a procession to St Giles' Cathedral, where the Crown of Scotland was placed on it. Her coffin lay at rest at the cathedral for 24 hours, guarded by the Royal Company of Archers, during which around 33,000 people filed past the coffin. It was taken by air to London on 13 September. On 14 September, her coffin was taken in a military procession from Buckingham Palace to Westminster Hall, where Elizabeth lay in state for four days. The coffin was guarded by members of both the Sovereign's Bodyguard and the Household Division. An estimated 250,000 members of the public filed past the coffin, as did politicians and other public figures. On 16 September, Elizabeth's children held a vigil around her coffin, and the next day her eight grandchildren did the same. Elizabeth's state funeral was held at Westminster Abbey on 19 September, which marked the first time that a monarch's funeral service had been held at the Abbey since George II in 1760. More than a million people lined the streets of central London, and the day was declared a holiday in several Commonwealth countries. In Windsor, a final procession involving 1,000 military personnel took place, which 97,000 people witnessed. Elizabeth's fell pony and two royal corgis stood at the side of the procession. After a committal service at St George's Chapel, Windsor Castle, Elizabeth was interred with her husband Philip in the King George VI Memorial Chapel later the same day, in a private ceremony attended by her closest family members. ## Legacy ### Beliefs, activities, and interests Elizabeth rarely gave interviews, and little was known of her political opinions, which she did not express explicitly in public. It is against convention to ask or reveal the monarch's views. 
When Times journalist Paul Routledge asked her about the miners' strike of 1984–85 during a royal tour of the newspaper's offices, she replied that it was "all about one man" (a reference to Arthur Scargill), with which Routledge disagreed. Routledge was widely criticised in the media for asking the question and claimed that he was unaware of the protocols. After the 2014 Scottish independence referendum, Prime Minister David Cameron was overheard saying that Elizabeth was pleased with the outcome. She had arguably issued a public coded statement about the referendum by telling one woman outside Balmoral Kirk that she hoped people would think "very carefully" about the outcome. It emerged later that Cameron had specifically requested that she register her concern. Elizabeth had a deep sense of religious and civic duty, and took her Coronation Oath seriously. Aside from her official religious role as Supreme Governor of the established Church of England, she worshipped with that church and with the national Church of Scotland. She demonstrated support for inter-faith relations and met with leaders of other churches and religions, including five popes: Pius XII, John XXIII, John Paul II, Benedict XVI and Francis. A personal note about her faith often featured in her annual Christmas Message broadcast to the Commonwealth. In 2000, she said: > To many of us, our beliefs are of fundamental importance. For me the teachings of Christ and my own personal accountability before God provide a framework in which I try to lead my life. I, like so many of you, have drawn great comfort in difficult times from Christ's words and example. Elizabeth was patron of more than 600 organisations and charities. The Charities Aid Foundation estimated that Elizabeth helped raise over £1.4 billion for her patronages during her reign. Her main leisure interests included equestrianism and dogs, especially her Pembroke Welsh Corgis. Her lifelong love of corgis began in 1933 with Dookie, the first corgi owned by her family. Scenes of a relaxed, informal home life were occasionally witnessed; she and her family, from time to time, prepared a meal together and washed the dishes afterwards. ### Media depiction and public opinion In the 1950s, as a young woman at the start of her reign, Elizabeth was depicted as a glamorous "fairytale Queen". After the trauma of the Second World War, it was a time of hope, a period of progress and achievement heralding a "new Elizabethan age". Lord Altrincham's accusation in 1957 that her speeches sounded like those of a "priggish schoolgirl" was an extremely rare criticism. In the late 1960s, attempts to portray a more modern image of the monarchy were made in the television documentary Royal Family and by televising Prince Charles's investiture as Prince of Wales. Elizabeth also instituted other new practices; her first royal walkabout, meeting ordinary members of the public, took place during a tour of Australia and New Zealand in 1970. Her wardrobe developed a recognisable, signature style driven more by function than fashion. In public, she took to wearing mostly solid-colour overcoats and decorative hats, allowing her to be seen easily in a crowd. By the end of her reign, nearly one third of Britons had seen or met Elizabeth in person. At Elizabeth's Silver Jubilee in 1977, the crowds and celebrations were genuinely enthusiastic; but, in the 1980s, public criticism of the royal family increased, as the personal and working lives of Elizabeth's children came under media scrutiny. 
Her popularity sank to a low point in the 1990s. Under pressure from public opinion, she began to pay income tax for the first time, and Buckingham Palace was opened to the public. Although support for republicanism in Britain seemed higher than at any time in living memory, republican ideology was still a minority viewpoint, and Elizabeth herself had high approval ratings. Criticism was focused on the institution of the monarchy itself, and the conduct of Elizabeth's wider family, rather than her own behaviour and actions. Discontent with the monarchy reached its peak on the death of Diana, Princess of Wales, although Elizabeth's personal popularity—as well as general support for the monarchy—rebounded after her live television broadcast to the world five days after Diana's death. In November 1999, a referendum in Australia on the future of the Australian monarchy favoured its retention in preference to an indirectly elected head of state. Many republicans credited Elizabeth's personal popularity with the survival of the monarchy in Australia. In 2010, Prime Minister Julia Gillard noted that there was a "deep affection" for Elizabeth in Australia and that another referendum on the monarchy should wait until after her reign. Gillard's successor, Malcolm Turnbull, who led the republican campaign in 1999, similarly believed that Australians would not vote to become a republic in her lifetime. "She's been an extraordinary head of state", Turnbull said in 2021, "and I think frankly, in Australia, there are more Elizabethans than there are monarchists". Similarly, referendums in both Tuvalu in 2008 and Saint Vincent and the Grenadines in 2009 saw voters reject proposals to become republics. Polls in Britain in 2006 and 2007 revealed strong support for the monarchy, and in 2012, Elizabeth's Diamond Jubilee year, her approval ratings hit 90 per cent. Her family came under scrutiny again in the last few years of her life due to her son Andrew's association with convicted sex offenders Jeffrey Epstein and Ghislaine Maxwell, his lawsuit with Virginia Giuffre amidst accusations of sexual impropriety, and her grandson Harry and his wife Meghan's exit from the working royal family and subsequent move to the United States. Polling in Great Britain during the Platinum Jubilee, however, showed support for maintaining the monarchy and Elizabeth's personal popularity remained strong. As of 2021 she remained the third most admired woman in the world according to the annual Gallup poll, her 52 appearances on the list meaning she had been in the top ten more than any other woman in the poll's history. Elizabeth was portrayed in a variety of media by many notable artists, including painters Pietro Annigoni, Peter Blake, Chinwe Chukwuogo-Roy, Terence Cuneo, Lucian Freud, Rolf Harris, Damien Hirst, Juliet Pannett and Tai-Shan Schierenberg. Notable photographers of Elizabeth included Cecil Beaton, Yousuf Karsh, Anwar Hussein, Annie Leibovitz, Lord Lichfield, Terry O'Neill, John Swannell and Dorothy Wilding. The first official portrait photograph of Elizabeth was taken by Marcus Adams in 1926. ## Titles, styles, honours, and arms ### Titles and styles Elizabeth held many titles and honorary military positions throughout the Commonwealth, was sovereign of many orders in her own countries and received honours and awards from around the world. 
In each of her realms, she had a distinct title that followed a similar formula: Queen of Saint Lucia and of Her other Realms and Territories in Saint Lucia, Queen of Australia and Her other Realms and Territories in Australia, etc. In the Isle of Man, which is a Crown Dependency rather than a separate realm, she was known as Lord of Mann. Elizabeth was also styled Defender of the Faith.

### Arms

From 21 April 1944 until her accession, Elizabeth's arms consisted of a lozenge bearing the royal coat of arms of the United Kingdom differenced with a label of three points argent, the centre point bearing a Tudor rose and the first and third a cross of St George. Upon her accession, she inherited the various arms her father held as sovereign. Elizabeth also possessed royal standards and personal flags for use in the United Kingdom, Canada, Australia, New Zealand, Jamaica, and elsewhere.

## Issue

## Ancestry

## See also

- Finances of the British royal family
- Household of Elizabeth II
- List of things named after Elizabeth II
- List of jubilees of Elizabeth II
- List of special addresses made by Elizabeth II
- Royal eponyms in Canada
- Royal descendants of Queen Victoria and of King Christian IX
- List of covers of Time magazine (1920s), (1940s), (1950s), (2010s)
5,054,349
Thomas the Slav
1,168,875,135
Byzantine military commander (c. 760–823)
[ "760s births", "823 deaths", "9th-century Byzantine people", "9th-century executions by the Byzantine Empire", "Abbasid Caliphate–Byzantine Empire relations", "Byzantine Pontians", "Byzantine generals", "Byzantine people of the Arab–Byzantine wars", "Byzantine usurpers", "Executed Byzantine people", "People executed by impalement" ]
Thomas the Slav (Greek: Θωμᾶς ὁ Σλάβος, romanized: Thōmas ho Slavos, c. 760 – October 823) was a 9th-century Byzantine military commander, most notable for leading a wide-scale revolt in 821–23 against Emperor Michael II the Amorian (r. 820–829).

An army officer of Slavic origin from the Pontus region (now north-eastern Turkey), Thomas rose to prominence, along with the future emperors Michael II and Leo V the Armenian (r. 813–820), under the protection of general Bardanes Tourkos. After Bardanes' failed rebellion in 803, Thomas fell into obscurity until Leo V's rise to the throne, when Thomas was raised to a senior military command in central Asia Minor. After the murder of Leo and usurpation of the throne by Michael the Amorian, Thomas revolted, claiming the throne for himself. Thomas quickly secured support from most of the themes (provinces) and troops in Asia Minor, defeated Michael's initial counter-attack and concluded an alliance with the Abbasid Caliphate. After winning over the maritime themes and their ships as well, he crossed with his army to Europe and laid siege to Constantinople. The imperial capital withstood Thomas's attacks by land and sea, while Michael II called for help from the Bulgarian ruler khan Omurtag. Omurtag attacked Thomas's army and, although repelled, the Bulgarians inflicted heavy casualties on Thomas's men, who broke and fled when Michael took to the field a few months later. Thomas and his supporters sought refuge in Arcadiopolis, where he was soon blockaded by Michael's troops. In the end, Thomas's supporters surrendered him in exchange for a pardon, and he was executed.

Thomas's rebellion was one of the largest in the Byzantine Empire's history, but its precise circumstances are unclear due to competing historical narratives, which have come to include claims fabricated by Michael to blacken his opponent's name. Consequently, various motives and driving forces have been attributed to Thomas and his followers. As summarized by the Oxford Dictionary of Byzantium, "Thomas's revolt has been variously attributed to a reaction against Iconoclasm, a social revolution and popular uprising, a revolt by the Empire's non-Greek ethnic groups, Thomas's personal ambitions, and his desire to avenge Leo V." Its effects on the military position of the Empire, particularly vis-à-vis the Arabs, are also disputed.

## Early life and career

The 11th-century Theophanes Continuatus states that Thomas was descended from South Slavs resettled in Asia Minor by successive Byzantine emperors, while the 10th-century chronicler Genesios calls him "Thomas from Lake Gouzourou, of Armenian race". Most modern scholars support his Slavic descent and believe his birthplace to have been near Gaziura in the Pontus; hence his epithet of "the Slav", which has been applied to him only in modern times. Nothing is known about his family and early life, except that his parents were poor and that Thomas himself had received no education. Given that he was between 50 and 60 years old at the time of the rebellion, he was probably born around 760.

Two different accounts of Thomas's life appear in both Genesios and Theophanes Continuatus. According to the first account, Thomas first appeared in 803 accompanying general Bardanes Tourkos, and pursued a military career until launching his revolt in late 820. In the second version, he came to Constantinople as a poor youth and entered the service of a man with the high court rank of patrikios.
Then, after being discovered trying to commit adultery with his master's wife, Thomas fled to the Arabs in Syria, where he remained for 25 years. Pretending to be the murdered emperor Constantine VI (r. 780–797), he then led an Arab-sponsored invasion of Asia Minor, but was defeated and punished.

Classical and Byzantine scholar J. B. Bury tried to reconcile the two narratives, placing Thomas's flight to the Abbasid Caliphate at around 788 and then having him return to Byzantine service before 803, while the Russian scholar Alexander Vasiliev interpreted the sources as implying that Thomas fled to the Caliphate at Constantine VI's deposition in 797, and that his participation in Bardanes's revolt must be discounted entirely. The second version of Thomas's story is explicitly preferred by Genesios and Theophanes Continuatus, and is the only one recorded in 9th-century sources, namely the chronicle of George the Monk and the Life of Saints David, Symeon, and George of Lesbos. Nevertheless, the French Byzantinist Paul Lemerle came to consider it an unreliable later tradition created by Thomas's rival Michael II to discredit him, and rejected it altogether, preferring to rely on the first account alone. Most modern scholars follow him in this interpretation.

The first tradition relates that Thomas served as a spatharios (staff officer) to Bardanes Tourkos, the monostrategos ("single-general", i.e. commander-in-chief) of the eastern themes, who in 803 rose in rebellion against Emperor Nikephoros I (r. 802–811). Alongside Thomas were two other young spatharioi in Bardanes's retinue, who formed a fraternal association: Leo the Armenian, the future Leo V, and Michael the Amorian, the future Michael II. According to a later hagiographic tradition, before launching his revolt, Bardanes, in the company of his three young protégés, is said to have visited a monk near Philomelion who was reputed to foresee the future. The monk predicted what would indeed happen: that Bardanes's revolt would fail, that Leo and Michael would both become emperors, and that Thomas would be acclaimed emperor and killed.

When Bardanes did in fact rise up, he failed to win any widespread support. Leo and Michael soon abandoned him and defected to the imperial camp, and were rewarded with senior military posts. Thomas alone remained loyal to Bardanes until his surrender. In the aftermath of Bardanes's failure, Thomas disappears from the sources for ten years. Bury suggests that he fled (for a second time according to his interpretation) to the Arabs, a view accepted by a number of other scholars, such as Romilly James Heald Jenkins. The historian Warren Treadgold, however, argues that Thomas stayed in the empire and may even have remained in active military service, and explains his obscurity by Thomas's association with Bardanes, which hampered his career.

In July 813, Leo the Armenian became emperor and quickly rewarded his old companions, giving them command over elite military forces. Michael received the tagma of the Excubitors (one of the professional guard cavalry regiments stationed around Constantinople), and Thomas the tourma (division) of the Foederati, stationed in the Anatolic Theme.

## Rebellion

### Background and motives

On Christmas Day 820, Leo was murdered in the palace chapel by officials under the direction of Michael the Amorian, who was quickly crowned emperor. At about the same time, Thomas launched a rebellion in the Anatolic Theme. Sources are divided on the exact chronology and motives of the revolt.
George the Monk, the hagiographic sources, and a letter from Michael II to the western emperor Louis the Pious claim that Thomas had risen up against Leo before Michael's usurpation. This chronology is followed by almost all later Byzantine chroniclers like Genesios, Theophanes Continuatus, and Skylitzes, as well as a number of modern scholars like J. B. Bury and Alexander Kazhdan. In his study of Thomas and the revolt, Paul Lemerle dismisses this timeline as a later attempt by Michael to justify his usurpation as a response to Leo's failure to suppress the rebellion, and to exculpate himself from blame for the early defeats suffered by the imperial forces. Some recent studies follow Lemerle and prefer the account of Symeon Logothetes—generally considered the most accurate of the 10th-century sources—according to which Thomas rebelled a few days after the murder of Leo and in reaction to it.

Consequently, the empire became divided in a struggle that was less a rebellion against the established government and more a contest for the throne between equal contenders. Michael held Constantinople and the European provinces, controlled the imperial bureaucracy, and had been properly crowned by the Patriarch, but he had come to the throne through murder, while Thomas gained support and legitimacy through his claim to avenge the fallen Leo, and he won the backing of themes both in Asia and later in Europe. Thomas was a well-known, popular, and respected figure in Asia Minor, where Leo V had enjoyed considerable support. Michael, on the other hand, was virtually unknown outside the capital; his military record was unremarkable, he was uneducated and coarse of manner, his stutter earned him ridicule, and he was reputed to sympathize with the heretical religious sect of the Athinganoi, to which his family had belonged.

Byzantine accounts of Thomas's rebellion state that he did not in fact claim the throne under his own name but assumed the identity of Emperor Constantine VI, who had been deposed and murdered by his mother, Irene of Athens, in 797. Most modern scholars follow Lemerle, who dismisses this as yet another later fabrication. If the story contains any truth, it may originate from Thomas choosing to be crowned under the regnal name of "Constantine", but there is no evidence for such an act.

The possible appropriation of Constantine VI's identity is linked in some Byzantine sources with the statement that Thomas was a rumoured supporter of iconolatry, as opposed to Michael's support for iconoclasm: it was under Constantine VI that veneration of the icons was restored. Nevertheless, the ambiguous phrasing of the sources, the iconoclast leanings of many themes in Asia Minor, and Thomas's alliance with the Arabs seem to speak against any open commitment to icon worship on his part. Indeed, given Michael II's conciliatory approach during his early reign, the icon worship controversy does not seem to have been a major issue at the time, and in the view of modern scholars most probably did not play a major role in Thomas's revolt. The image of Thomas as an iconophile champion opposed to the iconoclast Michael II in later, Macedonian-era sources was probably the result of their own anti-iconoclast bias. Warren Treadgold furthermore suggests that, if true, Thomas's claim to be Constantine VI may have been little more than a tale circulated to win support, and that Thomas pursued a "studied ambiguity" towards icons, designed to attract support from iconophiles.
In Treadgold's words, "Thomas could be all things to all men until he had conquered the whole empire, and then he would have time enough to disappoint some of his followers".

The account of Theophanes Continuatus on Thomas's revolt states that in this time, "the servant raised his hand against his master, the soldier against his officer, the captain against his general". This has led some scholars, chiefly Alexander Vasiliev and George Ostrogorsky, to regard Thomas's revolt as an expression of widespread discontent among the rural population, which suffered under heavy taxation. Other Byzantinists, notably Lemerle, dismiss rural discontent as a primary factor during the revolt.

Genesios and other chroniclers further state that Thomas won the support of "Hagarenes, Indians, Egyptians, Assyrians, Medians, Abasgians, Zichs, Iberians, Kabirs, Slavs, Huns, Vandals, Getae, the sectarians of Manes, Laz, Alanians, Chaldians, Armenians and every kind of other peoples". This has led to modern claims that Thomas's rebellion represented an uprising of the empire's non-Greek ethnic groups, but according to Lemerle, this exaggerated account is yet another piece of hostile disinformation. It is almost certain, however, that Thomas could count on support among the empire's Caucasian neighbours, for the presence of Abasgians, Armenians, and Iberians in his army is mentioned in the near-contemporary letter of Michael II to Louis the Pious. The reasons for this support are unclear; Thomas may have made unspecified promises to their rulers, but Lemerle suggests that the Armenians might have in part been motivated by revenge for Leo, their murdered kinsman.

### Outbreak and spread of the revolt in Asia Minor

As commander of the Foederati, Thomas was based at Amorion, the capital of the Anatolic Theme. Although he was junior to the theme's strategos (military governor), his proclamation received widespread support throughout Asia Minor. Within a short time, all the Asian themes supported Thomas, except for the Opsician Theme under the patrician Katakylas, a nephew of Michael II, and the Armeniac Theme, under its strategos, Olbianos. The Thracesian Theme wavered between the two rivals, but finally threw its support behind Thomas. More than two-thirds of the empire's Asian army eventually aligned with Thomas, while the defection of the provincial tax officials provided him with much-needed revenue.

Michael's first response was to order the Armeniac army to attack Thomas. The Armeniacs were easily defeated in battle and Thomas proceeded through the eastern parts of the Armeniac Theme to occupy the frontier region of Chaldia. His conquest of the Armeniac province was left incomplete because the Abbasids, taking advantage of the Byzantine civil war, launched raids by land and sea against southern Asia Minor, where Thomas had left few troops. Instead of returning to face these raids, Thomas launched a large-scale invasion of his own against Abbasid territory in spring 821, either in Syria (according to Bury and others) or in Arab-held Armenia (according to Treadgold). Thomas then sent an emissary to the Caliph al-Ma'mun, who was sufficiently impressed by Thomas's show of force to receive his proposals, especially in view of the Caliphate's own problems with the rebellion of the Khurramites under Babak Khorramdin. Thomas and Ma'mun concluded a treaty of peace and mutual alliance.
The Caliph allowed Thomas to recruit men from Arab-ruled territories, and gave him leave to cross the border and travel to Arab-held Antioch, where he was crowned emperor by the iconophile Patriarch of Antioch, Job. In exchange, Thomas is said to have promised to cede unspecified territories and become a tributary vassal of the Caliph, though the agreement's exact terms are left unclear in the sources. At about the same time, Thomas adopted a young man of obscure origin, whom he named Constantius and made his co-emperor.

Meanwhile, Michael II tried to win support among the iconophiles by appointing a relative of his as Archbishop of Ephesus, but his plan failed when the latter refused to be consecrated by the avowedly iconoclast Patriarch Antony I Kassimates. In an effort to consolidate his hold on the provinces, and especially the two Asian themes still loyal to him, Michael proclaimed a 25 percent reduction in taxes for 821–822.

By summer 821, Thomas had consolidated his position in the East, though the Opsician and Armeniac themes still eluded his control. He set his sights on the ultimate prize, Constantinople, the possession of which alone conferred full legitimacy on an emperor. Thomas assembled troops, gathered supplies, and built siege machines. To counter the powerful Imperial Fleet stationed in the capital, he built new ships to augment his existing fleet, which came from the Cibyrrhaeot and Aegean Sea naval themes, and possibly included task forces from the theme of Hellas. Thomas recalled Gregory Pterotos, a general and nephew of Leo V whom Michael had exiled to the island of Skyros, and gave him command of the fleet. By October, the thematic fleets loyal to Thomas had finished assembling at Lesbos, and Thomas's army began marching from the Thracesian Theme towards Abydos, where he intended to cross over into Europe.

At this point, Thomas suffered his first reversal of fortune: before his departure for Abydos, he had sent an army under his adoptive son Constantius against the Armeniacs. Constantius was ambushed by strategos Olbianos and killed, although the army was able to withdraw with relatively few casualties. Constantius's severed head was sent to Michael, who dispatched it to Thomas at Abydos. Thomas was undaunted by this relatively minor setback, and crossed over into Europe some time in late October or early November. There, Constantius was soon replaced as co-emperor by another obscure individual, a former monk whom Thomas also adopted and named Anastasius.

### Siege of Constantinople

Anticipating Thomas's move, Michael had gone out at the head of an army to the themes of Thrace and Macedonia in Constantinople's European hinterland and strengthened the garrisons of several fortresses there to secure the loyalty of their populace. When Thomas landed, the people of the European themes welcomed him with enthusiasm, and Michael was forced to withdraw to Constantinople. Volunteers, including many Slavs, flocked to Thomas's banner, and chroniclers recount that his army swelled to some 80,000 men as he set out towards Constantinople.

The capital was defended by the imperial tagmata, augmented by reinforcements from the Opsician and Armeniac themes. Michael had ordered the city walls to be repaired, and chained off the entrance to the Golden Horn, while the Imperial Fleet further guarded the capital from the sea.
Nevertheless, judging from Michael's passive stance, his forces were inferior to Thomas's; Warren Treadgold estimates Michael's army to have numbered approximately 35,000 men.

Thomas's fleet arrived at the capital first. Facing no opposition from the Imperial Fleet, the rebels broke or unfastened the chain and entered the Golden Horn, taking station near the mouths of the Barbysos river, where they awaited the arrival of Thomas and his army. Thomas arrived in early December. The sight of his huge force did not cow the capital's inhabitants: unlike the provinces, the capital's citizens and garrison stood firmly behind Michael. To further encourage his troops, Michael had his young son Theophilos lead a procession along the walls, carrying a piece of the True Cross and the mantle of the Virgin Mary, while a large standard was hoisted on top of the Church of St. Mary at Blachernae, in full view of both armies.

After subduing the cities around the capital, Thomas resolved to attack Constantinople from three sides, perhaps hoping his assault would impress its inhabitants or lead to defections. His deputies Anastasius and Gregory Pterotos would attack the Theodosian land and sea walls, respectively, while he would lead the main attack against the less formidable defenses protecting Blachernae. All of Thomas's forces were amply supplied with siege engines and catapults, and his fleet fielded quantities of Greek fire in addition to large shipborne catapults. Each of Thomas's attacks failed: the defenders' artillery proved superior and kept Thomas's engines away from the land walls, while adverse winds hindered the fleet from taking any meaningful action. Deciding that operations in the midst of winter were hazardous and unlikely to succeed, Thomas suspended all further attacks until spring and withdrew his army to winter quarters. Michael used the respite to ferry in additional reinforcements from Asia Minor and repair the walls of Blachernae.

When Thomas returned in spring, he decided to focus his attack on the Blachernae sector. Before the offensive, Michael himself ascended the walls and addressed Thomas's troops, exhorting them to abandon their commander and promising amnesty if they would defect. Thomas's army viewed the plea as a sign of weakness, and advanced confidently to begin the assault, but as they neared the wall, the defenders opened the gates and attacked. The sudden onslaught drove back Thomas's army; at the same time, the Imperial Fleet defeated Thomas's ships, whose crews broke and fled to the shore in panic. This defeat diminished Thomas's naval strength, and although he continued blockading the capital by land, the loss demoralized his supporters, who began defecting.

Gregory Pterotos, whose family was in Michael's hands, resolved to desert Thomas, followed by a small band of men loyal to him. He departed the rebel camp, headed west, and sent a monk to inform Michael of his defection, but the monk failed to circumvent the blockade and reach the capital. Upon learning of this defection, Thomas reacted quickly: with a select detachment, he followed Gregory, defeated his troops and killed the deserter. Thomas exploited this small victory for all it was worth, widely proclaiming that he had defeated Michael's troops "by land and sea". He sent messages to the themes of Greece, whose support had been lukewarm until that point, demanding additional ships. The themes responded forcefully, sending their squadrons, allegedly numbering 350 vessels, to join him.
Thus reinforced, Thomas decided to launch a two-pronged assault against Constantinople's sea walls, with his original fleet attacking the wall of the Golden Horn, and the new fleet attacking the south coast, looking towards the Sea of Marmara. Michael, however, did not remain idle: his own fleet attacked the thematic force soon after it arrived at its anchorage in Byrida. Using Greek fire, the Imperial Fleet destroyed many of the rebel vessels and captured most of the remaining ships. Only a few managed to escape and rejoin Thomas's forces. Through this victory, Michael secured control of the sea, but Thomas's army remained superior on land and continued its blockade of Constantinople. Minor skirmishes ensued for the remainder of the year, with Michael's forces sallying forth from the city to attack Thomas's forces. Although both sides claimed minor successes in these clashes, neither was able to gain a decisive advantage.

Michael turned to the empire's northern neighbour, Bulgaria, for help. The two states were bound by a 30-year treaty signed under Leo V, and the Bulgarian ruler, khan Omurtag (r. 814–831), was happy to respond to Michael's request for assistance. A later tradition, reported by Genesios and Theophanes Continuatus, holds that Omurtag acted of his own accord and against Michael's will, but this is almost universally rejected as a version started or at least encouraged by Michael, who did not wish to be seen encouraging "barbarians" to invade the empire. The Bulgarian army invaded Thrace, probably in November 822 (Bury believes that the Bulgarian attack occurred in spring 823), and advanced towards Constantinople. Thomas raised the siege, and marched to meet them with his army. The two armies met at the plain of Kedouktos near Heraclea (hence known as the Battle of Kedouktos in the Byzantine sources). The accounts of the subsequent battle differ: the later sources state that Thomas lost the battle, but the near-contemporary George the Monk states that Thomas "killed many Bulgarians". Given the lack of Bulgarian activity after the battle, most modern scholars (with the notable exception of Bury) believe that Thomas won the battle.

### Defeat and death of Thomas, end of the revolt

Thomas was unable to resume the siege: aside from the heavy casualties his army likely suffered, his fleet, which he had left behind in the Golden Horn, surrendered to Michael during his absence. Thomas set up camp at the plain of Diabasis some 40 kilometres (25 mi) west of Constantinople, spending winter and early spring there. While a few of his men deserted, the bulk remained loyal.

Finally, in late April or early May 823, Michael marched with his troops against Thomas, accompanied by the generals Olbianos and Katakylas with new troops from Asia Minor. Thomas marched to meet them and planned to use a stratagem to outwit his opponents: his men, ostensibly demoralized, would pretend to flee, and when the imperial army broke ranks to pursue them, they would turn back and attack. However, Thomas's troops were by now weary of the prolonged conflict, and their submission was unfeigned. Many surrendered to Michael, while others fled to nearby fortified cities. Thomas sought refuge in Arcadiopolis with a large group; his adopted son Anastasius went with some of Thomas's men to Bizye, and others fled to Panion and Heraclea. Michael blockaded Thomas's cities of refuge but organized no assaults, instead aiming to capture them peacefully by wearing out their defenders.
His strategy was motivated by the political and propaganda expedient of appearing merciful—"in order to spare Christian blood", as Michael himself put it in his letter to Louis the Pious—but also, according to the chroniclers, by fear of demonstrating to the Bulgarians that the Byzantine cities' fortifications could fall to attack. In Asia Minor, Thomas's partisans hoped to lure Michael away by allowing the Arabs free passage to raid the provinces of Opsikion and Optimaton, which were loyal to the emperor. Michael was unmoved and continued the blockade.

His troops barred access to Arcadiopolis with a ditch. To conserve supplies, the blockaded troops sent away women and children, followed by those too old, wounded, or otherwise incapable of bearing arms. After five months of blockade, Thomas's loyalists were eventually forced to eat starved horses and their hides. Some began deserting by lowering themselves with ropes over the city walls or jumping from them. Thomas sent messengers to Bizye, where the blockade was less close, to arrange a relief attempt by Anastasius. Before anything could be done, however, the exhausted troops at Arcadiopolis surrendered their leader in exchange for an imperial pardon.

Thomas was delivered to Michael seated on a donkey and bound in chains. He was prostrated before the emperor, who placed his foot on his defeated rival's neck and ordered his hands and feet cut off and his body impaled. Thomas pleaded for clemency with the words "Have mercy on me, oh True Emperor!" Michael only asked his captive to reveal whether any of his own senior officials had had dealings with Thomas. Before Thomas could respond, the Logothete of the Course, John Hexaboulios, advised against hearing whatever claims a defeated rebel might make. Michael agreed, and Thomas's sentence was carried out immediately. When the inhabitants of Bizye heard of Thomas's fate, they surrendered Anastasius, who suffered the same fate as Thomas.

In Panion and Heraclea, Thomas's men held out until an earthquake struck in February 824. The tremor severely damaged the wall of Panion, and the city surrendered. The damage at Heraclea was less severe, but after Michael landed troops at its seaward side, it too was forced to surrender. In Asia Minor, Thomas's loyalists mostly submitted peacefully, but in the Cibyrrhaeot Theme, resistance lingered until suppressed by strategos John Echimos. In the Thracesian theme, Thomas's soldiers turned to brigandage. The most serious opposition was offered in central Asia Minor by two officers, who had possibly served Thomas as strategoi: Choireus, with his base at Kaballa northwest of Iconium, and Gazarenos Koloneiates, based at Saniana, southeast of Ancyra. From their strongholds, they spurned Michael's offer of a pardon and the high title of magistros and raided the provinces that had gone over to him. Soon, however, Michael's agents persuaded the inhabitants of the two forts to shut their gates against the officers. Choireus and Koloneiates then tried to seek refuge in Arab territory but were attacked en route by loyalist troops, captured, and crucified.

### Aftermath and effects

The end of Thomas the Slav's great rebellion was marked by Michael II's triumph, held in May 824 in Constantinople.
While he executed Thomas's volunteers from the Caliphate and perhaps also the Slavs, the sheer number of individuals involved, the necessity of appearing clement and sparing with Christian lives, and the need to restore internal tranquillity to his realm compelled Michael to treat Thomas's defeated partisans with leniency: most were released after being paraded in the Hippodrome during his celebration, and only the most dangerous were exiled to remote corners of the empire. In an effort to discredit his opponent, Michael authorized an "official" and heavily distorted version of Thomas's life and revolt. The document was written by the deacon Ignatios and published in 824 as Against Thomas. This report quickly became the commonly accepted version of events.

Thomas failed in spite of his qualities and the widespread support he had gained, which brought him control of most of the empire. Lemerle holds that several factors played a role in his defeat: the Asian themes he did not subdue supplied reinforcements to Michael; Thomas's fleet performed badly; and the Bulgarian offensive diverted him away from the capital and weakened his army. But the most decisive obstacles were the impregnable walls of Constantinople, which ensured that an emperor who controlled Constantinople could only be overthrown from within the city.

Thomas's rebellion was the "central domestic event" of Michael II's reign, but it was not very destructive in material terms: except for Thrace, which had suffered from the prolonged presence of the rival armies and the battles fought there, the larger part of the empire was spared the ravages of war. The Byzantine navy suffered great losses, with the thematic fleets in particular being devastated, while the land forces suffered comparatively few casualties. This is traditionally held to have resulted in a military weakness and internal disorder which was swiftly exploited by the Muslims: in the years after Thomas's rebellion, Andalusian exiles captured Crete and the Tunisian Aghlabids began their conquest of Sicily, while in the East, the Byzantines were forced to maintain a generally defensive stance towards the Caliphate. More recent scholarship has disputed the degree to which the civil war was responsible for Byzantine military failures during these years, citing other reasons to explain them: Warren Treadgold opines that the empire's military forces recovered fairly quickly, and that incompetent military leadership coupled with "the remoteness of Sicily, the absence of regular troops on Crete, the simultaneity of the attacks on both islands, and the government's long-standing lack of interest in sea-power" were far more responsible for the loss of the islands.

## See also

- List of Byzantine revolts and civil wars
- List of sieges of Constantinople
65,840,778
Nichols's Missouri Cavalry Regiment
1,025,930,613
Cavalry Regiment of the Confederate States Army
[ "Military units and formations disestablished in 1865", "Military units and formations established in 1864", "Units and formations of the Confederate States Army from Missouri" ]
Nichols's Missouri Cavalry Regiment served in the Confederate States Army during the late stages of the American Civil War. The cavalry regiment began recruiting in early 1864 under Colonel Sidney D. Jackman, who had previously raised a unit that later became the 16th Missouri Infantry Regiment. The regiment officially formed on June 22 and operated against the Memphis and Little Rock Railroad through August. After joining Major General Sterling Price's command, the unit participated in Price's Raid, an attempt to create a popular uprising against Union control of Missouri and draw Union troops away from more important theaters of the war.

During the raid, while under the command of Lieutenant Colonel Charles H. Nichols, the regiment was part of an unsuccessful pursuit of Union troops who were retreating after the Battle of Fort Davidson in late September. At the Battle of Little Blue River on October 21, Nichols's regiment attacked the Union flank, drawing artillery from the Union center to counter the regiment's attack. This allowed other Confederate units to successfully attack the now-weakened Union center. The next day, the regiment was part of a force that defeated the 2nd Kansas Militia Infantry Regiment during the Battle of Byram's Ford. On October 23, Nichols's regiment was engaged in the Confederate defeat at the Battle of Westport.

After the defeat at Westport, the Confederates began retreating through Kansas. After a disastrous defeat at the Battle of Mine Creek on October 25, Nichols's regiment was part of the Confederate rear guard. The unit supported an artillery battery during the Second Battle of Newtonia on October 28, but did not see close combat. The men of Nichols's regiment were furloughed on October 30, with orders to return to the army in December. Before the war ended in 1865, the unit disbanded, probably while stationed in Texas; some of the men reported to Shreveport, Louisiana, in June to receive their paroles. The regiment had a strength of about 300 men in August 1864, and the number of casualties it suffered over the course of its existence cannot be accurately determined.

## Background and organization

At the outset of the American Civil War in April 1861, Missouri was a slave state. Governor Claiborne Fox Jackson supported secession from the United States, and formed a secessionist militia unit known as the Missouri State Guard. In July, anti-secession state legislators voted to remain in the Union, while Jackson and the pro-secession legislators voted to secede in November. Jackson and his supporters formed the Confederate government of Missouri and joined the Confederate States of America, functioning as a government-in-exile. As a result, Missouri had two opposing governments. Militarily, the pro-secession forces won some early victories, but the Union gained control of Missouri after the Battle of Pea Ridge in March 1862.

Colonel Sidney D. Jackman had led a newly recruited unit of Missouri Confederates in 1862, but resigned his commission when the unit was classified as infantry, as he preferred to lead cavalry. Jackman's former unit eventually became the 16th Missouri Infantry Regiment. Under the authority of Major General Thomas C. Hindman, Jackman returned to Missouri to continue recruiting cavalry. He led his recruits into Arkansas in October 1863 to join Major General Sterling Price's army. In early 1864, Jackman traveled on his own to northeastern Arkansas to join the forces of Brigadier General Joseph O.
Shelby, who authorized Jackman to begin recruiting again. Jackman's orders were to rejoin Shelby on June 16 at Jacksonport, Arkansas; Jackman and his recruits did not join the Confederates until June 22. About two-thirds of the new unit lacked weapons. Jackman was the regiment's colonel, while Charles H. Nichols was lieutenant colonel and George W. Newton was major. Ten companies of the regiment are known to have existed, but the only confirmed designations are G and H companies.

## Service history

### Operations in Arkansas

The regiment spent July 1864 operating in the vicinity of the Memphis and Little Rock Railroad. That month, it took part in a fight in which the unit inflicted 33 Union casualties and damaged about 1 mile (1.6 km) of the railroad. In August, Jackman was elevated to brigade command and Nichols took over leadership of the regiment. A squad of the unit moved on August 23 to join an attack on a station of the Memphis & Little Rock held by Union troops, but the fight had ended with a Union surrender before Nichols's men arrived. Later that same day, the men of the regiment were part of a Confederate column that attacked Jones's Hay Station, whose Union defenders quickly surrendered. The capture of the station netted 400 prisoners, as well as supplies, weapons, and a battle flag. Later, the unit skirmished for an hour with a Union column that had left DeVall's Bluff; the action ended when the Confederates disengaged.

The regiment spent September 26 detached from the rest of Jackman's brigade as a rear guard unit. Nichols's regiment saw little further action until Price's Raid began in October. The regiment consisted of around 300 men during the month of August.

### Price's Raid

#### Towards St. Louis

In the 1864 United States presidential election, incumbent president Abraham Lincoln supported continuing the war, while former Union general George B. McClellan promoted ending it. By the beginning of September 1864, military events in the eastern United States, especially the Confederate defeat in the Atlanta campaign, gave Lincoln an advantage in the election over McClellan. At this point, the Confederacy had very little chance of victory. As events east of the Mississippi River turned against the Confederacy, General Edmund Kirby Smith, commander of the Confederate Trans-Mississippi Department, was ordered to transfer the infantry under his command to the fighting in the Eastern and Western Theaters. This proved to be impossible, as the Union Navy controlled the Mississippi River, preventing a large-scale crossing. Despite having limited resources for an offensive, Smith decided that an attack designed to divert Union troops from the principal theaters of combat would have the same effect as the proposed transfer of troops.

Price and the new Confederate Governor of Missouri, Thomas Caute Reynolds, suggested that an invasion of Missouri would be an effective operation; Smith approved the plan and appointed Price to command it. Price expected that the offensive would create a popular uprising against Union control of Missouri, divert Union troops away from the principal theaters of combat (many of the Union troops defending Missouri had been transferred out of the state, leaving the Missouri State Militia as the state's primary defensive force), and aid McClellan's chance of defeating Lincoln. On September 19, Price's column entered the state. Nichols's regiment, as part of Jackman's brigade, traveled to Potosi.
On September 24, Price learned that a Union force held the town of Pilot Knob. On September 26, Price moved to counter this force by sending Shelby's men to operate north of Pilot Knob, while moving the divisions of Brigadier General John S. Marmaduke and Major General James F. Fagan against the town. On September 27, Marmaduke's and Fagan's men attacked the Union soldiers, bringing on the Battle of Fort Davidson. The Confederate attackers suffered significant losses and were repulsed, although the Union troops abandoned the fort overnight. Price ordered Shelby's division, including Nichols's regiment, to pursue the Union soldiers, who managed to escape.

On September 30 and October 1, the regiment operated against the Pacific Railroad, destroying parts of it. Jackman's brigade then headed to Jefferson City, and Nichols's regiment fought in several minor skirmishes on the way. On October 10, the unit arrived at Boonville, where it deployed south of the town to guard a road. Two days later, Union troops attacked the regiment's position. In this action, Nichols's unit, reportedly about 300 men strong, was initially driven back by the 5th Missouri State Militia Cavalry Regiment, but the Unionists retreated after engaging Hunter's Missouri Cavalry Regiment, Schnable's Missouri Cavalry Battalion, and Collins's Missouri Battery. The skirmish lasted about an hour.

#### To Kansas City

As the Confederate army passed through a pro-Confederate region around Boonville known as Little Dixie, many new recruits joined Price's force. Many of these men were unarmed, and Price needed weapons to issue to them. Price authorized a raid against Glasgow to capture supplies. This raiding force was under the command of Brigadier General John B. Clark Jr. Jackman selected elements of his brigade to serve with Clark on the left of the Confederate line. The attack against Glasgow was successful, with weapons, supplies, and prisoners being captured. The Confederate victors at Glasgow then rejoined Price's main army, which was moving towards Kansas City.

The Confederate army encountered a Union force holding the town of Lexington on October 19, starting the Second Battle of Lexington. Jackman's brigade was sent around the Confederate left flank to cut off the Union path of retreat, but the brigade failed to get into an appropriate position to block the Union retreat, allowing the town's defenders to escape. The Union soldiers engaged at Lexington fell back to Independence, leaving a small force to hold the crossing of the Little Blue River. Elements of Marmaduke's division attacked this holding force on October 21, bringing on the Battle of Little Blue River. Marmaduke's men drove the Union defenders back across the creek, but reinforcements arrived for both sides: those for the Union under Major General James G. Blunt, and those for the Confederates under Shelby's command.

Nichols's regiment was deployed on the extreme Confederate right, from which it applied pressure on the Union flank. The regiment was the only one of Shelby's units to remain mounted. Union artillery was moved from other parts of the line to counter Nichols's attack, which in turn weakened the Union center, allowing Brigadier General M. Jeff Thompson's brigade to successfully attack it. Union troops counterattacked to rescue the threatened artillery and then fell back to Independence. The next day, some of Shelby's men broke through a Union line defending the Big Blue River in the opening stages of the Battle of Byram's Ford.
Jackman's brigade and the 5th Missouri Cavalry Regiment then encountered a Union unit, the 2nd Kansas Militia Infantry Regiment, near the Mockbee Farm. Initially, the Kansans held their ground, fighting off two attacks, but a third attack shattered the Union line. Initially used to guard the Confederate flank, Nichols's regiment was involved in this affair, which resulted in the capture of a 24-pounder howitzer. While Jackman reported his losses as slight, Nichols's horse was killed during the fighting.

That evening, Union cavalry commanded by Major General Alfred Pleasonton, who had been following Price from the east, attacked and defeated his rear guard in the Second Battle of Independence. By the morning of October 23, Price's army was caught between Pleasonton's troopers, who had advanced to between Independence and the Big Blue River, and Blunt's men. Major General Samuel R. Curtis's Union Army of the Border occupied Kansas City, adding to the encirclement. That day, Pleasonton's men continued the Battle of Byram's Ford, driving Marmaduke's division back from the Big Blue River. Meanwhile, Shelby's and Fagan's divisions fought against Blunt's men and elements of the Kansas State Militia in the Battle of Westport, the end result being a Confederate defeat. Nichols's regiment took part in the Westport fighting in the vicinity of Brush Creek. Later in the fighting, Union troops coming from the east put pressure on Fagan's line, and Nichols's regiment was part of a force sent to Fagan's aid. The regiment, as well as the rest of Jackman's brigade, conducted a rear-guard action while dismounted before retreating.

#### Retreat and war's end

The Confederates retreated south into Kansas. On October 25, Union troops caught up with Price's column and soundly defeated it at the Battle of Mine Creek. During the battle, hundreds of Confederate soldiers, including Marmaduke, were captured, along with cannons and supplies. Shelby led a rear-guard action, which included Nichols's regiment. The Confederate troops conducted a drawn-out running fight until the Union pursuers broke contact later that day. After Mine Creek, the Confederates re-entered Missouri, where they stopped near the town of Newtonia on October 28, only for Blunt's troops to reestablish contact. During the Second Battle of Newtonia, Nichols's regiment was held to the rear of the right side of the Confederate line, supporting Collins's battery, and did not see close combat.

Price's army continued its retreat into Arkansas, where Nichols's regiment was furloughed on October 30, along with much of the rest of Jackman's brigade. The furloughs were ostensibly for the men to perform recruiting activities and catch deserters, but were mostly due to a lack of food and the continuing disintegration of the structure and morale of Price's army. The furlough terms set a date of mid-December for the men to return to the army. While direct evidence for the men's return from furlough is lacking, historian James McGhee believes that they did eventually return to Price's army. A Union cavalry officer reported clashing with Nichols's regiment near Crooked Creek in northern Arkansas on November 15, and stated that there were about 600 men with the unit. The unit disbanded in 1865 before the war ended, probably while stationed in Texas, and some of the men from Nichols's regiment reported to Shreveport, Louisiana, in June to receive their paroles.
No complete muster records for Nichols's regiment exist, and casualty figures for the unit cannot be accurately discerned.
37,730,837
On the Job
1,152,464,708
2013 Filipino neo-noir crime thriller film
[ "2010s Tagalog-language films", "2010s police procedural films", "2013 films", "Films about contract killing", "Films about corruption", "Films about families", "Films directed by Erik Matti", "Films set in Manila", "Neo-noir", "Philippine New Wave", "Philippine action thriller films", "Philippine crime thriller films", "Reality Entertainment films", "Star Cinema films" ]
On the Job (abbreviated as OTJ) is a 2013 Filipino neo-noir crime thriller film written and directed by Erik Matti, who co-wrote the screenplay with Michiko Yamamoto. Starring Gerald Anderson, Joel Torre, Joey Marquez and Piolo Pascual, it tells the story of two hit-man prisoners (Anderson and Torre) who are temporarily freed to carry out political executions, and two law enforcers (Marquez and Pascual) tasked with investigating the drug-related murder case connected to the prison gun-for-hire business. The film co-stars Angel Aquino, Shaina Magdayao, Empress Schuck, Leo Martinez, Michael de Mesa, Vivian Velez, and Rayver Cruz.

The inspiration for On the Job came from a Viva Films crew member who said he had been temporarily released from prison to commit contract killings before he was reincarcerated. Star Cinema initially refused to produce the film in 2010, deeming it excessively violent compared with their usual rom-com projects; by 2012, however, they agreed to co-produce it with Matti's own film production company, Reality Entertainment. Filming took place in Manila and lasted 33 days, on a production budget of ₱47 million (about US\$1.1 million).

On the Job was shown as part of the Directors' Fortnight at the 2013 Cannes Film Festival, where it was praised and received a standing ovation on May 24. The film was released in the Philippines on August 28, 2013, and in the United States and Canada on September 27 of that year. It received positive reviews from foreign and domestic critics. In 2021 the film and its sequel On the Job: The Missing 8 were re-edited as a six-part HBO Asia miniseries titled On the Job.

## Plot

In the Philippines, Mario and Daniel are prisoners who are frequently released and paid to commit contract killings. Mario spends his earnings on Tina, his daughter, and Lulette, his estranged wife, while Daniel sends remittances to his family and spends the rest on goods and privileges in prison; he has come to see Mario as a mentor and father figure. After they murder drug lord Tiu and return to prison, Tiu's murder case is assigned to NBI Agent Francis Coronel through Congressman Manrique. When Coronel and his partner, Bernabe, arrive at the local precinct, they clash with PNP Sergeant Joaquin Acosta, who believes that the case was taken from him for political reasons.

Mario and Daniel carry out a hit on a woman named Linda, whose husband Pol seeks help from Acosta, his former colleague. Pol reveals that Tiu's murder is one of several assassinations ordered by Manrique's close friend Pacheco, a military officer campaigning for the Philippine Senate. Acosta agrees to protect Pol and heads to the station; along the way, he encounters Coronel and Bernabe. Meanwhile, Daniel shoots Pol, but his pistol jams before he can fire a fatal shot. The three officers converge on them, forcing Daniel and Mario to flee, although they manage to locate and kill Pol at a hospital. As they split up to escape, Bernabe is shot and Acosta catches a glimpse of Mario's face.

Coronel confronts Manrique and tells him that he intends to arrest Pacheco. Manrique warns Coronel that Pacheco's indictment will cause their downfall, as he considers Pacheco his last resort after having exhausted all his other options to remain affluent. Meanwhile, Acosta has Mario's composite sketch broadcast on television before deciding to work with Coronel. When Coronel discovers Mario's identity, he visits Lulette and sees her flirting with Mario's friend, Boy.
Coronel tells Acosta about his discovery, and Acosta uses it as leverage when he interrogates Mario. Although Acosta is unsuccessful, Mario later expresses to Daniel his feelings of betrayal. Mario then phones Tina, who tells her father that she saw his police sketch and that she intends to live independently.

Tiu's father tells Acosta and Coronel that he has evidence that they can use to arrest Pacheco. Coronel goes to visit Pacheco, who admits to killing Coronel's father to "save the country". Coronel then uses his cellphone to secretly record a conversation between Pacheco and his men about the murder of Tiu's father. Soon after, Daniel kills Coronel in front of police headquarters, causing an enraged Acosta to attack Pacheco and Manrique's security detail. Meanwhile, a disheartened Mario realizes that he has no reason to leave prison since his family no longer wants anything to do with him, and stabs Daniel to death to remain incarcerated.

Coronel's death prompts an investigation of Acosta, who is relieved of duty, and Pacheco, who tells journalists he is ready to be investigated. After attending Daniel's wake from afar, Mario goes home, kills Boy in front of his family, and returns to prison. Sometime later, a recovered Bernabe looks through Coronel's possessions and requisitions the phone Coronel used against Pacheco as evidence.

## Cast

## Production

### Development

Director Erik Matti was inspired by a Viva Films service driver, an ex-convict who said that he used to be temporarily freed from prison to commit contract killings and then reincarcerated. Matti shelved his idea until he ended his hiatus from directing with The Arrival, a short film he released to positive response at several film festivals in 2009. He opened the screenings with an eight-minute trailer for On the Job, with Joel Torre, attempting to pitch the film. The trailer also had a favorable response, particularly from Twitch Film editor Todd Brown, who asked if the project had entered production. When Matti told him that the film did not yet have a screenplay, Brown encouraged him to write it while he looked for financing. During the script's ninth revision, screenwriter Michiko Yamamoto helped finalize the remainder of the draft. Four uncredited consultants were also hired to develop details of the story.

During the writing process and after the final draft was complete, Brown was unable to attract investors; some felt that the story was too nontraditional for Philippine cinema or too large a risk for the overseas market. Star Cinema (the Philippines' largest production company) refused to make the film in 2010, deeming it too violent compared with their usual romantic-comedy projects. Matti offered the project to two of Star Cinema's talents, who also declined due to the film's violence. The project was again put on hold as Matti entered the post-production stage of his horror fantasy film, Tiktik: The Aswang Chronicles (2012). He was then contacted by a Star Cinema agent, who requested the revised script; three days later, the studio agreed to fund the film.

Reality Entertainment, an independent film company co-founded by Matti, co-produced On the Job with Star Cinema. Reality Entertainment co-founder Dondon Monteverde, Lily Monteverde's son, said that many studios were impressed by the script but were reluctant to finance a big-budget action film. Although the production team considered cutting the film's budget, they decided against it in the hope of shifting away from low-budget films.
Monteverde remembered arguing that it was "really time to do something big-budget and showcase it, rather than making something small and claiming budgetary restrictions. This time we didn't give ourselves any excuses. We went all the way". The film's production cost ₱47 million (about US\$1.1 million in 2013).

### Pre-production and filming

Joel Torre, who plays Mario "Tatang" Maghari in the film, had already been cast before Matti's script revision. Torre said about the role, "[Mario] stuck with me, fought for me. And that gave me a lot of confidence, a Bushido Blade samurai." Matti asked Piolo Pascual to play attorney Francis Coronel Jr. The role of Daniel was originally written for John Lloyd Cruz, who was interested but had to decline due to scheduling conflicts; it went instead to Gerald Anderson. After a discussion between Pascual and Anderson about the film, Anderson signed for the role. He had only two weeks to film his scenes, since he was involved with a soap opera shoot at the time. The role of Sergeant Joaquin Acosta was to be played by Richard Gomez, but he decided to pursue a political career in Ormoc. Matti later cast Joey Marquez; although Marquez was seen primarily as a comedian, the director believed that he could play a charming, obnoxious character.

Richard V. Somes was the film's production designer and action choreographer. To prepare for the prison scenes, the production crew built a set in an abandoned building in Marikina and hired 200 extras to play convicts. Principal photography took 33 days on location in a number of Manila areas, including City Hall, a Light Rail train station, and Caloocan. The opening scenes were shot during the annual Basaan Festival in San Juan. Filming was done in over 70 locations, and the crew sometimes shot in several areas on a given day. On choosing Manila as the film's key location, Matti said:

> This is a Manila movie. We wanted to show as much of the cross section of Manila as we could. This is, I think, the most ambitious attempt at putting together as much variety [in a local film] in terms of look and feel.

Francis Ricardo Buhay III, who had also worked on Matti's Tiktik and Rigodon (2013), was the film's cinematographer. Rather than setting up and changing lights for certain shots, Buhay filmed with the Red EPIC camera; with the camera's capacity for shooting in available light (including the ability to light an entire set), the film had a modern noir style without appearing low-budget.

### Music

Erwin Romulo, editor-in-chief of the Philippine edition of Esquire until 2013, was On the Job's musical director. At their first meeting, Matti hired Romulo as the music supervisor; Romulo's role changed, however, since he wanted to produce most of the tracks he had planned for the film. Romulo used lesser-known original Pilipino music tracks from otherwise-prominent Filipino musicians, such as "Maskara" and "Pinoy Blues" by the Juan de la Cruz Band. He approached Dong Abay and Radioactive Sago Project bassist Francis de Veyra to perform the songs, which were arranged by Armi Millare. Additional tracks were performed by Ely Buendia, the late FrancisM, and local band Bent Lynchpin. Bent Lynchpin member Fred Sandoval was also the film's music editor. Romulo cited works by Lalo Schifrin and director Ishmael Bernal's longtime composer, Vanishing Tribe, as influences on the soundtrack. He also considered DJ Shadow's album Endtroducing..... a significant influence, "albeit unconsciously".
## Release ### Theatrical run and distribution On the Job had its world premiere in the Directors' Fortnight section of the 2013 Cannes Film Festival on May 24. Although it did not win the Caméra d'Or prize, it received a two-minute standing ovation from the audience. The film had its Philippine release on August 28, 2013, and remained at the box office for three weeks. It was released in North America by Well Go USA Entertainment on September 27, 2013. Well Go USA had bought the North American rights for the film before its premiere at Cannes, and also acquired its DVD, Blu-ray, and video on demand distribution rights. The agreement was made by Well Go USA president Doris Pfardrescher and XYZ Films founders Nate Bolotin and Aram Tertzakian. On the Job played at 29 North American theaters in three weeks, grossing \$164,620. It was released in France by Wild Side Films, and in Australia by Madman Entertainment. The deals with the French and North American distribution companies secured \$350,000. The film was also made available on the North American market through Netflix by Well Go USA. ### Home media It was released on DVD and Blu-ray by Well Go USA on February 18, 2014. Special features include making-of footage and deleted scenes. Justin Remer of DVD Talk praised the Blu-ray's video and audio transfers, while generally criticizing its special features. Kevin Yeoman of High-Def Digest and Jeffrey Kauffman of Blu-ray.com rated the release 3.5 out of 5, and agreed about the transfers and special features. On the Job amassed \$167,128 in North American video sales. ## Reception ### Critical response On the Job received generally positive reviews from critics. Several critics praised the cast, with their performances called "well-acted" and "top-notch". Mikhail Lecaros praised the lead actors' "parallel depiction of the relationship between fathers and sons" in GMA News; according to Philippine Entertainment Portal's Mari-An Santos, they "provide the heart of the story". In Variety, Justin Chang praised Torre's departure from his usual "good-guy persona with a superbly menacing but very human performance". Santos said that Pascual "holds his own, but with a consistently excellent ensemble, his acting pales in comparison"; according to Lecaros, he "acquits himself well as a law enforcer whose crises of faith would be right at home in a Johnnie To (Election, Breaking News) or Michael Mann (Collateral, Miami Vice) film." Although Santos praised the female roles, Film Business Asia's Derek Elley called them "minimal and not especially memorable"; a Complex magazine reviewer called them "essentially talking props, which lessens the impact of developments". Critics praised the film as engaging and well-made, but considered its plot convoluted. Neil Young, in The Hollywood Reporter, found Matti and Yamamoto's script conventional and "many of the dialogue scenes operat[ing] on a functionally prosaic level". Lecaros and Santos praised the script, which Lecaros said "puts traditional notions of right, wrong, family, and loyalty through the wringer—and then some!" Although the police procedural subplot was described as "particularly colorful" by IndieWire, The A.V. Club's Ignatiy Vishnevetsky called it dull and "occasionally heavy-handed". Young said that Matti's "muscular handling of fast-paced action sequences consistently impresses", and The New York Times' Jeannette Catsoulis wrote that his "pitiless view of Filipino society may be deadening, but his filming is wondrously alive". 
Catsoulis made On the Job her "critic's pick", and Rappler's Carljoe Javier said it "serves as a shot of adrenaline, not only to the hearts of [Filipino] viewers, but hopefully also to mainstream [Philippine] cinema". Rotten Tomatoes gives the film an approval rating of 94%, with an average rating of 6.63 out of 10, based on 16 reviews from critics. On Metacritic, the film has a weighted average score of 70 out of 100, based on 11 critics, indicating "generally favorable reviews". In the Philippines, members of the Cinema Evaluation Board gave the film an "A" grade. ### Accolades In addition to being featured at the Cannes Film Festival, On the Job was screened at the 17th Puchon International Fantastic Film Festival in Bucheon, South Korea. Joel Torre won the Best Actor award, and the film received the Jury Prize. At the 62nd FAMAS Awards, the film won six of its twelve nominations: Best Picture, Best Director (Matti), Best Screenplay (Matti and Michiko Yamamoto), Best Editing (Jay Halili), Best Story (Matti), and Best Sound (Corinne de San Jose). Piolo Pascual also received the Fernando Poe Jr. Memorial Award for Excellence for his performance. The film received eight nominations at the 37th Gawad Urian Awards, winning two: Best Actor (Torre) and Best Sound (de San Jose). ## Sequel, miniseries and planned remake ### On the Job: The Missing 8 On the Job's sequel, On the Job: The Missing 8, was screened in competition at the 78th Venice International Film Festival on September 10, 2021. ### Miniseries A six-part Philippine miniseries, also titled On the Job, was created and developed by Erik Matti for HBO Asia Originals. It was adapted from the two On the Job films: the 2013 film and its sequel The Missing 8, both of which were directed by Matti. The first two episodes of the miniseries are a re-edited and remastered version of the first film. ### Planned remake An American version of the film was confirmed in June 2013, to be directed by Icelandic filmmaker Baltasar Kormákur and produced by Kormákur's Blueeyes Productions. XYZ Films, the production and sales company holding the international rights, was to co-produce the film and release it worldwide.
Frederick Scherger
Royal Australian Air Force chief
[ "1904 births", "1984 deaths", "Australian Companions of the Distinguished Service Order", "Australian Companions of the Order of the Bath", "Australian Knights Commander of the Order of the British Empire", "Australian aviators", "Australian military personnel of the Indonesia–Malaysia confrontation", "Australian military personnel of the Malayan Emergency", "Australian people of German descent", "Australian recipients of the Air Force Cross (United Kingdom)", "Chairmen, Chiefs of Staff Committee (Australia)", "Graduates of the Royal College of Defence Studies", "Military personnel from Victoria (state)", "People from Ararat, Victoria", "Royal Australian Air Force air marshals", "Royal Australian Air Force personnel of World War II", "Royal Military College, Duntroon graduates" ]
Air Chief Marshal Sir Frederick Rudolph William Scherger, KBE, CB, DSO, AFC (18 May 1904 – 16 January 1984) was a senior commander in the Royal Australian Air Force (RAAF). He served as Chief of the Air Staff, the RAAF's highest-ranking position, from 1957 until 1961, and as Chairman of the Chiefs of Staff Committee, forerunner of the role of Australia's Chief of the Defence Force, from 1961 until 1966. He was the first RAAF officer to hold the rank of air chief marshal. Born in Victoria of German origins, Scherger graduated from the Royal Military College, Duntroon, before transferring to the Air Force in 1925. He was considered one of the top aviators between the wars, serving as a fighter pilot, test pilot, and flying instructor. He held senior training posts in the late 1930s and the early years of World War II, earning the Air Force Cross in June 1940. Promoted to group captain, Scherger was acting commander of North-Western Area when Darwin suffered its first air raid in February 1942. Praised for his actions in the aftermath of the attack, he went on to lead the RAAF's major mobile strike force in the South West Pacific, No. 10 Operational Group (later the Australian First Tactical Air Force), and was awarded the Distinguished Service Order in September 1944 for his actions during the assaults on Aitape and Noemfoor in New Guinea. After the war, Scherger served in senior posts, including Deputy Chief of the Air Staff, Head of the Australian Joint Services Staff in Washington, D.C., and commander of Commonwealth air forces during the Malayan Emergency. In 1957, he was promoted to air marshal and became Chief of the Air Staff (CAS), presiding over a significant modernisation of RAAF equipment. Completing his term as CAS in 1961, he was the Air Force's first appointee to the position of Chairman of the Chiefs of Staff Committee (COSC). As Chairman of COSC, Scherger became Australia's first air chief marshal in 1965, and played a leading role in the commitment of troops to the Vietnam War. Leaving the military the following year, he was appointed chairman of the Australian National Airlines Commission and, from 1968, of the Commonwealth Aircraft Corporation. Popularly known as "Scherg", he retired in 1975 and lived in Melbourne until his death in 1984 at the age of seventy-nine. ## Early life and career Frederick Rudolph William Scherger was the third child of farmer Frederick Scherger and his wife Sarah Jane, née Chamberlain, both native Victorians. Born on 18 May 1904 in Ararat, young Fred was educated to junior certificate level at his local high school. His paternal grandparents were immigrants from Germany, and his family was the object of xenophobia in his childhood during World War I. This carried on into the early part of his military career and beyond; as late as 1941, the author of an anonymous letter from RAAF Station Wagga to Prime Minister Robert Menzies stated that his "blood ran cold" at the notion of someone called "Scherger" commanding trainee Australian pilots. ### 1920s: Duntroon to Point Cook Scherger entered the Royal Military College, Duntroon, in 1921 and graduated as a lieutenant in 1924, winning the King's Medal. Two days before graduation, he volunteered for an Air Force secondment, which was later made permanent. On 21 January 1925, he received a permanent commission in the RAAF as a pilot officer (temporary flying officer), and commenced his flight training at RAAF Point Cook, Victoria. He was promoted to flying officer with seniority from 21 January 1926. 
Scherger quickly took to the art of flying open-cockpit biplanes and gained a reputation as a skilful if occasionally reckless pilot, being berated early in his career by his flight commander for "inverted and very low flying". He was one of the Air Force's first volunteers for parachute instruction, under the tutelage of Flying Officer Ellis Wackett at RAAF Station Richmond, New South Wales, and made the first public freefall descent in Australia, at Essendon, Victoria on 21 August 1926. In February 1927, he was asked by the commanding officer of No. 1 Flying Training School (No. 1 FTS), Wing Commander Adrian "King" Cole, to drop a message to a woman at Port Melbourne before she departed on a steamer. After doing so, Scherger illegally flew his S.E.5 fighter between ship and wharf before heading back to Point Cook, only to be hauled into Cole's office the next morning to find the CO brandishing a photograph taken by a member of the public, catching the young pilot in the act. Sent for a dressing down to the Air Member for Personnel, Group Captain Jimmy Goble, Scherger was forced to admit it was not the first time he had engaged in such stunts. Goble responded, "Good, I'm glad to see we've still got a few in the Air Force with spirit." ### 1930s: Flying instructor to Director of Training By the 1930s, as a flight instructor and test pilot, Scherger was, according to historian Alan Stephens, "perhaps the RAAF's outstanding aviator". He married Thelma Harrick on 1 June 1929; they had a daughter. Promoted to flight lieutenant on 1 June 1929, Scherger became chief flying instructor (CFI) at Point Cook that August. He also flew with Fighter Squadron, a unit of No. 1 FTS operating Bristol Bulldogs. As one of the leading pilots of the Bulldog, then regarded as the peak of military technology, and in what was generally thought of as the RAAF's elite formation, he gained popular exposure that may have helped his later rise to senior leadership. In October 1931, he won an Aero Club derby at Adelaide in a Bulldog, clocking a top speed of 160.98 mph (259.07 km/h). In August 1934, Scherger was posted to England to study at RAF Staff College, Andover. Just prior to departing, he was involved in a notorious incident at RAAF Station Laverton. A squadron leader arrived home early from a mess function to find his wife sleeping with another officer, who escaped by crashing through the bedroom window. The squadron leader then pursued his wife with a loaded revolver, the pair eventually arriving at Scherger's quarters. Faced with the frightened woman and the enraged husband crying that he would "shoot the bitch", Scherger knocked the man down with a poker. The unconscious husband was placed in the guardhouse, and the woman given shelter off the base; the officer she had slept with promptly resigned his commission. Scherger graduated from Andover in December 1935 and subsequently completed courses at the RAF's School of Air Navigation and Central Flying School. He was promoted to squadron leader on 1 July 1936. Returning to Australia, he resumed his position as CFI at Point Cook in May 1937. As directed by the Federal government, he was responsible for training the Treasurer, Richard Casey, to fly; the use of Air Force facilities for his own benefit by an elected official led to adverse publicity when it was revealed by the media. In September, Scherger test flew the North American NA-16 at Laverton; the evaluation program led to the design being adapted as the CAC Wirraway the following year. 
He was appointed Director of Training at RAAF Headquarters, Melbourne, in January 1938, and promoted to wing commander on 1 March 1939. ## World War II ### 1939–1942: Outbreak of war to raid on Darwin As Director of Training at the outbreak of World War II, Scherger's main challenge was to expand the RAAF's pool of flying instructors. Central Flying School, Australia's first military aviation unit, was re-formed for this purpose in April 1940. Awarded the Air Force Cross in June 1940 for his "outstanding ability" as a pilot and instructor, he took charge of No. 2 Service Flying Training School near Wagga the following month, and was promoted to temporary group captain on 1 September. In October 1941, he was made commanding officer of RAAF Station Darwin, Northern Territory. Described by Major General Lewis H. Brereton, commander of the US Far East Air Force, as "energetic, efficient and very impatient", Scherger started improving the operational readiness of the base and its surrounds without waiting for specific orders from RAAF Headquarters. The following January, he was appointed senior air staff officer to Air Commodore Douglas Wilson, Air Officer Commanding (AOC) of North-Western Area Command (NWA), which administered RAAF Station Darwin and other airfields in the Northern Territory and north-west Western Australia. In Wilson's absence at ABDA Command Headquarters in Java, Scherger was acting AOC NWA on 19 February 1942 when Darwin suffered its first aerial attacks by the Japanese. Driving into town to meet Air Marshal Richard Williams, who was in transit on his way to England, Scherger first became aware of the assault after he heard anti-aircraft fire and counted twenty-seven enemy aircraft in the distance. He arrived at the civil airfield to witness a Curtiss P-40 crash land on the runway, before his car was strafed by fighters. In a lull after the initial attack that day, he made contact with Williams before the two men were forced to take shelter in a makeshift trench that was straddled by falling bombs as a second raid got under way. Afterwards, Scherger began to restore order and launched a Hudson light bomber on a reconnaissance mission, though there was no further contact with Japanese forces. As well as the loss of civil and military infrastructure, twenty-three aircraft and ten ships, and the death of some 250 people, 278 RAAF personnel had deserted Darwin in an exodus that became known as the "Adelaide River Stakes". "There was", in Scherger's words, "an awful panic and a lot of men simply went bush". Praised for his "great courage and energy", he was one of the few senior Air Force officers in the region to emerge from Commissioner Charles Lowe's inquiry into the debacle with his long-term career prospects undamaged. In the immediate aftermath, though, his outspoken criticism of the RAAF's state of preparedness alienated members of the Air Board, the service's controlling body that consisted of its most senior officers and which was chaired by the Chief of the Air Staff (CAS). He was relieved of his position at NWA by the CAS, Air Chief Marshal Sir Charles Burnett, and shunted through a series of postings for the remainder of the year, including commanding officer at RAAF Station Richmond, supernumerary at RAAF Headquarters, Director of Defence at Allied Air Forces Headquarters, South West Pacific Area, and Director of Training at RAAF Headquarters. 
Seeking restitution, he boldly went over the heads of the Air Board and successfully appealed to the Minister for Air, Arthur Drakeford, supported by Commissioner Lowe. ### 1943–1945: No. 10 Operational Group and First Tactical Air Force Scherger served as Officer Commanding No. 2 Training Group at RAAF Station Wagga from July 1943 until he was appointed AOC of the newly formed No. 10 Operational Group (No. 10 OG) in November. At its formation, No. 10 OG, the Air Force's main mobile strike force, consisted of No. 77 Wing, operating A-31 Vengeance dive bombers, and No. 78 Wing, operating P-40 Kittyhawk fighters, as well as several ancillary units. Promoted to acting air commodore on 25 January 1944, Scherger established his headquarters at Nadzab, Papua New Guinea, in support of the US Fifth Air Force. Though able to launch No. 78 Wing's first mission that same month, he had to deal with several organisational problems to bring all his squadrons to combat readiness, including a lack of training in tropical conditions, and shortcomings in aircraft maintenance and staff rotation that resulted in the RAAF's operational rate of effort being inferior to that of similar USAAF formations. These issues were overcome later in the year and No. 10 OG units began exceeding the rate of effort of their American counterparts. By March 1944, No. 77 Wing's Vengeances had been withdrawn from operations due to their inferiority to newer equipment. Three squadrons from No. 9 Operational Group—one each flying Bostons, Beaufighters, and Beauforts—were assigned to the Wing as replacements, but No. 10 OG itself was moved from Nadzab to Cape Gloucester to permit USAAF units with longer-ranged aircraft to occupy vital airfields on the Allied front line. The group's disappointment with its withdrawal from Nadzab was tempered by news that it was to take part in the forthcoming attack on Aitape, New Guinea, codenamed Operation Reckless. Scherger was appointed air commander for the assault, leading US and Australian units. No. 78 Wing's Kittyhawks shadowed the main task force while heavier aircraft from NWA conducted bombing and mining sorties to indirectly support the operation. The landings on 22 April 1944 met little opposition, credited in part to the Allied bombardment in the days leading up to them. With elements of No. 10 OG going ashore on the first day, Aitape airfield was repaired and No. 78 Wing was operating from it within three days. In June, Scherger was named commander of Australian and US air forces for the attack on Noemfoor Island. Over the course of the battle that commenced on 2 July, he controlled Nos. 71, 77, 78 and 81 Wings RAAF, as well as the USAAF's 58th and 348th Fighter Groups and 307th and 417th Bombardment Groups. Scherger was promoted to temporary air commodore on 1 August, and was awarded the Distinguished Service Order for his actions at Aitape and Noemfoor, the citation noting that he "operated his air forces with great skill and success" and praising the way he placed himself "in the forefront of the landing of the ground troops", where "his personal courage and leadership proved an inspiration to all personnel". A jeep accident in August left Scherger with a fractured pelvis, necessitating his evacuation to Australia for rehabilitation. In his absence, Air Commodore Harry Cobby took command of No. 10 OG; two months later the formation was redesignated the Australian First Tactical Air Force (No. 1 TAF). 
Still recuperating, Scherger acted in the role of Air Member for Personnel at RAAF Headquarters, Melbourne, from January to May 1945. On 10 May, he was posted back to the Pacific to resume control of No. 1 TAF following Cobby's dismissal in the wake of the "Morotai Mutiny". He returned as Operation Oboe One, the Battle of Tarakan, was under way; No. 1 TAF's airfield construction teams had been tasked with opening the runway on Tarakan Island within a week of Allied landings but extensive pre-invasion damage and adverse environmental conditions delayed this until the end of June. He then led No. 1 TAF in Operation Oboe Six, the invasion of Labuan, going ashore on the afternoon of the landings on 10 June to establish his command post. By July, when the final Allied offensive of the Borneo Campaign took place as Operation Oboe Two in Balikpapan, No. 1 TAF had reached a strength of some 25,000 personnel; by the end of hostilities on 14 August this figure had been reduced with the transfer of units to the recently formed No. 11 Group. ## Post-war career ### 1946–1957: Rise to Chief of the Air Staff In October 1945, Scherger led a survey team to Japan to review airfields and other facilities being considered for the British Commonwealth Occupation Force, determining that substantial work was needed to bring them up to the required capacity. The following year, he attended the Imperial Defence College, London. He was promoted to substantive group captain on 1 January 1947, and was appointed Deputy Chief of the Air Staff (DCAS) on 1 July. Scherger was raised to substantive air commodore on 23 September 1948, and promoted to temporary air vice marshal on 1 May 1950. He was appointed a Commander of the Order of the British Empire (CBE) in the King's Birthday Honours the same year. As DCAS, Scherger reported to Air Marshal George Jones, whose ten-year term as CAS would be the longest of any incumbent in the position. The pair enjoyed a cordial working relationship, and Jones earmarked the younger officer as a leader of the future. Scherger could not persuade his conservative chief to revamp the Air Force from its wartime area command structure into a more modern service organised along functional lines; this radical change would await Jones' successor, Air Marshal Sir Donald Hardman. After completing his tour as DCAS in July 1951, Scherger was posted to Washington, D.C., to head up the Australian Joint Services Staff. He was promoted to substantive air vice marshal on 1 July 1952. On 1 January 1953 he succeeded Air Vice Marshal George Mills as AOC of RAF Air Headquarters Malaya. In this role, Scherger commanded all Commonwealth air forces in the region and was responsible for operations against communist guerrillas during the Malayan Emergency. Scherger deliberately sited his headquarters, which had been based in Singapore when he took over, next to the offices of the Director of Operations in Kuala Lumpur, to more closely align air tasking with overall military planning. He expanded the use of helicopters for troop delivery and casualty evacuation, and presided over a change in tactics that saw an earlier policy of indiscriminate saturation bombing of jungle areas replaced by one of precision strike against enemy camps. He also pioneered psychological warfare in the form of "voice" aircraft broadcasting propaganda, close cooperation between light aircraft spotters and ground forces to aid bombing missions, and defoliation to clear jungle cover. 
Appointed a Companion of the Order of the Bath on 30 April 1954 for his service in Malaya, Scherger joined the Air Board as Air Member for Personnel in March 1955. During his term he commissioned a review into the effectiveness of the syllabus at RAAF College for meeting the future needs of the Air Force in an age of missiles and nuclear weaponry. This led to a policy of cadets undertaking academic degrees, in line with similar institutions in the other armed services; the College was subsequently renamed RAAF Academy. Promoted to air marshal, he became Chief of the Air Staff on 19 March 1957, succeeding Air Marshal Sir John McCauley. Long identified as a strong contender for the RAAF's senior role, Scherger was described by Air Marshal Hardman as "easily the best material on offer". He declared that as an administrator he was "not going to allow myself to be bogged down with minor matters of detail ... Broad policy comes from the top. These decisions have to be implemented in the commands—and that's the way it's going to be." ### 1957–1961: Chief of the Air Staff As CAS, one of Scherger's first tasks was investigating the feasibility of a nuclear arsenal for the Air Force. During visits to Britain and the US he explored the possibility of weapons being delivered by the RAAF's Sabre fighters or its Canberra bombers. In 1958, he held discussions with the Chief of Staff of the USAF, General Thomas D. White, about storing nuclear weapons in Australia under USAF control. In 1959 and 1960, Scherger had information sent out, including manuals and maintenance instructions, regarding equipping the Canberras with Mark 7 nuclear bombs, the same type that the British Canberras used. For a time, Scherger championed the purchase of a force of British-built Vulcan heavy bombers but excessive cost and a governmental determination to remain "under the shelter of the American nuclear umbrella" put paid to the proposal. Instead, in 1963, the decision was taken to purchase the General Dynamics F-111 swing-wing bomber "on the understanding that it could carry nuclear weapons". Turning to fighters, Scherger succeeded in reversing a publicly announced decision to purchase the F-104 Starfighter as a replacement for the Sabre, in favour of the Dassault Mirage III, a type better suited to Australia's requirements. During trials he had taken the controls of a Starfighter, reportedly becoming the first Australian to fly at twice the speed of sound. He was appointed Knight Commander of the Order of the British Empire (KBE) in the 1958 Queen's Birthday Honours. An advocate of helicopters since his experience in Malaya, Scherger influenced the purchase of the UH-1 Iroquois for Australia. He also played a key role in the acquisition of the C-130 Hercules transport in 1958, over the Federal treasury's "bureaucratic hand-wringing"; the type soon proved itself vital to defence force activity in the region, being described as second only to the F-111 as "the most significant aircraft the RAAF has ever operated". The following year, harking back to his experience in 1942, Scherger proposed a second airfield in the Darwin area, which led eventually to the establishment of RAAF Base Tindal near Katherine. He transferred funding already in place for extension of the runway at Laverton to effect this, signalling a fundamental shift in the Air Force's "centre of gravity" to the north of Australia. 
The first edition of RAAF News (now Air Force News), which had been sponsored by Scherger, appeared in January 1960 and carried a message from the CAS concerning current defence policy, as well as announcing that Sidewinder air-to-air missiles would begin equipping the Air Force's Sabres. Scherger also oversaw the introduction of Bloodhound surface-to-air missiles to the RAAF's arsenal. Towards the end of his term as CAS, he expressed interest in Britain's supersonic BAC TSR-2 as a replacement for the Canberra, but noted that it was "many years" from production. ### 1961–1966: Chairman of the Chiefs of Staff Committee Scherger became Chairman of the Chiefs of Staff Committee (COSC), the senior Australian military position at the time, in May 1961, taking over from Vice Admiral Sir Roy Dowling. Keen as ever to see a supersonic bomber replace the Canberra, he visited Britain in April 1963 to investigate progress of the TSR-2. Using back-channel sources of information, he satisfied himself that the RAF's pronouncements on the bomber's development were overly optimistic, and later that year began supporting selection of the F-111 as the aircraft best suited to supplant the Canberra. During the Indonesia–Malaysia Konfrontasi, Scherger acted as military liaison between the British and Australian governments. Openly sceptical about the cease-fire announced by President Sukarno on 25 January 1964, he supported British requests for Australian combat forces in Borneo but was in the short term "overruled by 'political cross-currents'". Towards the end of the year, he advocated bombing Indonesian air bases using RAAF Canberras in Malaya, but in this instance the British held back. Although Australia eventually deployed battalions of the Royal Australian Regiment from March 1965, Scherger's earlier optimistic estimation of the speed and level of his government's readiness to commit troops was said to have confused the British. The latter part of Scherger's tour as Chairman COSC coincided with the beginning of large-scale Australian involvement in the Vietnam War. By mid-1964, the Commonwealth had already sent a small team of military advisors, plus a detachment of newly acquired DHC-4 Caribou transports, to the region at the request of the South Vietnamese government. At a joint US, Australian and New Zealand conference from 30 March to 1 April 1965, and with instructions only to ascertain America's objectives in the conflict, Scherger indicated that Australia would be prepared to commit a ground force of around battalion size. Within a week, Prime Minister Robert Menzies' Federal cabinet had ratified the proposal, which was formally announced on 19 April. The 1st Battalion, Royal Australian Regiment deployed to Vietnam in May 1965, and two squadrons of the RAAF were committed by mid-1966. With the formation of Australian Forces Vietnam (AFV) at this time, Scherger recommended that Air Force units effectively serve under Army control "to convey an image of all Australian forces fighting together, as one unit". The Minister for Air, Peter Howson, felt that this made Scherger and the Army guilty of "exaggerated national pride". Promoted to air chief marshal on 25 March 1965, Scherger became not only the first RAAF officer to attain four-star rank, but also the first Duntroon graduate to do so. Already considered "a particularly assertive Chairman" of COSC, he found his role further strengthened by the promotion, as he now out-ranked the three service heads. 
His predecessors in the position had not advanced beyond three-star rank. Scherger remained as chairman until retiring from military life on 18 May 1966, having twice had his term extended by unanimous vote of Federal cabinet. ## Later life After leaving the military, Scherger became chairman of the Australian National Airlines Commission (ANAC), the controlling body of the Federal government's domestic carrier Trans Australia Airlines (TAA), on 1 July 1966. Seen as bringing to TAA "the dash and leadership the new air age demanded", he presided over the delivery of its first Douglas DC-9 twin-jet transport in 1967. The government's Two Airlines Policy, designed to ensure even competition between TAA and Australia's private domestic carrier, Ansett, meant that the decision of which airline would land the first DC-9 in the country came down to the toss of a coin, which Scherger won. He augmented his role at ANAC with chairmanship of the Commonwealth Aircraft Corporation (CAC) from 1968, and joined an Australian defence industries mission to the US the following year. Scherger continued to lead ANAC and CAC until retiring to live in Melbourne in 1975. He also served as a director on the boards of other firms, including electronics companies Plessey Pacific and International Computers (Australia) Limited. His wife Thelma died in a car accident in 1974. On 3 March 1975, at the age of seventy, he married Joy Robertson, a widow he had known for three months. At the time, he was quoted as saying, "In the Air Force you have to move quickly or someone else will shoot you down". In retirement he attracted some controversy by continuing to advocate for the Australian military to acquire a nuclear capability. Sir Frederick Scherger died in Melbourne on 16 January 1984, having been ill following a stroke the previous year. ## Legacy Described by Alan Stephens as one of "the outstanding officers of the post-war era" and "among the RAAF's better chiefs", Scherger is credited with helping to shift Australia's defence posture to the north by developing the concept of a series of front-line air bases in the continent's top end, beginning with plans for RAAF Tindal in 1959. From the time of his command of No. 10 Operational Group, he had an easy rapport—and worked to foster relations—with the US military, presaging closer defence ties with the Americans that he pursued as CAS. Among other things, this manifested itself in the purchase of more and more US equipment for the Air Force, and far less from the United Kingdom. Once elevated to the position of Chairman of COSC, he further severed ties with Britain by removing senior Royal Australian Navy officers from the Royal Navy List, and dropping the words "... and Chief of the Australian Section of the Imperial General Staff" from the title of Chief of the General Staff in the Australian Army List. As Chairman of COSC, Scherger played a leading role in the large-scale commitment of Australian forces to Vietnam. In an address at the Australian War Memorial in 2005, journalist Paul Kelly referred to him as "Australia's most prominent military hawk" at the time, who "exceeded his brief" by promising a battalion to the Americans before a formal request had been made. Historians Peter Edwards and Gregory Pemberton have written that "no official could have done more to press Australia into a military commitment in Vietnam than its most highly ranked serviceman, Air Chief Marshal Scherger". 
Reflecting later on Australia's involvement in the war, Scherger said "If you want allies, you've got to support allies ... It was never conceivable to us that America could lose—no way." Along with Athol Townley, Minister for Defence from 1958 to 1963, Scherger urged the establishment of an Australian Joint Services Staff College (JSSC), to further inter-service knowledge and cooperation against an indigenous background instead of sending officers to overseas colleges; the JSSC opened in 1970 as the Joint Services Wing of a proposed Australian Services Staff College, later being subsumed by the Australian Defence College. Scherger was also an early advocate for "one Australian Defence Force" comprising three branches, under one Minister for Defence, rather than three competing services, each with its own minister. According to his biographer, Harry Rayner, he bequeathed to his successor as Chairman of COSC, Lieutenant General Sir John Wilton, a position much invigorated and respected by the service chiefs and the government, and contributing to a more cohesive Australian defence organisation. In 1973, the single-service ministries were abolished in favour of an all-encompassing Department of Defence; by 1984, the Chairman COSC position had evolved to become the Chief of the Defence Force, directly commanding all three armed services through their respective chiefs. Rayner described Scherger as "the most quoted and best known of contemporary military leaders" in Australia from 1957 to 1966, recognised and admired by civilian and soldier alike. Detractors accused him of cunning and excessive politicking, Air Marshal Williams declaring that Scherger favoured his friends in the service and later in TAA and CAC, and Prime Minister John Gorton famously calling him "a politician in uniform". Scherger was also labelled a self-publicist, but argued "... you can't sell your ideas unless you can sell yourself, and if you can sell yourself you're half way to selling the ideas that you've got". The newest of the northern air bases he proposed while CAS, near Weipa in Cape York, was opened in 1998 and named RAAF Base Scherger in his honour. His name is also borne by Sir Frederick Scherger Drive in North Turramurra, New South Wales.
Operation Varsity
1945 Allied airborne operation in WWII
[ "1st Canadian Parachute Battalion", "Aerial operations and battles of World War II involving the United Kingdom", "Airborne operations of World War II", "Battles and operations of World War II involving the United States", "Battles of World War II involving Canada", "Glider Pilot Regiment operations", "Land battles and operations of World War II involving the United Kingdom", "March 1945 events", "Military history of Canada during World War II", "Military operations of World War II involving Germany", "Operation Plunder", "Rhine Province" ]
Operation Varsity (24 March 1945) was a successful airborne forces operation launched by Allied troops that took place toward the end of World War II. Involving more than 16,000 paratroopers and several thousand aircraft, it was the largest airborne operation in history to be conducted on a single day and in one location. Varsity was part of Operation Plunder, the Anglo-American-Canadian assault under Field Marshal Bernard Montgomery to cross the northern Rhine River and from there enter Northern Germany. Varsity was meant to help the surface river assault troops secure a foothold across the Rhine River in Western Germany by landing two airborne divisions on the eastern bank of the Rhine near the village of Hamminkeln and the town of Wesel. The plans called for the dropping of two divisions from U.S. XVIII Airborne Corps, under Major General Matthew B. Ridgway, to capture key territory and to generally disrupt German defenses to aid the advance of Allied ground forces. The British 6th Airborne Division was ordered to capture the villages of Schnappenberg and Hamminkeln, clear part of the Diersfordter Wald (Diersfordt Forest) of German forces, and secure three bridges over the River Issel. The U.S. 17th Airborne Division was to capture the village of Diersfordt and clear the rest of the Diersfordter Wald of any remaining German forces. The two divisions would hold the territory they had captured until relieved by advancing units of 21st Army Group, and then join in the general advance into northern Germany. The airborne forces made several mistakes, most notably when pilot error caused paratroopers from the 513th Parachute Infantry Regiment, a regiment of the U.S. 17th Airborne Division, to miss their drop zone and land on a British drop zone instead. However, the operation was a success, with both divisions capturing Rhine bridges and securing towns that could have been used by Germany to delay the advance of the British ground forces. The two divisions incurred more than 2,000 casualties, but captured about 3,500 German soldiers. The operation was the last large-scale Allied airborne operation of World War II. ## Background By March 1945, the Allied armies had advanced into Germany and had reached the River Rhine. The Rhine was a formidable natural obstacle to the Allied advance, but if breached would allow the Allies to access the North German Plain and ultimately advance on Berlin and other major cities in Northern Germany. Following the "Broad Front Approach" laid out by General Dwight David Eisenhower, the Supreme Allied Commander of the Allied Expeditionary Force, it was decided to attempt to breach the Rhine in several areas. Field Marshal Sir Bernard Montgomery, commanding the Anglo-Canadian 21st Army Group, devised a plan, code-named Operation Plunder, that would allow the forces under his command to breach the Rhine, which was subsequently authorized by Eisenhower. Plunder envisioned the British Second Army, under Lieutenant-General Miles C. Dempsey, and the U.S. Ninth Army, under Lieutenant General William Simpson, crossing the Rhine at Rees, Wesel, and an area south of the Lippe Canal. To ensure that the operation was a success, Montgomery insisted that an airborne component be inserted into the plans for the operation, to support the amphibious assaults that would take place; this was code-named Operation Varsity. Three airborne divisions were initially chosen to participate in the operation, these being the British 6th Airborne Division, the U.S. 
13th Airborne Division and the U.S. 17th Airborne Division, all of which were assigned to U.S. XVIII Airborne Corps, commanded by Major General Matthew B. Ridgway. One of these airborne formations, the British 6th Airborne Division, commanded by Major-General Eric Bols, was a veteran division; it had taken part in Operation Overlord, the assault on Normandy in June the previous year. However, the U.S. 17th Airborne Division, under Major General William Miley, had been activated only in April 1943 and had arrived in Britain in August 1944, too late to participate in Operation Overlord. The division did not participate in Operation Market Garden. It did, however, participate in the Ardennes campaign but had yet to take part in a combat drop. The U.S. 13th Airborne Division, under Major General Eldridge Chapman, had been activated in August 1943 and was transferred to France in 1945; the formation itself had never seen action, although one of its regiments, the 517th Parachute Infantry, had fought briefly in Italy, and later in Southern France and the Ardennes campaign. ## Prelude ### Allied preparation Operation Varsity was therefore planned with these three airborne divisions in mind, with all three to be dropped behind German lines in support of the 21st Army Group as it conducted its amphibious assaults to breach the Rhine. However, during the earliest planning stages, it became apparent that the 13th Airborne Division would be unable to participate in the operation, as there were only enough combat transport aircraft in the area to transport two divisions effectively. The plan for the operation was therefore altered to accommodate the two remaining airborne divisions, the British 6th and U.S. 17th Airborne Divisions. The two airborne divisions would be dropped behind German lines, with their objective to land around Wesel and disrupt enemy defences in order to aid the advance of the British Second Army towards Wesel. To achieve this, both divisions would be dropped near the village of Hamminkeln, and were tasked with a number of objectives: they were to seize the Diersfordter Wald, a forest that overlooked the Rhine, including a road linking several towns together; several bridges over a smaller waterway, the River Issel, were to be seized to facilitate the advance; and the village of Hamminkeln was to be captured. The Diersfordter Wald was chosen by Lieutenant-General Dempsey, the British Second Army commander, as the initial objective because its seizure would deny the Germans artillery positions from which they could disrupt Second Army's bridging operations. Once these objectives were taken, the airborne troops would consolidate their positions and await the arrival of Allied ground forces, defending the territory captured against the German forces known to be in the area. Operation Varsity would be the largest single-lift airborne operation conducted during the conflict; more significantly, it would contradict previous airborne strategy by having the airborne troops drop after the initial amphibious landings, in order to minimize the risks to the airborne troops learned from the experiences of Operation Market Garden, the attempt to capture the Rhine bridges in the Netherlands in 1944. 
Unlike Market Garden, the airborne forces would be dropped only a relatively short distance behind German lines, thereby ensuring that reinforcements in the form of Allied ground forces would be able to link up with them within a short period: this avoided risking the same type of disaster that had befallen the British 1st Airborne Division when it had been isolated and practically annihilated by German infantry and armour at Arnhem. It was also decided by the commander of the First Allied Airborne Army, General Lewis H. Brereton, who commanded all Allied airborne forces, including U.S. XVIII Airborne Corps, that the two airborne divisions participating in Operation Varsity would be dropped simultaneously in a single "lift," instead of being dropped several hours apart, addressing what had also been a problem during Operation Market Garden. Supply drops for the airborne forces would also be made as soon as possible to ensure adequate supplies were available to the airborne troops as they fought. ### German preparation By this period of the conflict, the number of German divisions remaining on the Western Front was rapidly declining, both in numbers and quality, a fact in the Allies' favour. By the night of 23 March, Montgomery had the equivalent of more than 30 divisions under his command, while the Germans fielded around 10 divisions, all weakened from constant fighting. The best German formation the Allied airborne troops would face was the 1st Parachute Army, although even this formation had been weakened from the losses it had sustained in earlier fighting, particularly when it had engaged Allied forces in the Reichswald Forest in February. First Parachute Army had three corps stationed along the river; the II Parachute Corps to the north, LXXXVI Army Corps in the centre, and LXIII Army Corps in the south. Of these formations, the II Parachute Corps and LXXXVI Corps had a shared boundary that ran through the proposed landing zones for the Allied airborne divisions, meaning that the leading formation of each corps — these being the 7th Parachute and 84th Infantry Divisions — would face the airborne assault. After their retreat to the Rhine both divisions were under-strength and did not number more than 4,000 men each, with 84th Infantry Division supported by only 50 or so medium artillery pieces. The seven divisions that formed the 1st Parachute Army were short of manpower and munitions, and although farms and villages were well prepared for defensive purposes, there were few mobile reserves, ensuring that the defenders had little way to concentrate their forces against the Allied bridgehead when the assault began. The mobile reserves that the Germans did possess consisted of some 150 armoured fighting vehicles under the command of 1st Parachute Army, the majority of which belonged to XLVII Panzer Corps. Allied intelligence believed that of the two divisions that formed XLVII Panzer Corps, the 116th Panzer Division had up to 70 tanks, and the 15th Panzergrenadier Division 15 tanks and between 20–30 assault guns. Intelligence also pointed to the possibility of a heavy anti-tank battalion being stationed in the area. Also, the Germans possessed a great number of antiaircraft weapons; on 17 March Allied intelligence estimated that the Germans had 103 heavy and 153 light anti-aircraft guns, a number which was drastically revised a week later to 114 heavy and 712 light anti-aircraft guns. 
The situation of the German defenders, and their ability to counter any assault effectively, was worsened when the Allies launched a large-scale air attack one week prior to Operation Varsity. The air attack involved more than 10,000 Allied sorties and concentrated primarily on Luftwaffe airfields and the German transportation system. The German defenders were also hampered by the fact that they had no reliable intelligence as to where the actual assault would be launched; although German forces along the Rhine had been alerted as to the general possibility of an Allied airborne attack, it was only when British engineers began to set up smoke generators opposite Emmerich and began laying a 60-mile (97 km) long smokescreen that the Germans knew where the assault would come. ## Battle Operation Plunder began at 9 pm on the evening of 23 March, and by the early hours of the morning of 24 March Allied ground units had secured a number of crossings on the eastern bank of the Rhine. In the first few hours of the day, the transport aircraft carrying the two airborne divisions that formed Operation Varsity began to take off from airbases in England and France and began to rendezvous over Brussels, before turning northeast for the Rhine dropping zones. The airlift consisted of 541 transport aircraft containing airborne troops, and a further 1,050 troop-carriers towing 1,350 gliders. The U.S. 17th Airborne Division consisted of 9,387 personnel, who were transported in 836 C-47 Skytrain transports, 72 C-46 Commando transports, and more than 900 Waco CG-4A gliders. The British 6th Airborne Division consisted of 7,220 personnel transported by 42 Douglas C-54 and 752 C-47 Dakota transport aircraft, as well as 420 Airspeed Horsa and General Aircraft Hamilcar gliders. This immense armada stretched more than 200 miles (322 km) in the sky and took 2 hours and 37 minutes to pass any given point, and was protected by some 2,153 Allied fighters from the U.S. Ninth Air Force and the Royal Air Force. The combination of the two divisions in one lift made this the largest single day airborne drop in history. At 10 am British and American airborne troops belonging to the 6th Airborne Division and 17th Airborne Division began landing on German soil, some 13 hours after the Allied ground assault began. ### 6th Airborne Division The first element of the British 6th Airborne Division to land was the 8th Parachute Battalion, part of the 3rd Parachute Brigade under Brigadier James Hill. The brigade actually dropped nine minutes earlier than scheduled, but successfully landed in drop zone A, while facing significant small-arms and 20 mm anti-aircraft fire. The brigade suffered a number of casualties as it engaged the German forces in the Diersfordter Wald, but by 11:00 hours the drop zone was all but completely clear of enemy forces and all battalions of the brigade had formed up. The key place of Schnappenberg was captured by the 9th Parachute Battalion in conjunction with the 1st Canadian Parachute Battalion, the latter unit having lost its Commanding Officer (CO), Lieutenant Colonel Jeff Nicklin, to German small-arms fire only moments after he had landed. Despite taking casualties the brigade cleared the area of German forces, and by 13:45 Brigadier Hill could report that the brigade had secured all of its objectives. 
Canadian medical orderly Corporal Frederick George Topham was awarded the Victoria Cross for his efforts to recover casualties and take them for treatment, despite his own wounds, and great personal danger. The next British airborne unit to land was the 5th Parachute Brigade, commanded by Brigadier Nigel Poett. The brigade was designated to land on drop zone B and achieved this, although not as accurately as 3rd Parachute Brigade due to poor visibility around the drop zone, which also made it more difficult for paratroopers of the brigade to rally. The drop zone came under heavy fire from German troops stationed nearby, and was subjected to shellfire and mortaring which inflicted casualties in the battalion rendezvous areas. However, the 7th Parachute Battalion soon cleared the DZ of German troops, many of whom were situated in farms and houses, and the 12th Parachute Battalion and 13th Parachute Battalion rapidly secured the rest of the brigade's objectives. The brigade was then ordered to move due east and clear an area near Schermbeck, as well as to engage German forces gathered to the west of the farmhouse where the 6th Airborne Division Headquarters was established. By 15:30 Brigadier Poett reported that the brigade had secured all of its objectives and linked up with other British airborne units. The third airborne unit that formed a part of the 6th Airborne Division was the 6th Airlanding Brigade, commanded by Brigadier Hugh Bellamy. The brigade was tasked with landing in company-sized groups and capturing several objectives, including the town of Hamminkeln. The gliders containing the airborne troops of the brigade landed in landing zones P, O, U and R under considerable antiaircraft fire, the landing being made even more difficult due to the presence of a great deal of haze and smoke. This resulted in a number of glider pilots being unable to identify their landing areas and losing their bearings; a number of gliders landed in the wrong areas or crashed. However, the majority of the gliders survived, allowing the battalions of the brigade to secure intact the three bridges over the River Issel that they had been tasked with capturing, as well as the village of Hamminkeln with the aid of American paratroopers of the 513th Parachute Infantry Regiment, which had been dropped by mistake nearby. The brigade secured all of its objectives shortly after capturing Hamminkeln. ### 17th Airborne Division The 507th Parachute Infantry Regiment, under the command of Colonel Edson Raff, was the lead assault formation for the 17th Airborne Division, and was consequently the first American airborne unit to land as part of Operation Varsity. The entire regiment was meant to be dropped in drop zone W, a clearing 2 miles (3 km) north of Wesel; however, excessive ground haze confused the pilots of the transport aircraft carrying the regiment, and as such when the 507th dropped it split into two halves. Colonel Raff and approximately 690 of his paratroopers landed northwest of the drop zone near the town of Diersfordt, with the rest of the regiment successfully landing in drop zone W. The colonel rallied his separated paratroopers and led them to drop zone W, engaging a battery of German artillery en route, killing or capturing the artillery crews before reuniting with the rest of the regiment. By 2 pm, the 507th PIR had secured all of its objectives and cleared the area around Diersfordt, having engaged numerous German troops and also destroying a German tank. 
The actions of the 507th Parachute Infantry during the initial landing also gained the division its second Medal of Honor, when Private George Peters posthumously received the award after charging a German machine gun nest and eliminating it with rifle fire and grenades, allowing his fellow paratroopers to gather their equipment and capture the regiment's first objective. The 513th Parachute Infantry Regiment was the second American airborne unit to land after the 507th, under the command of Colonel James Coutts. En route to the drop zone, the transport aircraft carrying the 513th had the misfortune to pass through a belt of German antiaircraft weapons; 22 of the C-46 transports were shot down and a further 38 damaged. Just as the 507th had, the 513th also suffered from pilot error due to the ground haze, and as such the regiment actually missed its designated drop zone, DZ X, and was dropped on one of the landing zones designated for the British 6th Airlanding Brigade. Despite this inaccuracy the paratroopers swiftly rallied and aided the British glider-borne troops who were landing simultaneously, eliminating several German artillery batteries that were covering the area. Once the German troops in the area had been eliminated, a combined force of American and British airborne troops stormed Hamminkeln and secured the town. By 2 pm, Colonel Coutts reported to Divisional Headquarters that the 513th Parachute Infantry had secured all of its objectives, having knocked out two tanks and two complete regiments of artillery during their assault. During its attempts to secure its objectives, the regiment also gained a third Medal of Honor for the 17th Airborne Division when Private First Class Stuart Stryker posthumously received the award after leading a charge against a German machine-gun nest, creating a distraction to allow the rest of his platoon to capture the fortified position in which the machine-gun was situated. The third component of the 17th Airborne Division to take part in the operation was the 194th Glider Infantry Regiment (GIR), under the command of Colonel James Pierce. Troopers of the 194th GIR landed accurately in landing zone S, but their gliders and tow aircraft took heavy casualties; 12 C-47 transports were lost due to anti-aircraft fire, and a further 140 were damaged by the same fire. The regiment landed in the midst of a number of German artillery batteries that were engaging Allied ground forces crossing the Rhine, and as such many of the gliders were engaged by German artillery pieces that had their barrels lowered for direct fire. However, these artillery batteries and their crews were defeated by the glider-borne troops, and the 194th Glider Infantry Regiment was soon able to report that its objectives had been secured, having destroyed 42 artillery pieces, 10 tanks, 2 self-propelled anti-aircraft vehicles and 5 self-propelled guns. ## OSS teams The Office of Strategic Services sent four two-man teams (codenamed Algonquin: teams Alsace, Poissy, S&S and Student) with Operation Varsity to infiltrate and report from behind enemy lines, but none succeeded. Team S&S comprised two agents in Wehrmacht uniforms with a captured Kübelwagen, and was to report by radio. The Kübelwagen, however, was put out of action while still in the glider; three tires and the long-range radio were shot up (German gunners had been told to attack the gliders rather than the tow planes). ## Aftermath Operation Varsity was a successful large-scale airborne operation. 
All of the objectives that the airborne troops had been tasked with had been captured and held, usually within only a few hours of the operation beginning. The bridges over the Issel had been successfully captured, although one later had to be destroyed to prevent its capture by counter-attacking German forces. The Diersfordter Forest had been cleared of enemy troops, and the roads through which the Germans might have routed reinforcements against the advance had been cut by airborne troops. Finally, Hamminkeln, the village that dominated the area and through which any advance would be made, had been secured by air-lifted units. By nightfall of 24 March, 15th (Scottish) Infantry Division had joined up with elements of 6th Airborne, and by midnight the first light bridge was across the Rhine. By 27 March, twelve bridges suitable for heavy armour had been installed over the Rhine and the Allies had 14 divisions on the east bank of the river, penetrating up to 10 miles (16 km). According to Generalmajor Heinz Fiebig, commanding officer of one of the defending German formations, 84th Infantry Division, the German forces defending the area had been greatly surprised by the speed with which the two airborne divisions had landed their troops, explaining that their sudden appearance had had a "shattering effect" on the greatly outnumbered defenders. He revealed during his interrogation that his division had been badly depleted and could muster barely 4,000 soldiers. The U.S. 17th Airborne Division gained its fourth Medal of Honor in the days following the operation, when Technical Sergeant Clinton M. Hedrick of the 194th Glider Infantry Regiment received the award posthumously after aiding in the capture of Lembeck Castle, which had been turned into a fortified position by the Germans. ### Casualties The casualties taken by both airborne formations were quite heavy, although lighter than had been expected. By nightfall of 24 March, the 6th Airborne Division had suffered around 1,400 personnel killed, wounded or missing in action out of the 7,220 personnel who were landed in the operation. The division also claimed to have secured around 1,500 prisoners of war. The 17th Airborne Division suffered a similar casualty rate, reporting around 1,300 casualties out of 9,650 personnel who took part in the operation, while the division claimed to have taken 2,000 POWs, a number similar to those taken by 6th Airborne. This made a total of around 3,500 POWs taken by both airborne formations during the operation. Between 24 and 29 March, the 17th Airborne had taken a total of 1,346 casualties. The air forces involved in the operation also suffered casualties; 56 aircraft in total were lost during the 24th: 21 of the 144 transport aircraft carrying the 17th Airborne were shot down and 59 were damaged by antiaircraft fire, and 16 bombers from the Eighth Air Force were also shot down during supply drops. ### Battle honours In the British and Commonwealth system of battle honours, there was no distinct award for service in Operation Varsity. Instead, units that participated in the operation were included in the awards made between 1956 and 1959 to all units that participated in the Rhine crossing between 23 March and 1 April 1945: Rhine, or The Rhine to Canadian units, later translated to Le Rhin for French Canadian units. ## Post-war praise Contemporary observers and historians generally agree that Operation Varsity was successful.
General Eisenhower called it "the most successful airborne operation carried out to date", and an observer later wrote that the operation showed "the highest state of development attained by troop-carrier and airborne units". In the official summary of the operation, Major General Ridgway wrote that the operation had been flawless, and that the two airborne divisions involved had destroyed enemy defences that might otherwise have taken days to reduce, ensuring the operation was successful. Several modern historians have also praised the operation and the improvements that were made for Varsity. G. G. Norton argued that the operation benefited from the lessons learned from previous operations, and Brian Jewell agrees, arguing that the lessons of Market Garden had been learned as the airborne forces were concentrated and quickly dropped, giving the defenders little time to recover. Norton also argues that improvements were made for supporting the airborne troops; he notes that a large number of artillery pieces were available to cover the landings and that observers were dropped with the airborne forces, thus augmenting the firepower and flexibility of the airborne troops. He also highlights the development of a technique that allowed entire brigades to be landed in tactical groups, giving them greater flexibility. Dropping the airborne forces after the ground forces had breached the Rhine also ensured that the airborne troops would not have to fight for long before being relieved, a major improvement on the manner in which the previous large-scale airborne operation, Market Garden, had been conducted. Historian Peter Allen states that while the airborne forces took heavy casualties, Varsity diverted German attention from the Rhine crossing onto themselves. Thus, the troops fighting to create a bridgehead across the Rhine suffered relatively few casualties, and were able to "break out from the Rhine in hours rather than days". ## Post-war criticism Despite a great deal of official accolade and praise over its success, a number of criticisms have been made of the operation and of the errors that accompanied it. Several military historians have been critical of the need for the operation, with one historian, Barry Gregory, arguing that "Operation Varsity was not entirely necessary..." Another historian, James A. Huston, argues that "...had the same resources been employed on the ground, it is conceivable that the advance to the east might have been even more rapid than it was". In The Last Offensive (1990), the US Army official history, Charles B. MacDonald asked whether under the prevailing circumstances an airborne attack was necessary or even justified. ### Aircraft shortages One specific failure in the massive operation was the critical lack of transport aircraft, an unsolved flaw that had dogged every large-scale airborne operation the Allies had conducted. In the original planning for Varsity, an extra airborne division, the 13th, had been included; however, a lack of transport aircraft to drop this division led to it being excluded from the final plan. Thus, the unsolved problem of a shortage of transport aircraft meant that a third of the planned troops to be used were discarded, weakening the fighting power of the airborne formation. In the event, the airborne troops actually employed were sufficient to overwhelm the defenders.
There was also a shortage of gliders, although Brereton eventually got the 906 CG-4As he needed for Varsity and 926 for Operation Choker II, an American crossing of the Rhine at Worms planned for March. New gliders were shipped crated from America for assembly in Europe. Some were recovered from the Netherlands despite pilfering for fabric and instruments and a storm which destroyed over a hundred; after two months only 281 of the 2,000 gliders there were retrieved. There was little recovery of gliders from Normandy. Some historians have commented on this failure; Gerard Devlin argues that because of this lack of aircraft the remaining two divisions were forced to shoulder the operation by themselves. ### Aircraft and troop losses Losses of airborne troops were high. The cause of this high casualty rate can most likely be traced to the fact that the operation was launched in full daylight rather than as a night assault. The airborne landings were conducted during the day primarily because the planners believed that a daytime operation had a better chance of success than at night, the troops being less scattered. However, landing paratroopers, and especially gliders, without the cover of darkness left them exceedingly vulnerable to anti-aircraft fire. The official history of the British Airborne Divisions highlights the cost of this trade-off, stating that of the 416 gliders that landed, only 88 remained undamaged by enemy fire, and that between 20 and 30 percent of the glider pilots were casualties. Another historian argues that the gliders landing in daylight was a calamity, with the 194th Glider Infantry Regiment having two-thirds of its gliders hit by ground fire and suffering heavy casualties as they landed. The casualty rates were worsened by the slow rates of release and descent of the gliders themselves, and the fact that each aircraft towed two gliders, slowing them even further; as the time to release a glider unit was 3–4 times longer than that for a parachute unit, the gliders were vulnerable to flak. A large number of paratroop drop aircraft were hit and lost as well. This was largely due to the hostile conditions encountered by the drop aircraft. Operation Varsity's paratroop drop phase was flown in daylight at slow speeds at very low altitudes, using unarmed cargo aircraft, over heavy concentrations of German 20 mm, 37 mm, and larger calibre antiaircraft (AA) cannon utilizing explosive, incendiary, and armor-piercing incendiary ammunition. By that stage of the war, German AA crews had trained to a high state of readiness; many batteries had considerable combat experience in firing on and destroying high speed, well-armed fighter and fighter-bomber aircraft while under fire themselves. Finally, while many if not all of the C-47s used in Operation Varsity had been retrofitted with self-sealing fuel tanks, the much larger C-46 Commando aircraft employed in the drop received no such modification. This was exacerbated by the C-46's unvented wings, which tended to pool leaked gasoline at the wing root where it could be ignited by flak or a stray spark. Although 19 of 72 C-46 aircraft were destroyed during Operation Varsity, losses of other aircraft types from AA fire during the same operation were also significant, including 13 gliders shot down, 14 crashed, and 126 damaged; 15 Consolidated B-24 bombers shot down, and 104 damaged; and 30 C-47s shot down and 339 damaged.
Lieutenant-Colonel Otway, who wrote an official history of the British airborne forces during World War II, stated that Operation Varsity highlighted the vulnerability of glider-borne units. While they arrived in complete sub-units and were able to move off more quickly than airborne troops dropped by parachute, the gliders were easy targets for anti-aircraft fire and short-range small-arms fire once landed; Otway concluded that in any future operations, troops dropped by parachute should secure landing zones prior to the arrival of glider-borne units. Thus, by having the landings conducted during daylight to ensure greater accuracy, the Allied planners incurred a far greater casualty rate, particularly amongst the glider-borne elements. The operation also suffered from poor piloting. Although the piloting was of a better quality than in the Sicilian and Normandy operations, there were still significant failures on the part of the pilots, especially when it is considered that the drop was conducted in daylight. A significant error occurred when the pilots of the transports carrying 513th Parachute Infantry Regiment dropped much of the regiment several miles from their designated drop zones, with the mis-dropped units actually landing in the British landing zones. ## See also - List of military operations in the West European Theater during World War II by year
67,411,594
Goodwin Fire
1,169,001,678
2017 wildfire in Arizona, United States
[ "2017 Arizona wildfires", "History of Yavapai County, Arizona", "July 2017 events in the United States", "June 2017 events in the United States" ]
The Goodwin Fire was a wildfire that burned 28,516 acres (11,540 ha) in the U.S. state of Arizona over 16 days, from June 24 to July 10, 2017. The fire destroyed 17 homes and damaged another 19 structures, but no firefighters or civilians were injured or died in the fire. Investigators did not determine any particular cause for the fire. The fire was first detected on June 24, 2017, by a two-man fire patrol that spotted smoke in the Bradshaw Mountains near Prescott, Arizona. Benefiting from undisturbed chaparral and high winds, the fire spread rapidly and forced the evacuation of several townships within Yavapai County and the closure of Arizona State Route 69. Despite firefighting aircraft being twice grounded by civilian drones operating in the burn area, firefighters made rapid progress containing the fire's spread after June 28. The fire was fully contained on July 10 and had lasting environmental consequences. ## Background Wildfires are a natural part of the ecological cycle of the Southwestern United States. The Goodwin Fire was one of 2,321 wildfires that burned a total of 429,564 acres (173,838 ha) in Arizona in 2017. The state had expected a "normal" fire season in its forests but high potential in the state's southern grasslands due to high temperatures, low humidity, and an abundance of fuels. By August 2017, wildfires had burned the most land since the 2011 season. In May 2018, the Ecological Restoration Institute at Northern Arizona University published a study of the 2017 wildfire season in Arizona and New Mexico and observed that more land had burned in Arizona than the average of the previous ten years. Eleven fires were studied, of which ten were in Arizona and included the Goodwin Fire. ## Fire At around 4:00 pm (MST), June 24, 2017, a two-man fire patrol monitoring the Bradshaw Mountains observed a column of smoke rising from a location about 14 mi (23 km) south of Prescott, in Yavapai County, Arizona. The pair reported the fire and began digging a firebreak; firefighting units arrived two hours later and began fire suppression efforts. Fed by undisturbed growths of dry shrubland (chaparral) and high winds, and with fire crews impaired by difficult terrain, the fire grew from 150 acres (61 ha) on June 24 to 25,000 acres (10,000 ha) on June 29. Yavapai County officials issued warnings about the smoke billowing from the fire on June 29. In response to the Goodwin Fire's rapid spread, all roads within or leading into the burn area were closed on June 26, and the communities of Mayer and Breezy Pines were evacuated the next day. On June 27, Arizona State Route 69 (SR 69) was closed between Prescott and Interstate 17 and residents of Walker, Potato Patch, Mountain Pine Acres, and Mount Union were issued preemptive evacuation notices. Doug Ducey, the Governor of Arizona, declared a state of emergency in Yavapai County the next day, and he secured additional state and federal resources for containing the Goodwin Fire. Ducey visited Dewey–Humboldt and the perimeter of the fire on June 29 to meet with firefighters and evacuees. By June 29, the containment of the Goodwin Fire's spread was estimated at 43%. Evacuation orders for residents of Mayer were lifted, as were all preemptive evacuation orders. SR 69 reopened on June 30. Firefighting aircraft were grounded on June 28 by a civilian drone flying over the burn area, a crime in Arizona (causing interference with emergency or law-enforcement efforts) for which the drone's operator was arrested on July 1. 
The operator was charged on July 7 with hindering firefighting efforts, but the charges were dropped on August 18. By July 4, when firefighting aircraft were again grounded by civilian drones, the Goodwin Fire had grown to 28,508 acres (11,537 ha) but had been 91% contained. The fire was fully contained on July 10. ## Aftermath The Goodwin Fire burned 28,516 acres (11,540 ha) over 16 days and cost \$15 million to suppress. Of the total area burned, 56% suffered total foliage mortality. The fire forced the evacuation of 9,000 people, destroyed 17 homes, and damaged another 19 structures. More than 650 firefighters were involved in containing the Goodwin Fire at its height. As early as July 5, officials began warning of the possibility of severe flooding during the North American monsoon as a consequence of the Goodwin Fire creating terrain incapable of absorbing water. On July 19, rainwater drained from the Goodwin Fire burn scar into Big Bug Creek, near Mayer, which overflowed into a trailer park within Mayer's municipal limits. The flood damaged 109 houses and two residents had to be rescued from their homes. Some evacuations ordered in response to the flooding remained in place until August 19. Firefighters suspected a human cause, but the subsequent investigation did not determine a specific cause. ### Environmental consequences On August 8, the United States Forest Service published a burned area emergency response assessment of the Goodwin Fire's burn scar and recommended immediate stabilization of severely burned areas via aerial reseeding. Senator Jeff Flake toured the burn scar on August 17. Helicopters began dropping 27,365 lb (12,413 kg) of grass seed on August 18.
1,960,791
Margaret Lea Houston
1,166,991,218
First Lady of the Republic of Texas (1819–1867)
[ "1819 births", "1867 deaths", "American slave owners", "American women slave owners", "Baptists from Alabama", "Baptists from Texas", "Deaths from yellow fever", "First ladies and gentlemen of Texas", "First ladies of the Republic of Texas", "Immigrants to the Republic of Texas", "People from Independence, Texas", "People from Marion, Alabama", "Sam Houston" ]
Margaret Lea Houston (April 11, 1819 – December 3, 1867) was First Lady of the Republic of Texas during her husband Sam Houston's second term as President of the Republic of Texas. They met following the first of his two non-consecutive terms as the Republic's president, and married when he was a representative in the Congress of the Republic of Texas. She was his third wife, remaining with him until his death. She came from a close-knit family in Alabama, many of whom also moved to Texas when she married the man who was an accomplished politician in both Tennessee and Texas, and who had won the Battle of San Jacinto during the Texas Revolution. The couple had eight children, and she gave birth to most of them while he was away attending to politics. Her mother Nancy Lea was a constant in their lives, helping with the children, managing the household help, and always providing either financial assistance or temporary housing. With the help of her extended family in Texas, Margaret convinced her husband to give up both alcohol and profane language. He believed his wife to be an exemplary woman of faith and, under her influence, converted to the Baptist denomination, after he had many years earlier been baptized a Catholic in Nacogdoches, Texas. Following the Annexation of Texas to the United States, Sam Houston shuttled back and forth to Washington, D.C. as the state's U.S. senator for 13 years, while Margaret remained in Texas raising their children. When he was elected the state's governor, Margaret became First Lady of the state of Texas and was pregnant with their last child. Her brief tenure came on the cusp of the Civil War, at a time when the state was torn apart over the debate of whether or not to secede from the United States, while her husband worked in vain to defeat the Texas Ordinance of Secession. There was an attempt on his life, and angry mobs gathered in the streets near the governor's mansion. With no government protection provided, she lived in fear for her family's safety. Her husband was removed from office by the Texas Secession Convention for refusing to swear loyalty to the Confederacy. Margaret became a wartime mother, whose eldest son joined the Confederate Army and was taken prisoner at the Battle of Shiloh. Her husband died before the end of the war. In her few remaining years, she became the keeper of the Sam Houston legacy and opened his records to a trusted biographer. When she died of yellow fever four and a half years later, Margaret could not be buried with her husband in a public cemetery in Huntsville for fear of contamination, and was instead interred next to her mother on private property. ## Early life Margaret Moffette Lea was born April 11, 1819, into a family of devout Baptists in Perry County, Alabama. Her father Temple Lea was a church deacon and the state treasurer of the Alabama Baptist Convention, and her mother Nancy Moffette Lea was the only woman delegate at the convention's formation. Margaret was the fifth of six children that included older brothers Martin, Henry Clinton and Vernal, older sister Varilla, and younger sister Antoinette. The Lea cotton plantation had been acquired with money from a Moffette family inheritance, and was operated by Nancy. When her father died in 1834, she inherited five slaves: Joshua, Eliza, her favorite, Viannah, Charlotte and Jackson. 
The older Lea children had married prior to Temple's death, but Vernal, Margaret and Antoinette accompanied the widowed Nancy when she moved into her son Henry's home at Marion. He was an accomplished attorney who sat on the boards of educational institutions, and would be elected to the Alabama State Senate in 1836. Margaret was enrolled at Professor McLean's School, and also attended Judson Female Institute. The latter was founded by Baptists to instruct genteel young women in what were considered acceptable goals of their time and place, "proficiency in needlework, dancing, drawing, and penmanship". Heavy emphasis was put on Baptist theology and missionary work. She wrote poetry and read romantic novels, while also becoming accomplished on guitar, harp and piano. Reverend Peter Crawford baptized her in the Siloam Baptist Church of Marion when she was 19, by which time the eligible young lady was considered "accomplished, well-connected and deeply religious". ## Marriage Sam Houston was an attorney by profession and politically accomplished even before he moved to Texas. In Tennessee, he had been both a member of the United States House of Representatives and governor. His military victory at the Battle of San Jacinto elevated him to hero status in Texas. After completing his first term as President of the Republic of Texas in early December 1838, he continued to practice law from his office in Liberty. He arrived in Mobile, Alabama, in the early months of 1839 as a partner of the Sabine City Company, seeking investors to develop a community that is today known as Sabine Pass. Through Martin Lea, he made the acquaintance of Antoinette's husband William Bledsoe, a wealthy businessman who in turn suggested Nancy Lea as a possible investor. Houston was invited to a garden party at Martin's home, and it was there that he first became acquainted with Margaret. The mutual attraction was instantaneous. Nancy was favorably impressed with Houston's land sales pitch, but not so impressed with his interest in her daughter. She and others in the family were concerned about his reputation as a hard-drinking carouser with a proclivity for profanity, who was 26 years older than Margaret and twice married. Several weeks of love letters had been exchanged between Margaret and Houston by the time he proposed marriage that summer of 1839, presenting her with his image carved on a brooch. In an effort to assuage the family's opposition to the union, Houston spent several weeks in the Lea home in Alabama. In September, during his absence from Texas, his supporters in San Augustine County elected him to serve in the Republic of Texas House of Representatives. When the couple's engagement was announced in newspapers, the Leas were not the only ones who were skeptical. Acquaintances in Texas were well versed in his personal history and aware that he had only recently obtained a divorce from his first wife, Eliza Allen of Gallatin, Tennessee. The original divorce paperwork from 1829 had been lost and never filed; Houston was unaware of this until 1837, when he immediately filed the paperwork to finalize his divorce. He had hopes of marrying a Texas woman, Anna Raguet, but as it played out she rejected him for his friend Mr. Irion. Political crony Barnard E. Bee Sr. tried to discourage him from making a third attempt at marriage, believing him to be "totally disqualified for domestic happiness".
As the day of their May 9, 1840, wedding approached, some family members still looked upon Houston with uncertainty and were determined to stop what they believed would be a disastrous union for Margaret. She would not be deterred, however, and the Reverend Peter Crawford officiated over the wedding of Margaret and the man with whom she had fallen in love. The newlyweds spent their honeymoon week at the Lafayette Hotel before sailing to Galveston, where Nancy and the Bledsoes had already established residences. Houston retained a house he owned in the city named for him, but Margaret had no taste for the hustle and bustle and preferred the less-populated Galveston. She and her personal slaves, who had accompanied the newlyweds from Alabama, shared her mother's house while Houston traveled. ## First Lady of the Republic The year before he met Margaret, Houston had purchased property at Cedar Point on Galveston Bay in Chambers County, which he named Raven Moor, and planned to expand with income from his law practice. The existing two-room log dogtrot house with its detached slave quarters overlooked Galveston Bay and became the newlyweds' first home, filled with Margaret's personal furnishings from Alabama as well as newer pieces. She renamed it Ben Lomond as a tip of the hat to the romantic Walter Scott works she had read, and delegated management of the household to her mother Nancy. During his second term as representative from San Augustine, Houston was elected in 1841 to once again serve as the Republic's president. Margaret disliked campaign events and giving up her privacy, so she frequently stayed home while her husband traveled about the Republic canvassing for votes. Yet, when she rose to the occasion, such as the extended post-election tour of San Augustine County and victory celebrations in Washington County and Houston City, the public adored her, and she became an impressive political asset. She rode in a local presidential parade, but stayed home rather than travel to the inauguration in Austin. When the couple appeared at several events in Nacogdoches, his old friends took notice of his total avoidance of alcohol, and he continued to assure her that he was giving it up completely. He also began to clean up his language to please his new wife, and would eventually claim to have eliminated his profanity altogether. Approximately 26 miles (42 km) north of Ben Lomond, the Bledsoes operated a sugar cane plantation at Grand Cane in Liberty County. Financially supplemented by Nancy, the plantation became a family gathering place. About a year after Vernal and Mary Lea also moved there, Mary suffered a miscarriage. Not long after that, the couple accepted trusteeship of a 7-year-old Galveston orphan named Susan Virginia Thorne, who was then placed in the care of Nancy. It was a problematic relationship from the beginning, and would grow to have legal ramifications for Margaret. Events leading up to the 1842 Battle of Salado Creek caused Houston to believe that Mexico was planning a full-scale invasion to re-take Texas. In response, he moved the Republic's capital farther east to Washington-on-the-Brazos, and sent Margaret back to her relatives in Alabama. Upon her later return, they temporarily lived with the Lockhart family at Washington-on-the-Brazos until they were able to acquire a small home there. The couple's first child Sam Houston Jr. was born in the new house on May 25, 1843.
Upon learning of her son Martin's death in a duel, Nancy moved in with the Houstons, helping Margaret with the new baby, and over Houston's objections, pitching in with some financial assistance for food and household necessities. ## Extended family life ### Raven Hill and Woodland When his presidential term ended on December 9, 1844, Houston turned his attention to the Raven Hill plantation he had acquired that year northwest of Grand Cane and east of Huntsville. Margaret's slave Joshua was put in charge of the carpentry to build her a new house. Nancy, Margaret and sister Antoinette devoted their time to activities in Grand Cane's Concord Baptist Church, of which they were founding members. She continued to be a wife who was happiest when she and her husband stayed close to home. Although she accompanied him to President Andrew Jackson's Tennessee funeral in the summer of 1845, she did not attend fetes held in her husband's honor by his old friends and supporters. During the latter part of the year, Antoinette's husband William died, followed a few months later by the death of Vernal's wife Mary. Prior to her death, she had elicited a promise from Margaret to assume the trusteeship of Susan Virginia Thorne. Texas officially relinquished its sovereignty on February 19, 1846, to become the 28th state in the union, and Houston was elected by the Texas State Legislature to serve in the United States Senate. Margaret's pregnancy prevented her from accompanying him, so when time and duty permitted he traveled back and forth between Texas and a temporary hotel residence in the nation's capital. When Reverend George W. Samson first met Houston at the E-Street Baptist Church in Washington, D.C., the senator told him that his attendance had been influenced by "one of the best Christians on earth", his wife Margaret. For the duration of his senatorial service, Houston regularly attended the E-Street church, sharing his wife's letters with Samson and delving into theological discussions pertaining to Margaret's interpretation of scriptures. Margaret's sister Antoinette eloped with wealthy Galveston businessman Charles Power in April and began a new life on his sugar plantation. Houston was home during a Congressional recess when their second child Nancy (Nannie) Elizabeth Houston was born at Raven Hill on September 6. About this time, in a letter to Houston that gave insight into Nancy's forceful constant presence in their lives, Margaret conceded, "She is high spirited and a little overbearing, I admit ..." but advised her husband to just give in to the insignificant issues. Houston replied, "I love the old Lady as a Mother, and have resolved to defer to her age and her disposition. Her blood is much like my own." During the early part of 1847, Houston's letters to Margaret were filled with his weariness of being away from home, and his concern that he had no letters from her for weeks. He promised that at the end of the current legislative session, he would "... fly with all speed to meet and greet my Love and embrace our little ones." When she finally answered, she initially only told him of a serious illness that Sam Jr. had since recovered from, even though he was aware of previous problems she had with a breast lump. She had been advised to see a specialist in Memphis, Tennessee, if there was a recurrence. When complications appeared, family friend Dr. Ashbel Smith recommended surgery in Texas; only then, did she inform her husband of the situation. 
Upon receipt of communication from her, Houston immediately departed Washington, D.C. After his return home, Houston negotiated a labor-swap arrangement with Raven Hill's overseer Captain Frank Hatch. In lieu of a cash payment for Hatch's services, the bulk of Houston's slave labor force was engaged to work on Hatch's property at Bermuda Spring. The remaining slaves were retained as house labor for Margaret. Eventually, Houston became the owner of Bermuda Spring when he and Hatch swapped properties, and he set about to build the Woodland home for his wife. The first child to be born in the house was Margaret (Maggie) Lea Houston, arriving on April 13, 1848, while Congress was in session and Houston was in Washington. The widowed Vernal remarried to Catherine Davis Goodall in 1849, but trusteeship of Susan Virginia Thorne, by now a teenager, remained with Margaret. With most of his time spent in the nation's capital, Houston's perception of Thorne was primarily second-hand gleanings from Margaret's letters; yet, he disliked and distrusted the orphaned girl to the point where he feared for the health and safety of his children with her in the house. Exacerbating the situation was Margaret's disapproval of the relationship that the teenage girl developed with overseer Thomas Gott. Push literally came to shove during an incident in which Margaret disciplined her for what she believed was rough handling of one of the children. Thorne alleged that during the ensuing dispute over the situation, Margaret had used threats and physical violence against her. After Thorne eloped with Gott a month later, the couple filed assault and battery charges against Margaret. When a grand jury investigation resulted in a deadlock, the matter was referred to the local Baptist church that Margaret helped found, and she was acquitted of the charges. Houston came to believe that the filing of legal charges against his wife had been encouraged by his political enemies. Daughter Mary William (Mary Willie) Houston was born on April 9, 1850, in the Woodland house, during another Congressional session when Houston was in Washington. Their fourth child Antoinette (Nettie) Power Houston arrived on January 20, 1852, while he was again away on a business trip. Many friends and acquaintances came to visit the Houstons at Woodland, including members of the Alabama-Coushatta Tribe who had allied with Houston during the Texas Revolution; he in return had assisted them in their being granted a reservation in east Texas. Throughout the last years of his presidency, Houston had made numerous efforts for the Republic to find common ground with the various tribes, asserting their right to own land. Many tribes had come to respect him as their friend. ### Sam Houston's profession of faith Nancy moved southwest of Huntsville to Independence in 1852, and much of the remaining Lea family began to form its nucleus in the Washington County community. Antoinette and Charles Power were also living in Independence after their Galveston sugar plantation was decimated by a hurricane. Brothers Vernal and Henry both died that year. The following year, Varilla's husband Robertus Royston also died and she joined the rest of the family in Independence. That August, the Houstons bought a house near the original Baylor University campus in Independence. While Houston was attending to business in Washington, their sixth child Andrew Jackson Houston was born on June 21, 1854. 
As required by Mexican federal law for property ownership in Coahuila y Tejas, Houston had been baptized into the Catholic faith in the Adolphus Sterne House in Nacogdoches prior to Texas independence. By 1854, when Houston told Reverend Samson he felt compelled to make a public profession of faith, perhaps on the floor of the United States Senate, Margaret and her family had spent 14 years influencing her husband's faith. Ultimately, he decided to make the profession among those who knew him best in Texas. Word quickly spread about Houston's upcoming public baptism, and spectators traveled from neighboring communities to witness the event. Reverend Rufus Columbus Burleson, the president of Baylor University and local church pastor, performed the rite in Little Rocky Creek, 2 miles (3.2 km) southeast of town. Houston afterwards still felt unworthy of taking the Eucharist and becoming a member of Margaret's church. In gratitude and celebration, Nancy sold her silverware to purchase a bell for the Rocky Creek Baptist Church. At her request, Reverend George Washington Baines of Brenham counseled with him to eliminate his self-doubts. Baines, who was the maternal great-grandfather of President Lyndon B. Johnson, maintained a close friendship with Sam Houston for the rest of Houston's life. Baines' son Joseph Wilson Baines served in the Texas state legislature, and was the father of Rebekah Baines, mother of Lyndon Johnson. ## First Lady of the state The state legislature decided during Houston's third senatorial term not to re-elect him, so he ran for the office of Governor of Texas, losing to Hardin Richard Runnels. He was still in Washington when William (Willie) Rogers Houston was born on May 25, 1858, their last child born in the Woodland home. In order to satisfy debts from his gubernatorial campaign, Houston was forced to sell the house to his political supporter J. Carroll Smith. He subsequently defeated incumbent Runnels with a second bid for the office during a period when the populace was bitterly divided over the issue of secession from the United States, and was sworn in on December 31, 1859. Construction on the Texas Governor's Mansion in Austin had been completed three years earlier, and it was first occupied by Governor Elisha M. Pease, whose wife played hostess to anyone who stopped by for a visit. The Houston family and their retinue of slaves moved into the mansion during a political climate that grew increasingly hostile over the secession debate. The family furniture had been moved from Independence by Joshua, since the state government had no budget for staffing, furnishing or maintaining the governor's residence. That financial burden fell on the shoulders of the incumbent, and the state partially defaulted on Houston's salary. Margaret feared for the family's safety, as her husband worked towards defeating passage of the state's Ordinance of Secession. There had been a botched assassination attempt on Houston, and she saw throngs of angry malcontents gathering in the city. Margaret closed the mansion doors to all but those with an invitation from the Houstons. The family and household slaves resided on the second floor of the mansion, while others lived in the stable. As with everywhere else they had lived, she cared nothing about public life, and instead worked with Eliza and the other servants to create a home that welcomed extended family members and personal friends of the Houstons. Houston would occasionally hire out some of his labor force.
The first child born in the Texas governor's mansion was also the last of the Houston children; Temple Lea Houston was delivered on August 12, 1860. This last birth left the 41-year-old Margaret debilitated for almost two weeks, with a watchful Houston constantly by her side. The Texas Secession Convention passed the Texas Ordinance of Secession on February 1, 1861, effectively becoming part of the Confederate States of America on March 1. Houston, like all other office holders in the state, was expected to take an oath of loyalty to the Confederacy. He refused and was removed from office by the Secession Convention on March 16, succeeded by Lieutenant Governor Edward Clark. ## Final years Their home in Independence having been leased out to the Baptists, retreating there was not an option. Houston was in poor health, as well as spiritually and financially broken. After a brief sojourn in Nancy's home, and over her objections, the family returned to Ben Lomond in early April. Sometime during August 1861, Sam Houston, Jr., enlisted in the Confederate States Army 2nd Texas Infantry Regiment, Company C Bayland Guards, sending Margaret into melancholia. She dreaded that her first-born child would never be home again. "My heart seems almost broken ... what shall I do? How shall I bear it? When I first heard the news, I thought I would lie down and die", she wrote to her mother. Houston tried to help out by assuming care of their other children in between his extended visits to Galveston. Her fears seemed well-founded when her son was critically wounded and left for dead at the April 1862 Battle of Shiloh. A second bullet was stopped by his Bible, bearing an inside inscription from Margaret. He was found languishing in a field by a Union Army clergyman who picked up the Bible and also found a letter from Margaret in his pocket. Taken prisoner and sent to Camp Douglas in Illinois, he was later released in a prisoner exchange and received a medical discharge in October. Lacking the financial means to buy back their Woodland home, they rented the Steamboat House in Huntsville. The 69-year-old Houston was in his final days and physically feeble, requiring the use of a cane to get around. Until daughter Maggie took over as his personal assistant, his wife shouldered the duties. Even so, during this period, he managed to get the Confederate War Department to discharge all draftees from the Alabama-Coushatta tribe, which had distanced itself completely from the conflict. On July 26, 1863, with Margaret at his bedside reading the 23rd Psalm to him, Houston died. His will named her as his executrix, and named his cousin Thomas Caruthers, as well as family friends Thomas Gibbs, J. Carroll Smith and Anthony Martin Branch, as executors. He had died land rich, but cash poor. The inventory compiled of his estate after his death listed several thousand acres in real estate, \$250 cash, slaves (one of whom was Joshua Houston), a handful of livestock and his personal possessions. Margaret was now a widow with seven of her eight children under the age of 18 and financially dependent on her. She returned to live near her mother in Independence, Texas, swapping land for a nearby property that became known as the Mrs. Sam Houston House. The Texas legislature eventually gave Margaret an amount equivalent to her husband's unpaid gubernatorial salary; nevertheless, in order to afford Sam Jr.'s enrollment at medical school at the University of Pennsylvania, she rented out the Ben Lomond plantation. 
Nancy Lea died of an undiagnosed set of flu-like ailments on February 7, 1864, and was entombed on the grounds of her home. Margaret died on December 3, 1867, having contracted yellow fever during an epidemic. Walter Reed would not make his discovery of the cause of yellow fever through mosquito bite until 1900; contamination through contact was the pervading fear in 1867, and prevented Margaret's remains from being interred in a public cemetery with her husband's. She was buried in the ground beside Nancy's tomb at 11 p.m. by her servant Bingley, family friend Major Eber Cave, and her two daughters Nettie and Mary Willie. No funeral service was performed. ## Legacy Two years after Sam Houston's death, Baylor University president William Carey Crane was commissioned by Margaret to write her husband's biography, allowing complete access to all correspondence and records. Crane was a Lea family friend from Alabama who had little more than a passing acquaintance with "the hero of San Jacinto". His perception of Margaret, however, was that of an extraordinary woman, in many aspects equal to the man she married. He stated that Houston's "guardian angel", as he called her, had set out from the time she met Houston to refine his rough edges and provide a solid foundation for his personal life. That assessment of Margaret's relationship with her husband was echoed over a century later by author James L. Haley, "... Houston trusted the care of his soul to Margaret, that he had no more war to fight within himself, left him with more energy to wage political battle." Ultimately, several of Houston's associates were cooperative with the Crane endeavor, but not everyone was inspired to join the effort. According to daughter Maggie, the author had told her that many valuable documents were destroyed by Margaret in a fit of anger when someone she considered a friend expressed disinterest. Life and Select Literary Remains of Sam Houston of Texas was rejected by the initial publisher, but was eventually published by J. B. Lippincott in 1884. After emancipation and Margaret's death, "Aunt Eliza", as the children called her, alternated her time between Nannie's and Maggie's households. When Eliza died in 1898, at her request, she was buried next to Margaret. Nancy's tomb fell to decay over the years, after which she was re-interred in the ground with Margaret and Eliza. There was much discussion during the Texas 1936 centennial about moving Margaret's remains next to her husband's in Huntsville, but the family and various authorities never came to an agreement over it. Not until May 15, 1965, was an historical marker erected in Independence to denote her contributions to Texas history. ### Children "First Lady and the matriarch of one of the most significant families in Texas history." – Texas Historical Commission - Sam Houston Jr. (1843–1894) became a physician and author. He was widowed early into his marriage to Lucy Anderson and spent his final years living with his sister Maggie. : Sam Jr.'s daughter Margaret Bell Houston (1877–1966) was a writer and suffragist who became the first president of the Dallas Equal Suffrage Association. - Nancy (Nannie) Elizabeth Houston (1846–1920) married businessman Joseph Clay Stiles Morrow. When her mother died, Nannie assumed guardianship of her younger siblings. : Nannie's great granddaughter Jean Houston Baldwin (1916–2002) was the wife of Texas Governor Price Daniel. : Nannie's great-great-grandson Price Daniel Jr. 
(1941–1981) was Speaker of the Texas House of Representatives. - Margaret (Maggie) Lea Houston (1848–1906) married Weston Lafayette Williams. The couple purchased Margaret's house where they helped Nannie provide a home for their younger siblings, and also raised their own five children there. - Mary William (Mary Willie) Houston (1850–1931) married attorney John Simeon Morrow. Widowed young with five children to support, she became postmistress of Abilene, Texas, and held the position for 22 years. - Antoinette (Nettie) Power Houston (1852–1932) was poet laureate and state historian for the Daughters of the Republic of Texas. She married William Lorraine Bringhurst, then the president of Texas A&M University. Her funeral was held at the Alamo Mission in San Antonio, where her body had lain in state for public viewing. - Andrew Jackson Houston (1854–1941) was a United States Senator. A graduate of West Point, he served in Teddy Roosevelt's Rough Riders during the Spanish–American War. He was a proponent of prohibition and supportive of suffrage for women. His first wife was Carrie Glenn Purnell; after her death, he married Elizabeth Hart Good. - William (Willie) Rogers Houston (1858–1920) was a lifelong bachelor, and became a career Special Agent of the Bureau of Indian Affairs. He died from what is believed to have been a heart attack, a fall from his horse, or both, while on official duty, on the grounds of Goodland Indian School in Choctaw County, Oklahoma. - Temple Lea Houston (1860–1905) served as Texas State Senator, District 19, and Senate President Pro Tem. He spoke ten languages, including seven spoken by Native Americans. Temple Lea became the most famous of the Houston children and was considered a brilliant legal counsel whose "Soiled Dove Plea" won the acquittal of a woman accused of prostitution. Married to Laura Cross, he lived his final years in Oklahoma where locals gave him the nickname "Lone Wolf of the Canadian (river)". The Temple Houston television series was based on his legal career. ### Historic residences and sites - Sam Houston's house in Houston City has been replaced by an office building. - The Ben Lomond and Raven Hill homes deteriorated through the years and were destroyed, as was Nancy Lea's home in Independence. - Steamboat House was moved in 1936 to the grounds of the Sam Houston Memorial Museum at Sam Houston State University, and designated a Recorded Texas Historic Landmark in 1964. - The Mrs. Sam Houston House in Independence was listed on the National Register of Historic Places listings in Washington County on October 22, 1970. - The Woodland home was listed on the National Register of Historic Places listings in Walker County on May 30, 1974, as the Sam Houston House, and is part of the Sam Houston Memorial Museum. - The site of the Cedar Point home, located at the mouth of Cedar Bayou in Baytown, Texas, on Trinity Bay, now lies in the bayou, as meandering erosion has consumed it. This home is where the Houstons lived when Sam Jr. enlisted in the Bayland Guards, CSA. - The Rocky Creek Baptist Church bell purchased by Nancy Lea is currently located at the intersection of Farm to Market Road 50 and Farm to Market Road 390. - Sam Houston's baptismal site is marked by the Texas Historical Commission on Farm to Market Road 150 at Sam Houston Road. ### Depiction in popular media The actress Nancy Rennick (1932–2006), who had a leading role in the syndicated adventure television series Rescue 8, played Mrs.
Houston in the 1958 episode "The Girl Who Walked with a Giant" of the syndicated anthology series, Death Valley Days, hosted by Stanley Andrews. The story focuses on Margaret's role as a confidant of her husband from his days as president of the Republic of Texas to his time as governor, a post that he resigned in 1861 because he could not in good conscience support the Confederate States of America, of which Texas was a partner.
157,472
Book of Kells
1,173,329,845
8th-century illuminated manuscript Gospel book, held in Trinity College, Dublin
[ "800s", "9th century in Scotland", "9th-century Latin books", "Christianity in medieval Ireland", "County Meath", "Gospel Books", "Hiberno-Saxon manuscripts", "History of County Meath", "Irish manuscripts", "Library of Trinity College Dublin", "Religion in County Meath", "Vetus Latina New Testament manuscripts", "Vulgate manuscripts" ]
The Book of Kells (Latin: Codex Cenannensis; Irish: Leabhar Cheanannais; Dublin, Trinity College Library, MS A. I. [58], sometimes known as the Book of Columba) is an illuminated manuscript and Celtic Gospel book in Latin, containing the four Gospels of the New Testament together with various prefatory texts and tables. It was created in a Columban monastery in either Ireland or Scotland, and may have had contributions from various Columban institutions from each of these areas. It is believed to have been created c. 800 AD. The text of the Gospels is largely drawn from the Vulgate, although it also includes several passages drawn from the earlier versions of the Bible known as the Vetus Latina. It is regarded as a masterwork of Western calligraphy and the pinnacle of Insular illumination. The manuscript takes its name from the Abbey of Kells, County Meath, which was its home for centuries. The illustrations and ornamentation of the Book of Kells surpass those of other Insular Gospel books in extravagance and complexity. The decoration combines traditional Christian iconography with the ornate swirling motifs typical of Insular art. Figures of humans, animals and mythical beasts, together with Celtic knots and interlacing patterns in vibrant colours, enliven the manuscript's pages. Many of these minor decorative elements are imbued with Christian symbolism and so further emphasise the themes of the major illustrations. The manuscript today comprises 340 leaves or folios; the recto and verso of each leaf total 680 pages. Since 1953, it has been bound in four volumes, 330 mm by 250 mm (13 inches by 9.8 inches). The leaves are high-quality calf vellum; the unprecedentedly elaborate ornamentation that covers them includes ten full-page illustrations and text pages that are vibrant with decorated initials and interlinear miniatures, marking the furthest extension of the anti-classical and energetic qualities of Insular art. The Insular majuscule script of the text appears to be the work of at least three different scribes. The lettering is in iron gall ink, and the colours used were derived from a wide range of substances, some of which were imported from distant lands. The manuscript is on display to visitors in Trinity College Library, Dublin, with two pages shown at any one time and rotated every 12 weeks. A digitised version of the entire manuscript may also be seen online. ## History ### Origin The Book of Kells is one of the finest and most famous, and also one of the latest, of a group of manuscripts in what is known as the Insular style, produced from the late 6th through the early 9th centuries in monasteries in Ireland, Scotland and England and in continental monasteries with Hiberno-Scottish or Anglo-Saxon foundations. These manuscripts include the Cathach of St. Columba, the Ambrosiana Orosius, a fragmentary Gospel in the Durham Dean and Chapter Library (all from the early 7th century), and the Book of Durrow (from the second half of the 7th century). From the early 8th century come the Durham Gospels, the Echternach Gospels, the Lindisfarne Gospels, and the Lichfield Gospels. Among others, the St. Gall Gospel Book belongs to the late 8th century and the Book of Armagh (dated to 807–809) to the early 9th century. Scholars place these manuscripts together based on similarities in artistic style, script, and textual traditions.
The fully developed style of the ornamentation of the Book of Kells places it late in this series, either from the late 8th or early 9th century. The Book of Kells follows many of the iconographic and stylistic traditions found in these earlier manuscripts. For example, the form of the decorated letters found in the incipit pages for the Gospels is surprisingly consistent in Insular Gospels. Compare, for example, the incipit pages of the Gospel of Matthew in the Lindisfarne Gospels and in the Book of Kells, both of which feature intricate decorative knotwork patterns inside the outlines formed by the enlarged initial letters of the text. (For a more complete list of related manuscripts, see: List of Hiberno-Saxon illustrated manuscripts). The Abbey of Kells in Kells, County Meath had been founded, or refounded, from Iona Abbey, construction taking from 807 until the consecration of the church in 814. The manuscript's date and place of production have been subjects of considerable debate. Traditionally, the book was thought to have been created in the time of Columba, possibly even as the work of his own hands. This tradition has long been discredited on paleographic and stylistic grounds: most evidence points to a composition date c. 800, long after St. Columba's death in 597. The proposed dating in the 9th century coincides with Viking raids on Lindisfarne and Iona, which began c. 793–794 and eventually dispersed the monks and their holy relics into Ireland and Scotland. There is another tradition, with some traction among Irish scholars, that suggests the manuscript was created for the 200th anniversary of the saint's death. Alternatively, as is thought possible for the Northumbrian Lindisfarne Gospels and also the St Cuthbert Gospel, both associated with Saint Cuthbert, it may have been produced to mark the "translation" or moving of Columba's remains into a shrine reliquary, which probably had taken place by the 750s. There are at least four competing theories about the manuscript's place of origin and time of completion. First, the book, or perhaps just the text, may have been created at Iona and then completed in Kells. Second, the book may have been produced entirely at Iona. Third, the manuscript may have been produced entirely in the scriptorium at Kells. Finally, it may have been the product of Dunkeld or another monastery in Pictish Scotland, though there is no actual evidence for this theory, especially considering the absence of any surviving manuscript from Pictland. Although the question of the exact location of the book's production will probably never be answered conclusively, the first theory, that it was begun at Iona and continued at Kells, is widely accepted. Regardless of which theory is true, it is certain that the Book of Kells was produced by Columban monks closely associated with the community at Iona. The historical circumstances which informed the Book of Kells' production were the preservation of the Latin language after the fall of the Roman Empire and the establishment of monastic life, which entailed the production of texts. Cassiodorus in particular advocated both practices, having founded the monastery Vivarium in the sixth century and having written Institutiones, a work which describes and recommends several texts—both religious and secular—for study by monks. Vivarium included a scriptorium for the reproduction of books in both genres.
Later, the Carolingian period introduced the innovation of copying texts onto vellum, a material much more durable than the papyrus to which many ancient writings had been committed. Gradually, these traditions spread throughout the European continent and finally to the British Isles. ### Medieval period Kells Abbey was pillaged by Vikings many times at the beginning of the 9th century, and how the book survived is not known. The earliest historical reference to the book, and indeed to the book's presence at Kells, can be found in a 1007 entry in the Annals of Ulster. This entry records that "the great Gospel of Columkille [Columba], the chief relic of the Western World, was wickedly stolen during the night from the western sacristy of the great stone church at Cenannas on account of its wrought shrine". The manuscript was recovered a few months later—minus its golden and bejewelled cover—"under a sod". It is generally assumed that the "great Gospel of Columkille" is the Book of Kells. If this is correct, then the book was in Kells by 1007 and had been there long enough for thieves to learn of its presence. The force of ripping the manuscript free from its cover may account for the folios missing from the beginning and end of the Book of Kells. The description in the Annals of the book as "of Columkille"—that is, having belonged to, and perhaps being made by Columba—suggests that the book was believed at that time to have been made on Iona. Regardless, the book was certainly at Kells in the 12th century, when land charters pertaining to the Abbey of Kells were copied onto some of its blank pages. The practice of copying charters into important books was widespread in the medieval period, and such inscriptions in the Book of Kells provide concrete evidence about its location at the time. The Abbey of Kells was dissolved because of the ecclesiastical reforms of the 12th century. The abbey church was converted to a parish church in which the Book of Kells remained. #### Book of Kildare The 12th-century writer Gerald of Wales, in his Topographia Hibernica, described seeing a great Gospel Book in Kildare which many have since assumed was the Book of Kells. The description certainly matches Kells: > This book contains the harmony of the Four Evangelists according to Jerome, where for almost every page there are different designs... and other forms almost infinite... Fine craftsmanship is all about you, but you might not notice it. Look more keenly at it and you will penetrate to the very shrine of art. You will make out intricacies, so delicate and subtle, so exact and compact, so full of knots and links, with colours so fresh and vivid, that you might say that all this was the work of an angel, and not of a man. Since Gerald claims to have seen this book in Kildare, he may have seen another, now lost, book equal in quality to the Book of Kells, or he may have misstated his location. ### Modern period The Book of Kells remained in Kells until 1654. In that year, Cromwell's cavalry was quartered in the church at Kells, and the governor of the town sent the book to Dublin for safekeeping. Henry Jones, then Bishop of Clogher and Vice-Chancellor of the University of Dublin, presented the manuscript to Trinity College in Dublin in 1661, and it has remained there ever since, except for brief loans to other libraries and museums. It has been on display to the public in the Old Library at Trinity since the 19th century. The manuscript's rise to worldwide fame began in the 19th century. 
The association with St. Columba, who died the same year Augustine brought Christianity and literacy to Canterbury from Rome, was used to demonstrate Ireland's cultural primacy, seemingly providing "irrefutable precedence in the debate on the relative authority of the Irish and Roman churches". Queen Victoria and Prince Albert were invited to sign the book in 1849. The book's artistry was influential on the Celtic Revival; several Victorian picture books of medieval illuminations featured designs from the book which were in turn extensively copied and adapted, patterns appearing in metalwork, embroidery, furniture and pottery among other crafts. Over the centuries, the book has been rebound several times. During a 19th-century rebinding, the pages were badly cropped, with small parts of some illustrations being lost. The book was also rebound in 1895, but that rebinding broke down quickly. By the late 1920s, several folios had detached completely and were kept separate from the main volume. In 1953, bookbinder Roger Powell rebound the manuscript in four volumes and stretched several pages that had developed bulges. One volume is always on display at Trinity, opened at either a major decorated page or a text page with smaller decorations. In 2000, the volume containing the Gospel of Mark was sent to Canberra, Australia, for an exhibition of illuminated manuscripts. This was only the fourth time the Book of Kells had been sent abroad for exhibition. The volume suffered what has been called "minor pigment damage" while en route to Canberra. It is thought that the vibrations from the aeroplane's engines during the long flight may have caused the damage. ## Description The Book of Kells contains the four Gospels of the Christian scriptures written in black, red, purple, and yellow ink in an insular majuscule script, preceded by prefaces, summaries, and concordances of Gospel passages. Today, it consists of 340 vellum leaves, or folios, totalling 680 pages. Almost all folios are numbered at recto, bottom left. One folio number, 36, was mistakenly double-counted. As a result, the pagination of the entire book is reckoned thus: folio 1r — 36v, 36\*r — 36\*v (the double-counted folio), and 37r — 339v. The majority of the folios are part of larger sheets, called bifolios, which are folded in half to form two folios. The bifolios are nested inside of each other and sewn together to form gatherings called quires. On occasion, a folio is not part of a bifolio but is instead a single sheet inserted within a quire. The extant folios are gathered into 38 quires. There are between four and twelve folios (two to six bifolios) per quire; the folios are commonly, but not invariably, bound in groups of ten. Some folios are single sheets, as is frequently the case with the important decorated pages. The folios had lines drawn for the text, sometimes on both sides, after the bifolios were folded. Prick marks and guidelines can still be seen on some pages. The vellum is of high quality, although the folios have an uneven thickness, with some being close to leather while others are so thin as to be almost translucent. As many as twelve individuals may have collaborated on the book's production, of whom four scribes and three painters have been distinguished. The book's current dimensions are 330 by 250 mm. Originally, the folios were of no standard size, but they were cropped to the current size during a 19th-century rebinding. The text area is approximately 250 by 170 mm. 
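The folio-and-page reckoning described above, 340 leaves numbered only to 339 because folio 36 was counted twice, can be checked with a short script. This is only an illustrative sketch of the numbering convention, using the 36\* label quoted above; it is not an edition-level collation.

```python
# Sketch of the foliation described above: 340 leaves are cited as folios
# 1-339 because folio 36 was double-counted (the second leaf is "36*").
# Each leaf has a recto (r) and a verso (v) side, giving 680 pages.

def folio_labels():
    """Yield folio labels in citation order: 1, ..., 36, 36*, 37, ..., 339."""
    for n in range(1, 340):
        yield str(n)
        if n == 36:
            yield "36*"

labels = list(folio_labels())
pages = [f"{label}{side}" for label in labels for side in ("r", "v")]

assert len(labels) == 340   # 340 physical leaves
assert len(pages) == 680    # 680 pages, two per leaf
print(pages[:4], "...", pages[-2:])  # ['1r', '1v', '2r', '2v'] ... ['339r', '339v']
```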
Each text page has 16 to 18 lines of text. The manuscript is in remarkably good condition considering its age, though many pages have suffered some damage to the delicate artwork due to rubbing. The book must have been the product of a major scriptorium over several years, yet was apparently never finished, the projected decoration of some pages appearing only in outline. It is believed that the original manuscript consisted of about 370 folios, based on gaps in the text and the absence of key illustrations. The bulk of the missing material (or, about 30 folios) was perhaps lost when the book was stolen in the early 11th century. In 1621 the prominent Anglican clergyman James Ussher counted just 344 folios; presently another four or five are missing from the body of the text, after folios 177, 239, and 330. The missing bifolium 335-36 was found and restored in 1741. ### Contents The extant book contains preliminary matter, the complete text of the Gospels of Matthew, Mark and Luke, and the Gospel of John through John 17:13. The remaining preliminary matter consists of two fragmentary lists of Hebrew names contained in the Gospels, Breves causae (Gospel summaries), Argumenta (short biographies of the Evangelists), and Eusebian canon tables. It is probable that, like the Lindisfarne Gospels and the Books of Durrow and Armagh, part of the lost preliminary material included the letter of Jerome to Pope Damasus I beginning Novum opus, in which Jerome explains the purpose of his translation. It is also possible, though less likely, that the lost material included the letter of Eusebius to Carpianus, in which he explains the use of the canon tables. Of all the insular Gospels, only the Lindisfarne manuscript contains this letter. There are two fragments of the lists of Hebrew names; one on the recto of the first surviving folio and one on folio 26, which is currently inserted at the end of the prefatory matter for John. The first list fragment contains the end of the list for the Gospel of Matthew. The missing names from Matthew would require an additional two folios. The second list fragment, on folio 26, contains about a fourth of the list for Luke. The list for Luke would require an additional three folios. The structure of the quire in which folio 26 occurs is such that it is unlikely that there are three folios missing between folios 26 and 27, so that it is almost certain that folio 26 is not now in its original location. There is no trace of the lists for Mark and John. The first list fragment is followed by the canon tables of Eusebius of Caesarea. These tables, which predate the text of the Vulgate, were developed to cross-reference the Gospels. Eusebius divided the Gospel into chapters and then created tables that allowed readers to find where a given episode in the life of Christ was located in each of the Gospels. The canon tables were traditionally included in the prefatory material in most medieval copies of the Vulgate text of the Gospels. The tables in the Book of Kells are however unusable, first because the scribe condensed the tables in such a way as to make them confused. Second and more importantly, the corresponding chapter numbers were never inserted into the margins of the text, making it impossible to find the sections to which the canon tables refer. The reason for the omission remains unclear: the scribe may have planned to add the references upon the manuscript's completion, or he may have deliberately left them out so as not to spoil the appearance of pages. 
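The cross-referencing scheme that the canon tables were meant to provide, and why the missing marginal numbers make them unusable, can be illustrated with a small lookup sketch. The section numbers, groupings, and folio references below are invented placeholders for illustration, not Eusebius's actual apparatus or the real layout of the Book of Kells.

```python
# Toy model of a Eusebian-style canon table: each row groups the section
# numbers at which the same episode appears in different Gospels.
# All numbers and folio references are invented placeholders.

canon_table = [
    {"Matthew": 8, "Mark": 2, "Luke": 7, "John": 10},
    {"Matthew": 11, "Mark": 4, "Luke": 10},  # a canon without a Johannine parallel
]

# For the tables to work, each Gospel text also needs marginal section
# numbers tying every section to a place in the manuscript.
margins = {("Mark", 2): "folio 131r", ("Luke", 7): "folio 205v"}

def parallels(gospel, section):
    """Return the parallel sections of the other Gospels for one section."""
    for row in canon_table:
        if row.get(gospel) == section:
            return {g: s for g, s in row.items() if g != gospel}
    return {}

# Without marginal numbers (as in the Book of Kells) the lookup cannot be
# resolved to an actual page.
for gospel, section in parallels("Matthew", 8).items():
    print(gospel, section, "->", margins.get((gospel, section), "no marginal number"))
```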
The Breves causae and Argumenta belong to a pre-Vulgate tradition of manuscripts. The Breves causae are summaries of the Old Latin translations of the Gospels and are divided into numbered chapters. These chapter numbers, like the numbers for the canon tables, are not used on the text pages of the Gospels. It is unlikely that these numbers would have been used, even if the manuscript had been completed, because the chapter numbers corresponded to old Latin translations and would have been difficult to harmonise with the Vulgate text. The Argumenta are collections of legends about the Evangelists. The Breves causae and Argumenta are arranged in a strange order: first, come the Breves causae and Argumenta for Matthew, followed by the Breves and Argumenta for Mark, then, quite oddly, come the Argumenta of both Luke and John, followed by their Breves causae. This anomalous order mirrors that found in the Book of Durrow, although in the latter instance, the misplaced sections appear at the very end of the manuscript rather than as part of a continuous preliminary. In other insular manuscripts, such as the Lindisfarne Gospels, the Book of Armagh, and the Echternach Gospels, each Gospel is treated as a separate work and has its preliminaries immediately preceding it. The slavish repetition in Kells of the order of the Breves causae and Argumenta found in Durrow led scholar T. K. Abbott to conclude that the scribes of Kells had either the Book of Durrow or a common model in hand. ### Text and script The Book of Kells contains the text of the four Gospels based on the Vulgate. It does not, however, contain a pure copy of the Vulgate. There are numerous differences from the Vulgate, where Old Latin translations are used in lieu of Jerome's text. Although such variants are common in all the insular Gospels, there does not seem to be a consistent pattern of variation amongst the various insular texts. Evidence suggests that when the scribes were writing the text they often depended on memory rather than on their exemplar. The manuscript is written primarily in insular majuscule with some occurrences of minuscule letters (usually e or s). The text is usually written in one long line across the page. Françoise Henry identified at least three scribes in the manuscript, whom she named Hand A, Hand B, and Hand C. Hand A is found on folios 1 through 19v, folios 276 through 289, and folios 307 through the end of the manuscript. Hand A, for the most part, writes eighteen or nineteen lines per page in the brown gall ink common throughout the West. Hand B is found on folios 19r through 26 and folios 124 through 128. Hand B has a somewhat greater tendency to use minuscule and uses red, purple and black ink and a variable number of lines per page. Hand C is found throughout the majority of the text. Hand C also has a greater tendency to use minuscule than Hand A. Hand C uses the same brownish gall ink used by hand A and wrote, almost always, seventeen lines per page. Additionally a fourth scribe named Hand D has been hypothesized, to whom folio 104r was attributed. #### Errors and deviations There are several differences between the text and the accepted Gospels. In the genealogy of Jesus, which starts at Luke 3:23, Kells names an extra ancestor. At Matthew 10:34, a common English translation reads "I came not to send peace, but a sword". However, the manuscript reads gaudium ("joy") where it should read gladium ("sword"), thus translating as "I came not (only) to send peace, but joy." 
The lavishly decorated opening page of the Gospel according to John had been deciphered by George Bain as: "In principio erat verbum verum" (In the beginning was the True Word). Therefore, the incipit is a free translation into Latin of the Greek original λογος rather than a mere copy of the Roman version. #### Annotations Over the centuries multiple annotations have been written in the book, recording page information and historical events. During the 19th century, former Trinity Librarian J.H. Todd numbered the book's folios at recto, bottom left. On several of the blank pages among the preliminaries (folios 5v-7r and 27r) are found land charters pertaining to the Abbey of Kells; recording charters in important books was a common custom in the medieval period. James Ussher transcribed the charters in his collected works, and they were later translated into English. A blank page at the end of Luke (folio 289v) contains a poem complaining of taxation upon church land, dated to the 14th or 15th century. In the early 17th century one Richardus Whit recorded several recent events on the same page in "clumsy" Latin, including a famine in 1586, the accession of James I, and plague in Ireland during 1604. The signature of Thomas Ridgeway, 17th century Treasurer of Ireland, is extant on folio 31v, and the 1853 monogram of John O. Westwood, author of an early modern account of the book, is found on 339r. Three notes concerning the book's pagination are found together on a single page (folio 334v): in 1568 one Geralde Plunket noted his annotations of the Gospel's chapter numbers throughout the book. A second note from 1588 gave a folio count, and a third note by James Ussher reported 344 folios in the book as of 1621. The bifolium 335-336 was lost and subsequently restored in 1741, recorded in two notes on folio 337r. Plunket's accretions were varied and significant. He inscribed transcriptions in the margins of the major illuminated folios 8r, 29r, 203r and 292r. On folio 32v, he added the annotation "Jesus Christus" in the spandrels of the composition's architecture, identifying the portrait's subject as Christ; in the 19th century, this annotation was covered by white paint, altering the composition. Plunket also wrote his name on multiple pages, and added small animal embellishments. ### Decoration The text is accompanied by many full-page miniature illustrations, while smaller painted decorations appear throughout the text in unprecedented quantities. The decoration of the book is famous for combining intricate detail with bold and energetic compositions. The characteristics of the insular manuscript initial, as described by Carl Nordenfalk, here reach their most extreme realisation: "the initials ... are conceived as elastic forms expanding and contracting with a pulsating rhythm. The kinetic energy of their contours escapes into freely drawn appendices, a spiral line which in turn generates new curvilinear motifs...". The illustrations feature a broad range of colours, with purple, lilac, red, pink, green, and yellow being the colours most often used. Earlier manuscripts tend toward more narrow palettes: the Book of Durrow, for example, uses only four colours. As is usual with insular work, there was no use of gold or silver leaf in the manuscript. The pigments for the illustrations included red and yellow ochre, green copper pigment (sometimes called verdigris), indigo, and possibly lapis lazuli. 
These would have been imported from the Mediterranean region and, in the case of the lapis lazuli (also known as ultramarine), from northeast Afghanistan. Though the presence of lapis lazuli has long been considered evidence of the great cost required to create the manuscript, recent examination of the pigments has shown that lapis lazuli was not used. The lavish illumination programme is far greater than any other surviving Insular Gospel book. Thirty-three of the surviving pages contain decorative elements which dominate the entire page. These include ten full-page miniature illustrations: a portrait of the Virgin and Child, three pages of evangelist symbols informed by the tetramorphs described in Ezekiel and Revelation, two evangelist portraits, a portrait of Christ enthroned, a carpet page, and scenes of the Arrest of Jesus and Temptation of Christ. Twelve fully decorated text pages embellish the book's verses, of which the most extreme examples are the four incipits beginning each Gospel, together with the Chi Rho monogram, a page receiving comparable treatment which heralds a "second beginning" of Matthew, the narrative of Christ's life following his genealogy. Another six fully decorated text pages emphasize various points in the Passion story, while a seventh corresponds to the Temptation. The first eleven pages of the extant manuscript begin with a decorated list of Hebrew names, followed by ten pages of Eusebian canon tables framed by architectural elements. Additionally, fourteen pages feature large decorative elements which do not extend throughout the entire page. It is highly probable that there were other pages of miniature and decorated text that are now lost. Henry identified at least three distinct artists. The "Goldsmith" was responsible for the Chi Rho page, using colour to convey metallic hues. The "Illustrator" was given to idiosyncratic portraits, having produced the Temptation and the Arrest of Christ. The "Portrait Painter" executed the portraits of Christ and the Evangelists. Almost every page contains a decorative element incorporating colour; throughout the text pages, these are commonly stylized capitals. Only two pages—folios 29v and 301v—are devoid of pigment colouration or overt pictorial elements, but even they contain trace decorations in ink. The extant folios of the manuscript start with the fragment of the glossary of Hebrew names. This fragment occupies the left-hand column of folio 1r. A miniature of the four evangelist symbols, now much abraded, occupies the right-hand column. The miniature is oriented so that the volume must be turned ninety degrees to view it properly. The four evangelist symbols are a visual theme that runs throughout the book. They are almost always shown together to emphasise the doctrine of the four Gospels' unity of message. The unity of the Gospels is further emphasised by the decoration of the Eusebian canon tables. The canon tables illustrate the unity of the Gospels by organising corresponding passages from the Gospels. The Eusebian canon tables normally require twelve pages. In the Book of Kells, the makers of the manuscript planned for twelve pages (folios 1v through 7r) but for unknown reasons, condensed them into ten, leaving folios 6v and 7r blank. This condensation rendered the canon tables unusable. 
The decoration of the first eight pages of the canon tables is heavily influenced by early Gospel Books from the Mediterranean, where it was traditional to enclose the tables within an arcade (as seen in the London Canon Tables). The Kells manuscript presents this motif in an Insular spirit, where the arcades are not seen as architectural elements but rather become stylised geometric patterns with Insular ornamentation. The four evangelist symbols occupy the spaces under and above the arches. The last two canon tables are presented within a grid. This presentation is limited to Insular manuscripts and was first seen in the Book of Durrow. The preliminary matter is introduced by an iconic image of the Virgin and Child (folio 7v), the first representation of the Virgin Mary in a Western manuscript. Mary is shown in an odd mixture of frontal and three-quarter pose. This miniature also bears a stylistic similarity to the carved image on the lid of St. Cuthbert's coffin of 698. The iconography of the miniature seems to derive from Byzantine, Armenian or Coptic art. The Ireland in which the Book of Kells was crafted and manufactured, writes Christopher de Hamel, "was clearly no primitive backwater but a civilization which could now read Latin, although never occupied by the Romans, and which was somehow familiar with texts and artistic designs which have unambiguous parallels in the Coptic and Greek churches, such as carpet pages and Canon tables. Although the Book of Kells itself is as uniquely Irish as anything imaginable, it is a Mediterranean text and the pigments used in making it include orpiment, a yellow made from arsenic sulphide, exported from Italy, where it is found in volcanoes. There are clearly lines of trade and communication unknown to us." The miniature of the Virgin and Child faces the first page of the text, which begins the Breves causae of Matthew with the phrase Nativitas Christi in Bethlem (the birth of Christ in Bethlehem). The beginning page (folio 8r) of the text of the Breves causae is decorated and contained within an elaborate frame. The two-page spread of the miniature and the text makes a vivid introductory statement for the prefatory material. The opening lines of six of the other seven pieces of preliminary matter are enlarged and decorated (see above for the Breves causae of Luke), but no other section of the preliminaries is given the same full-page treatment as the beginning of the Breves causae of Matthew. The book was designed so that each of the Gospels would have an elaborate introductory decorative programme. Each Gospel was originally prefaced by a full-page miniature containing the four evangelist symbols, followed by a blank page. Then came a portrait of the evangelist which faced the opening text of the Gospel, itself given an elaborate decorative treatment. The Gospel of Matthew retains both its Evangelist portrait (folio 28v) and its page of Evangelist symbols (folio 27v, see above). The Gospel of Mark is missing the Evangelist portrait but retains its Evangelist symbols page (folio 129v). The Gospel of Luke is missing both the portrait and the Evangelist symbols page. The Gospel of John, like the Gospel of Matthew, retains both its portrait (folio 291v, see at right) and its Evangelist symbols page (folio 290v). It can be assumed that the portraits for Mark and Luke and the symbols page for Luke at one time existed but have been lost. 
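The surviving and lost elements of this introductory programme, as listed above, can be summarised in a minimal data structure. The folio numbers are those cited in the text, and None marks the pages assumed above to have once existed but been lost.

```python
# Inventory of the full-page openings of each Gospel as described above:
# a four-evangelist-symbols page and an evangelist portrait preceded the
# decorated opening text. None marks a page assumed to have been lost.

openings = {
    "Matthew": {"symbols page": "27v",  "portrait": "28v"},
    "Mark":    {"symbols page": "129v", "portrait": None},
    "Luke":    {"symbols page": None,   "portrait": None},
    "John":    {"symbols page": "290v", "portrait": "291v"},
}

for gospel, pages in openings.items():
    lost = [name for name, folio in pages.items() if folio is None]
    print(f"{gospel}: lost -> {', '.join(lost) if lost else 'none'}")
```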
The ornamentation of the opening few words of each Gospel is lavish; their decoration is so elaborate that the text itself is almost illegible. The opening page (folio 29r) of Matthew may stand as an example. (See illustration at left.) The page consists of only two words: Liber generationis ("The book of the generation"). The lib of Liber is turned into a giant monogram which dominates the entire page. The er of Liber is presented as an interlaced ornament within the b of the lib monogram. Generationis is broken into three lines and contained within an elaborate frame in the right lower quadrant of the page. The entire assemblage is contained within an elaborate border, further decorated with elaborate spirals and knot work, many of which are zoomorphic. The opening words of the gospel of Mark, Initium evangelii Iesu Christi ("The beginning of the Gospel of Jesus Christ"), Luke, Quoniam ("Forasmuch"), and John, In principio erat verbum verum ("In the beginning was the True Word"), are all given similar treatments. Although the decoration of these pages was most extensive in the Book of Kells, they are all decorated in the other Insular Gospel books. The Gospel of Matthew begins with a genealogy of Jesus, followed by his portrait. Folio 32v (top of article) has a miniature of Christ enthroned, flanked by peacocks. Peacocks function as symbols of Christ throughout the book. According to earlier accounts given by Isidore of Seville and Augustine in The City of God, the peacocks' flesh does not putrefy; the animals therefore became associated with Christ via the Resurrection. Facing the portrait of Christ on folio 33r is the only carpet page in the Book of Kells, which is rather anomalous; the Lindisfarne Gospels have five extant carpet pages and the Book of Durrow has six. The blank verso of folio 33 faces the single most lavish miniature of the early medieval period, the Book of Kells Chi Rho monogram, which serves as incipit for the narrative of the life of Christ. At Matthew 1:18 (folio 34r), the actual narrative of Christ's life starts. This "second beginning" to Matthew was given emphasis in many early Gospel Books, so much so that the two sections were often treated as separate works. The second beginning starts with the word Christ. The Greek letters chi and rho were normally used in medieval manuscripts to abbreviate the word Christ. In Insular Gospel books, the initial Chi Rho monogram was enlarged and decorated. In the Book of Kells, this second beginning was given a decorative programme equal to those prefacing the Gospels, its Chi Rho monogram having grown to consume the entire page. The letter chi dominates the page with one arm swooping across the majority of the page. The letter rho is snuggled underneath the arms of the chi. Both letters are divided into compartments which are lavishly decorated with knotwork and other patterns. The background is likewise awash in a mass of swirling and knotted decoration. Within this mass of decoration are hidden animals and insects. Three angels arise from one of the cross arms of the chi. This miniature is the largest and most lavish extant Chi Rho monogram in any Insular Gospel book, the culmination of a tradition that started with the Book of Durrow. The Book of Kells contains two other full-page illustrations, which depict episodes from the Passion story. The text of Matthew is illustrated with a full-page illumination of the Arrest of Christ (folio 114r). 
Jesus is shown beneath a stylised arcade while being held by two much smaller figures. In the text of Luke, there is a full-sized miniature of the Temptation of Christ (folio 202v). Christ is shown from the waist up on top of the Temple. To his right is a crowd of people, perhaps representing his disciples. To his left and below him is a black figure of Satan. Above him hover two angels. Throughout the body of the Gospels, six fully decorated text pages receive treatment comparable to that of the page which began the Breves causae of Matthew. Of these, five correspond to episodes in the Passion story, and one refers to the Temptation. The verso of the folio containing the Arrest of Christ (114v) has a full page of decorated text which reads "Tunc dicit illis Iesus omnes vos scan(dalum)" (Matthew 26:31), where Jesus addresses his disciples immediately before his arrest. A few pages later (folio 124r) is found a very similar decoration of the phrase "Tunc crucifixerant Xpi cum eo duos latrones" (Matthew 27:38), Christ's crucifixion together with two thieves. In the Gospel of Mark, another decorated page (folio 183r) gives a description of the Crucifixion (Mark 15:25), while the final (and decorated) page of Mark (folio 187v) describes Christ's Resurrection and Ascension (Mark 16:19–20). In the Gospel of Luke, folio 203r faces the illustration of the Temptation, itself an illumination of the text (Luke 4:1) beginning the Temptation narrative. Finally, folio 285r is a fully decorated page corresponding to another moment of the Passion, (Luke 23:56-Luke 24:1) between the Crucifixion and the Resurrection. Since the missing folios of John contain another Passion narrative, it is likely that John contained full pages of decorated text that have been lost. Apart from the thirty-three fully illuminated pages, fourteen receive substantial decoration not extending over the entire page. Among the Preliminaries and apart from the fully decorated page beginning the Breves causae of Matthew, six pages begin six of the eight sections of Breves causae and Argumenta with embellished names. The exception is folio 24v which introduces the final section of the Breves causae of John without a comparable device. Five pages (folios 200r-202v) give an organized decoration of Luke's genealogy of Christ, just before the Temptation narrative. Another three pages contain large illuminated elements not extending throughout the entire page. Folio 40v contains text of the Beatitudes in Matthew (Matthew 5:3–10) where the letters B beginning each line are linked into an ornate chain along the left margin of the page. Folio 127v has an embellished line beginning the final chapter of Matthew, which gives an account of the Resurrection. A similar treatment is given to a line in folio 188v (Luke 1:5), which begins an account of the Nativity. ## Purpose The book had a sacramental rather than educational purpose. Such a large, lavish Gospel would have been left on the high altar of the church and removed only for the reading of the Gospel during Mass, with the reader probably reciting from memory more than reading the text. It is significant that the Chronicles of Ulster state the book was stolen from the sacristy, where the vessels and other accoutrements of the Mass were stored, rather than from the monastic library. Its design seems to take this purpose in mind; that is, the book was produced with appearance taking precedence over practicality. There are numerous uncorrected mistakes in the text. 
Lines were often completed in a blank space in the line above. The chapter headings that were necessary to make the canon tables usable were not inserted into the margins of the page. In general, nothing was done to disrupt the look of the page: aesthetics were given priority over utility. ## Reproductions Some of the first faithful reproductions made of pages and elements of the Book of Kells were by the artist Helen Campbell D'Olier in the 19th century. She used vellum and reproduced the pigments used in the original manuscript. Photographs of her drawings were included in Sullivan's study of the Book of Kells, first printed in 1913. In 1951, the Swiss publisher Urs Graf Verlag Bern produced the first facsimile of the Book of Kells. The majority of the pages were reproduced in black-and-white photographs, but the edition also featured forty-eight colour reproductions, including all the full-page decorations. Under licence from the Board of Trinity College Dublin, Thames and Hudson produced a partial facsimile edition in 1974, which included a scholarly treatment of the work by Françoise Henry. This edition included all the full-page illustrations in the manuscript and a representative selection of the ornamentation of the text pages, together with some enlarged details of the illustrations. The reproductions were all in full colour, with photography by John Kennedy, Green Studio, Dublin. In 1979, Swiss publisher Faksimile-Verlag Luzern requested permission to produce a full-colour facsimile of the book. Permission was initially denied because Trinity College officials felt that the risk of damage to the book was too high. By 1986, Faksimile-Verlag had developed a process that used gentle suction to straighten a page so that it could be photographed without touching it and so won permission to publish a new facsimile. After each page was photographed, a single-page facsimile was prepared so the colours could be carefully compared to the original and adjustments made where necessary. The completed work was published in 1990 in a two-volume set containing the full facsimile and scholarly commentary. One copy is held by the Anglican Church in Kells, on the site of the original monastery. The ill-fated Celtworld heritage centre, which opened in Tramore, County Waterford in 1992, included a replica of the Book of Kells. It cost approximately £18,000 to produce. In 1994, Bernard Meehan, Keeper of Manuscripts at Trinity College Dublin, produced an introductory booklet on the Book of Kells, with 110 colour images of the manuscript. His 2012 book contained more than 80 pages from the manuscript reproduced full-size and in full colour. A digital copy of the manuscript was produced by Trinity College in 2006 and made available for purchase through Trinity College on DVD-ROM. It included the ability to leaf through each page, view two pages at a time, or look at a single page in a magnified setting. There were also commentary tracks about the specific pages as well as the history of the book. Users were given the option to search by specific illuminated categories including animals, capitols and angels. It retailed for approximately €30 but has since been discontinued. The Faksimile-Verlag images are now online at Trinity College's Digital Collections portal. 
## In popular culture The 2009 animated film The Secret of Kells tells a fictional story of the creation of the Book of Kells by an elderly monk Aidan and his young apprentice Brendan, who struggle to work on the manuscript in the face of destructive Viking raids. It was directed by Tomm Moore and nominated for the Academy Award for Best Animated Feature in 2009.
12,354,337
Queen angelfish
1,171,259,664
Species of marine angelfish
[ "Fish described in 1758", "Fish of the Western Atlantic", "Holacanthus", "Taxa named by Carl Linnaeus" ]
The queen angelfish (Holacanthus ciliaris), also known as the blue angelfish, golden angelfish, or yellow angelfish, is a species of marine angelfish found in the western Atlantic Ocean. It is a benthic (ocean floor) warm-water species that lives in coral reefs. It is recognized by its blue and yellow coloration and a distinctive spot or "crown" on its forehead. This crown distinguishes it from the closely related and similar-looking Bermuda blue angelfish (Holacanthus bermudensis), with which it overlaps in range and can interbreed. Adult queen angelfish are selective feeders and primarily eat sponges. Their social structure consists of harems which include one male and up to four females. They live within a territory where the females forage separately and are tended to by the male. Breeding in the species occurs near a full moon. The transparent eggs float in the water until they hatch. Juveniles of the species have different coloration than adults and act as cleaner fish. The queen angelfish is popular in the aquarium trade and has been a particularly common exported species from Brazil. In 2010, the queen angelfish was assessed as least concern by the International Union for Conservation of Nature as the wild population appeared to be stable. ## Taxonomy The queen angelfish was first described as Chaetodon ciliaris in 1758 by Carl Linnaeus in the 10th edition of his Systema Naturae, with the type locality given as the "Western Atlantic/Caribbean". In 1802 it was moved by French naturalist Bernard Germain de Lacépède to the genus Holacanthus, the name of which is derived from the Ancient Greek words "holos" (full) and "akantha" (thorn). Its specific name ciliaris means "fringed", a reference to its squamis ciliatis ("ciliate scales"). Other common names for the species include "blue angelfish", "golden angelfish" and "yellow angelfish". Marine angelfish of the genus Holacanthus likely emerged between 10.2 and 7.6 million years ago (mya). The most basal species is the Guinean angelfish (Holacanthus africanus) off the coast of West Africa, indicating that the lineage colonized the Atlantic from the Indian Ocean. The closure of the Isthmus of Panama 3.5–3.1 mya led to the splitting off of the Tropical Eastern Pacific species. The closest relative and sister species of the queen angelfish is the sympatric and similar Bermuda blue angelfish (H. bermudensis), from which it split around 1.5 mya. They are known to interbreed, producing a hybrid known as the Townsend angelfish which has features similar to both parent species. The Townsend angelfish is fertile, and individuals can breed both with each other and with the two parent species. ## Description The queen angelfish has a broad, flattened, oval-shaped body with a triangular tail fin, a reduced, dulled snout and a small mouth containing bristle-like teeth. The dorsal fin contains 14 spines and 19–21 soft rays, and the anal fin has 3 spines and 20–21 soft rays. The dorsal and anal fins both trail behind the body. This species attains a maximum total length of 45 cm (18 in) and weight of 1,600 g (56 oz). Males may be larger than females. The species is covered in yellow-tipped blue-green scales, with a bright yellow tail, pectoral and pelvic fins. Both the dorsal and anal fins have orange-yellow end points, while the pectoral fins have blue patches at the base.
On the forehead is an eye-like spot or "crown" that is cobalt blue with an electric blue outer ring and dotted with electric blue spots. This crown is the main feature distinguishing the species from the Bermuda blue angelfish. Juveniles are dark blue with bright blue vertical stripes and a yellow pectoral area. They resemble juvenile blue angelfish and are distinguished by more curved vertical stripes. Growing juveniles develop transitional patterns as they reach their adult coloration. Seven other color morphs have been recorded off the coast of the Saint Peter and Saint Paul Archipelago, Brazil. The most commonly recorded is a mostly gold or bright orange morph. Other morphs may be bright blue with some yellow, black or white coloration or even all white. Another color morph was recorded off Dry Tortugas, Florida, in 2009. This fish was mostly cobalt blue with white and yellow-orange colored areas. There are records of at least two wild queen angelfish at St. Peter and St. Paul with a "pughead" skeletal deformity, a squashed upper jaw and a lower jaw that sticks out. Such deformities mostly occur in captive fish. ## Ecology Queen angelfish are found in tropical and subtropical areas of the Western Atlantic Ocean around the coasts and islands of the Americas. They occur from Florida along the Gulf of Mexico and the Caribbean Sea down to Brazil. Their range extends as far east as Bermuda and the Saint Peter and Saint Paul Archipelago. Queen angelfish are benthic or bottom-dwelling and occur from shallow waters close to shore down to 70 m (230 ft). They live in coral reefs, preferring soft corals, and swim either alone or in pairs. Queen angelfish eat sponges, tunicates, jellyfish, corals, plankton, and algae. Juveniles act as cleaner fish, establishing cleaning stations where they remove ectoparasites from bigger fish. Off St. Thomas Island and Salvador, Bahia, 90% of the diet of adults is sponges. Off the Saint Peter and Saint Paul Archipelago, more than 30 prey species may be consumed, 68% being sponges, 25% being algae, and 5% being bryozoans. Queen angelfish appear to be selective feeders, as the proportion of a prey type in their diet does not correlate with its abundance. On the species level, the angelfish of the Saint Peter and Saint Paul Archipelago target the less common sponges Geodia neptuni, Erylus latens, Clathria calla, and Asteropus niger. ## Life cycle Male queen angelfish have large territories with a harem of two to four females. Little is known about the sexual development of the species, though they are presumed to be protogynous hermaphrodites. The largest harem female may transform into a male if the territorial male disappears. Around midday, the females forage individually in different locations. The male tends to each of them, rushing at, circling, and feeding next to them. Spawning in this species occurs year-round. It is observed sometime around a full moon. Courtship involves the male showing his side to the female and flicking his pectoral fins at her or "soaring" above them. At the beginning of spawning, the female swims towards the surface with the male swimming under her, his snout pressing against her vent. They then deposit their eggs and semen into the water. The female discharges between 25,000 and 75,000 eggs a day. After spawning, the pair split and head back to the ocean floor. The transparent eggs are pelagic and remain suspended in the water for 15–20 hours.
The hatched larvae have a large yolk sac with no functional eyes, gut or fins, but two days later, the yolk is absorbed, and the larvae more closely resemble fish. These larvae are plankton-eaters and grow quickly. Between three and four weeks of age, when they have reached a length of 15 to 20 mm (0.6 to 0.8 in), they descend to the ocean floor as juveniles. Juvenile angelfish live alone and in territories encompassing finger sponges and coral, where they establish cleaning stations for other fish. ## Human interactions Queen angelfish are not normally eaten or commercially fished. They are captured mostly for the aquarium trade, where they are highly valued. As juveniles, angelfish can adapt to eating typical aquarium food and hence have a higher survival rate than individuals taken as adults, which require a more specialized diet. In Brazil, the queen angelfish is the most common marine ornamental fish sold abroad. From 1995 to 2000, 43,730 queen angelfish were traded at Fortaleza in the northeast of the country, and in 1995, queen and French angelfish together made up 75% of the marine fish sold. In 2010, the queen angelfish was assessed as least concern by the International Union for Conservation of Nature, as the species is only significantly fished off Brazil and the wild population appeared to be secure. Queen angelfish were caught in the eastern Adriatic Sea, off Croatia, in 2011, and in the Mediterranean Sea, off Malta, in 2020. These are likely introductions from the aquarium industry and not natural colonizations. In 2015, an aquarium-introduced angelfish was found in the Red Sea at Eilat's Coral Beach, Israel. Its kidney was infected with the disease-causing bacterium Photobacterium damselae piscicida, which was not previously recorded in Red Sea fish, raising concerns that it could infect native fish.
417,878
Flag of Belarus
1,173,550,396
National flag
[ "Flag controversies", "Flags introduced in 1995", "Flags introduced in 2012", "Flags of Belarus", "National flags", "National symbols of Belarus" ]
The national flag of Belarus is a red-and-green flag with a white-and-red ornament pattern placed at the hoist (staff) end. The current design was introduced in 2012 by the State Committee for Standardisation of the Republic of Belarus, and is adapted from a design approved in a May 1995 referendum. It is a modification of the 1951 flag used while the country was a republic of the Soviet Union. Changes made to the Soviet-era flag were the removal of communist symbols – the hammer and sickle and the red star – as well as the reversal of the colours in the ornament pattern. Since the 1995 referendum, several flags used by Belarusian government officials and agencies have been modelled on this national flag. Historically, the white-red-white flag was used by the Belarusian People's Republic in 1918 before Belarus became a Soviet Republic, then by the Belarusian national movement in West Belarus followed by widespread unofficial use during the Nazi occupation of Belarus between 1942 and 1944, and again after it regained its independence in 1991 until the 1995 referendum. Opposition groups have continued to use this flag, though its display in Belarus has been restricted by the government of Belarus, which claims it is linked with Nazi collaboration due to its use by Belarusian collaborators during World War II. The white-red-white flag has been used in protests against the government, most recently the 2020–2021 Belarusian protests, and by the Belarusian diaspora. ## Design The basic design of the national flag of Belarus was first described in Presidential Decree No. 214 of 7 June 1995. The flag is a rectangular cloth consisting of two horizontal stripes: a red upper stripe covering two-thirds of the flag's height, and a green lower stripe covering one-third. A vertical red-on-white traditional Belarusian decorative pattern, which occupies one-ninth of the flag's length, is placed against the flagstaff. The flag's ratio of width to length is 1:2. The flag does not differ significantly from the flag of the Byelorussian Soviet Socialist Republic (Byelorussian SSR), other than the removal of the hammer and sickle and the red star, as well as the reversal of red and white in the hoist pattern, from white-on-red to red-on-white. While there is no official interpretation for the colours of the flag, an explanation given by President Alexander Lukashenko is that red represents freedom and the sacrifice of the nation's forefathers, while green represents life. In addition to the 1995 decree, "STB 911-2008: National Flag of the Republic of Belarus" was published by the State Committee for Standardisation of the Republic of Belarus in 2008. It gives the technical specifications of the national flag, such as the details of the colours and the ornament pattern. Until 2012, the red ornament design occupied 1⁄12 of the width of the flag, or 1⁄9 together with its white margin. As of 2012, the red pattern occupies the whole of the white margin (which remains at 1⁄9). ### Colours The colours of the national flag are regulated in "STB 911-2008: National Flag of the Republic of Belarus" and are listed under the CIE Standard illuminant D65.

Standard Colour Sample of the National Flag:

| Colour | x<sub>10</sub> | y<sub>10</sub> | Y<sub>10</sub> |
| --- | --- | --- | --- |
| Red | 0.553 ± 0.010 | 0.318 ± 0.010 | 14.8 ± 1.0 |
| Green | 0.297 ± 0.010 | 0.481 ± 0.010 | 29.6 ± 1.0 |

| 2012–present | Green | Red |
| --- | --- | --- |
| Pantone | 355 C | 1795 C |
| CMYK | 93-0-100-0 | 0-96-82-1 |
| HEX | #009739 | #D22730 |
| RGB | 0-151-57 | 210-39-48 |

### Construction Sheet ### Hoist ornament pattern A decorative pattern, designed in 1917 by Matrona Markevich, is displayed on the hoist of the flag (as it was previously, on the 1951 flag). The pattern, derived from local plants and flowers, is a traditional type commonly used in Belarus. These patterns are sometimes used in woven garments, most importantly in the traditional ruchnik, a woven cloth used for ceremonial events such as religious services and funerals, and for more minor social functions, such as a host offering guests bread and salt served on a ruchnik. The husband of Matrona Markevich was arrested for anti-Soviet propaganda and executed during Soviet repression in Belarus in 1937, after which the family was dekulakised. The original ruchnik has not survived and was either confiscated by the NKVD in 1937 or destroyed during World War II. The brother of Matrona Markevich, Mikhail Katsar, head of the ethnography and folklore department at the Academy of Sciences of Belarus, was included in the commission that was ordered to create a new flag for the Belarusian SSR in 1951. A monument to Matrona Markevich was erected in Sianno in 2015. ## Flag protocol Belarusian law requires that the flag be flown daily, weather permitting, from the following locations: - The residence of the president of Belarus - The buildings of the National Assembly of Belarus (House of Representatives and Council of the Republic) - The building of the Council of Ministers of Belarus - Courts of Belarus - Offices of local executive and administrative bodies - Above buildings in which sessions of local Councils of deputies take place (during the meetings) - Military bases or military ships - Belarusian embassies and consulates - At checkpoints and posts at the borders of Belarus The Belarusian flag is also officially flown at the sites of special occasions: - Sessions of local executive and administrative bodies - Voting/polling places - Sports arenas during competitions (although the IOC has its own rules on flag display) Belarusian diplomats and various government officials (such as the President and the Prime Minister) display the flag on vehicles. The flag may also be displayed on special occasions, such as memorial services and family holidays, and it can be used at ceremonies and events hosted by public organisations, companies, and NGOs. The regulations were issued in the same decree that defined the Belarusian flag. The national flag has been incorporated into the badge of the guard units in the Belarusian armed forces. The pole should be three times longer than the width of the flag. According to the 1995 presidential decree, the national flag is to be used on a staff that is coloured gold (ochre). Other parts of the protocol specify the finial (the metal ornament on a flag pole) as diamond-shaped and coloured in a yellow metal. In this diamond there is a five-pointed star (similar to that used in the national emblem).
The diamond pattern represents another continuation of Soviet flag traditions. The Day of the National Emblem and Flag of Belarus is 15 May. ## Historical flags ### White-red-white flag The white-red-white flag was used by the Belarusian People's Republic in 1918 before Belarus became a Soviet Republic, then by the Belarusian national movement in West Belarus followed by widespread unofficial use during the Nazi occupation of Belarus between 1942 and 1944, and again after it regained its independence in 1991 until the 1995 referendum. Opposition groups have continued to use this flag, though its display in Belarus has been restricted by the government of Belarus, which claims it is linked with Nazi collaboration due to its use by Belarusian collaborators during World War II. The white-red-white flag has been used in protests against the government, most recently the 2020–2021 Belarusian protests, and by the Belarusian diaspora. ### Soviet era #### 1919–1951 Before 1951, several different flags had been in use since the Revolution. The earliest flag was plain red, and was used in 1919 during the existence of the Lithuanian–Byelorussian Soviet Socialist Republic. After the formation of the Byelorussian SSR, the lettering ССРБ (SSRB) was added in gold to the top hoist. This design was established with the passage of the first Constitution of the Byelorussian SSR. It was later modified in the 1927 Constitution, when the letters were changed to БССР (BSSR) while the overall design was kept the same. This design was changed in 1937, when a hammer and sickle and red star were placed above the letters. The flag dimensions were also formally established as 1:2 for the first time. This flag remained in use until the adoption of the 1951 flag, which did away with the letters. #### 1951–1991 The flag of the Byelorussian SSR was adopted by decree on 25 December 1951. The flag was slightly modified in 1956 when construction details were added for the red star and the golden hammer and sickle. The final specifications of the flag were set in Article 120 of the Constitution of the Byelorussian SSR and are very similar to those of the current Belarusian flag. The flag had a width-to-length ratio of one to two (1:2), just like the flag of the Soviet Union (and those of the other fourteen union republics). The main portion of the flag was red (representing the Revolution), with the rest being green (representing the Belarusian forests). A pattern of white drawn on red decorated the hoist portion of the flag; this design is often used on Belarusian traditional costumes. In the upper corner of the flag, in the red portion, a gold hammer and sickle was added, with a red star outlined in gold above it. The hammer represented the worker, and the sickle the peasant; according to Soviet ideology, these two symbols crossed together symbolised co-operation between the two classes. The red star, a symbol commonly used by Communist parties, was said to stand either for the five social groups (workers, youth, peasants, military, and academics), the five known continents, or the five fingers of the worker's hand. The hammer, sickle and star were sometimes not displayed on the reverse of the flag. The reason for this distinctive design was that the Byelorussian SSR, along with the Soviet Union and the Ukrainian SSR, had been admitted to the United Nations in 1945 as founding members, and the three needed flags distinct from one another. The designer of the flag was Mikhail Gusyev.
### 1995 referendum The referendum that was held to adopt the state symbols took place on 14 May 1995. With a voter turnout of 64.7%, the new flag was approved by a majority in the ratio of three to one (75.1% to 24.9%). The other three questions were also passed by the voters. The way of carrying out the referendum as well as the legality of questioning the national symbols on a referendum was heavily criticised by the opposition. Opposition parties claimed that only 48.7% of the entire voting population (75.1% of the 64.7% who showed at the polling stations) supported the adoption of the new flag, but Belarusian law (as in many other countries) states that only a majority of voters is needed to decide on a referendum issue. Upon the results going in favor of President Lukashenko, he proclaimed that the return of the Soviet-style flag brought a sense of youth and pleasant memories to the nation. Lukashenko had tried to hold a similar referendum before, in 1993, but failed to get parliamentary support. Two months before the May 1995 referendum, Lukashenko proposed a flag design that consisted of two small bars of green and one wide bar of red. While it is not known what became of this suggestion, new designs (called "projects" in Belarus) were suggested a few days later, which were then put up to vote in the 1995 referendum. ## Other related flags Since the introduction of the 1995 flag, several other flags adopted by government agencies or bodies have been modelled on it. The presidential standard, which has been in use since 1997, was adopted by a decree called "Concerning the Standard of the President of Republic of Belarus". The standard's design is an exact copy of the national flag, with the addition of the Belarusian national emblem in gold and red. The standard's ratio of 5:6 differs from that of the national flag, making the standard almost square. It is used at buildings and on vehicles to denote the presence of the president. In 2001, President Lukashenko issued a decree granting a flag to the Armed Forces of Belarus. The flag, which has a ratio of 1:1.7, has the national ornamental pattern along the length of the hoist side of the flag. On the front of the flag is the Belarusian coat of arms, with the wording УЗБРОЕНЫЯ СІЛЫ ("Armed Forces") arched over it, and РЭСПУБЛІКІ БЕЛАРУСЬ ("of Republic of Belarus") written below; the text of both is in gold. On the reverse of the flag, the centre contains the symbol of the armed forces, which is a red star surrounded by a wreath of oak and laurel. Above the symbol is the phrase ЗА НАШУ РАДЗІМУ ("For our Motherland"), while below is the full name of the military unit.
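Returning to the current national flag, the proportions and colours given in the Design and Colours sections above can be combined into a small rendering sketch. The use of the Pillow library, the 300-pixel hoist dimension, and the plain white strip standing in for the red-on-white ornament are illustrative assumptions; the hex values are the Pantone-derived colours from the table above.

```python
# Minimal sketch of the national flag's geometry: width:length = 1:2, a red
# stripe over two-thirds of the height, green over the remaining third, and
# a hoist strip occupying one-ninth of the length. The ornament itself is
# simplified to a plain white placeholder strip.

from PIL import Image, ImageDraw

HEIGHT = 300                  # the flag's "width" (hoist dimension), in pixels
LENGTH = 2 * HEIGHT           # width-to-length ratio of 1:2
RED, GREEN, WHITE = "#D22730", "#009739", "#FFFFFF"

flag = Image.new("RGB", (LENGTH, HEIGHT), RED)
draw = ImageDraw.Draw(flag)

ornament_w = LENGTH // 9              # ornament strip: 1/9 of the length
red_h = round(HEIGHT * 2 / 3)         # red stripe: 2/3 of the height

draw.rectangle([ornament_w, red_h, LENGTH, HEIGHT], fill=GREEN)  # green lower stripe
draw.rectangle([0, 0, ornament_w, HEIGHT], fill=WHITE)           # placeholder ornament strip

flag.save("flag_of_belarus_sketch.png")
```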
26,399,628
Menominee Tribe v. United States
1,107,036,902
null
[ "1968 in United States case law", "Menominee tribe", "United States Native American treaty case law", "United States Supreme Court cases", "United States Supreme Court cases of the Warren Court" ]
Menominee Tribe v. United States, 391 U.S. 404 (1968), is a case in which the Supreme Court ruled that the Menominee Indian Tribe kept their historical hunting and fishing rights even after the federal government ceased to recognize the tribe. It was a landmark decision in Native American case law. The Menominee Indian Tribe had entered into a series of treaties with the United States that did not specifically state that they had hunting and fishing rights. In 1961, Congress terminated the tribe's federal recognition, ending its right to govern itself, federal support of health care and education programs, police and fire protection, and tribal rights to land. In 1963, three members of the tribe were charged with violating Wisconsin's hunting and fishing laws on land which had been a reservation for over 100 years. The tribe members were acquitted, but when the state appealed, the Wisconsin Supreme Court held that the Menominee tribe no longer had hunting and fishing rights because of the termination action by Congress. The tribe sued the United States for compensation in the US Court of Claims, which ruled that tribal members still had hunting and fishing rights and that Congress had not abrogated the rights. The opposite rulings by the state and federal courts brought the issue to the Supreme Court. In 1968, the Supreme Court held that the tribe retained its hunting and fishing rights under the treaties involved and the rights were not lost after federal recognition was ended by the Menominee Indian Termination Act without a clear and unequivocal statement by Congress removing the rights. ## Background ### Early treaties Ancestors of the Menominee Indian Tribe may have lived in the states of Wisconsin and Michigan for the last 10,000 years. Their traditional territory was about 10 million acres (4 million hectares). They first acknowledged that they were under the protection of the United States in the Treaty of St. Louis (1817). In 1825 and 1827, the treaties of Prairie du Chien and Butte des Morts answered boundary questions. None of the early treaties addressed hunting and fishing rights. In 1831, the tribe entered into the Treaty of Washington, which ceded about 3,000,000 acres (1,200,000 ha) to the federal government. These two treaties reserved hunting and fishing rights for the tribe on the ceded land until the President of the United States ordered the land surveyed and sold to settlers. In 1836, the tribe entered into the Treaty of Cedar Point, under which 4,184,000 acres (1,693,000 ha) were ceded to the federal government. The treaty did not mention hunting or fishing rights. In 1848, the tribe entered into another treaty with the United States, the Treaty of Lake Poygan, which ceded the tribe's remaining approximately 4,000,000 acres (1,600,000 ha) in exchange for 600,000 acres (240,000 ha) west of the Mississippi River in present-day Minnesota. This treaty was contingent on the tribe examining the land proposed for them and accepting it as suitable. In 1850, Chief Oshkosh led a delegation to the Crow Wing area and determined that the land was not suitable for the tribe, mainly because the proposed reservation was located between two warring tribes, the Dakota and Ojibwe. Oshkosh then pressed for a new treaty, stating that he "preferred a home somewhere in Wisconsin, for the poorest region in Wisconsin was better than the Crow Wing." ### Treaty of 1854 The tribe had been living in an area near the Wolf River. 
They entered into the Treaty of Wolf River with the United States in 1854. The United States set aside 276,480 acres (111,890 ha) of land for a reservation in present-day Menominee County, Wisconsin. In return, the tribe ceded the land in Minnesota back to the federal government. None of the previous treaties except the Treaty of Washington had addressed the tribe's retained hunting and fishing rights; the Wolf River Treaty likewise did not mention them, but it stated that the reservation was "to be held as Indian lands are held". Since the Treaty of Wolf River, this area has been the tribe's home, and they were free from state taxation, regulation and court jurisdiction. Of the original land, 230,000 acres (93,000 ha) of prime timberland remained under the tribe's control, while the remaining land was transferred to the Mahican and Lenape (the latter also known as the Delaware or Munsee) tribes. During this period, the Menominee enjoyed complete freedom to regulate hunting and fishing on the reservation, with the acquiescence of Wisconsin. ### Tribal termination In the mid- to late-1940s, the Menominee tribe was included in a government survey intended to identify tribes for termination, a process in which federal recognition of the tribe would be withdrawn and the tribe would no longer be dependent on the Bureau of Indian Affairs (BIA) to support them. The Menominee were thought to be a tribe that could be terminated because they were one of the richest tribes in the nation. The federal government thought that termination would allow the tribal members to be assimilated into mainstream American culture, becoming hard-working, tax-paying, productive citizens. In 1954, Congress passed the Menominee Indian Termination Act, terminating the tribe's federally recognized status. According to the terms of the Termination Act, the federally recognized status was to end in 1958. The tribe and the state of Wisconsin successfully lobbied for a delay in the implementation of termination until 1961. The tribe was opposed to termination for a number of reasons; their concerns included the loss of tribal culture, the loss of land due to tax liens, the possibility of bankruptcy and loss of the tribal timber industry, and the lack of tribal members who were trained to run a county government. The state of Wisconsin was concerned that with no industry for the tribe to tax, the state would be responsible for the large financial outlay that would be required to maintain governmental operations for the former reservation. On termination, the Menominee, previously one of the wealthiest tribes, became one of the poorest. In 1954, the tribe's timber operations allowed it to be self-sufficient. The tribe, which owned utility companies, paid for a hospital, BIA salaries, local schools, and a stipend to tribal members. The tribe was forced to use its reserve funds to develop a termination plan that it did not want; instead of having a reserve, it entered into termination with a \$300,000 deficit. Menominee County was created out of the old reservation boundaries and the tribe immediately had to finance its own police and fire protection. Without federal support and with no tax base, the situation became dire. The tribe closed the hospital, sold its utility company, and contracted those services to neighboring counties. Menominee Enterprises, Inc., formed to care for the tribe's needs after termination, was unable to pay property taxes and began to consider selling off tribal property.
Many Menominee tribal members believed that the sponsor of the termination bill, Senator Arthur Watkins of Utah, intended to force the loss of rich tribal lands to non-Indians. In 1962, the state of Wisconsin took the position that the hunting and fishing rights were abrogated by the termination act and that the tribal members were subject to state hunting and fishing regulations. Given the poverty on the former reservation, the loss of hunting rights meant the loss of one of the tribe's last remaining means of survival. ### State enforcement actions In 1962, tribal members Joseph L. Sanapaw, William J. Grignon, and Francis Basina were charged with violating state hunting and fishing regulations. All three admitted to the acts in open court, but claimed that the Wolf River Treaty gave them the right to hunt. The state trial court agreed and acquitted the three. The state was given leave to pursue a writ of error and appealed to the Wisconsin Supreme Court to answer whether the Termination Act canceled those rights retained by treaty. The Wisconsin Supreme Court in State v. Sanapaw held that the treaty rights were terminated by Congress. In analyzing the case, the court first had to determine whether the tribe had hunting and fishing rights under treaties with the United States. It found that although the Wolf River Treaty did not specifically mention hunting and fishing rights, the term "to be held as Indian lands are held" was clear. Indians have always been able to hunt and fish on their own land, and if a term in a treaty with Indians is ambiguous, the court found that it must be resolved in favor of the tribe. Since the tribe originally had hunting and fishing rights under the treaty, the Wisconsin Supreme Court then looked to determine whether Congress had removed those rights by enacting the Menominee Termination Act. It held that Congress had used its plenary power to abrogate those rights. The court placed special emphasis on the phrase "all statutes of the United States which affect Indians because of their status as Indians shall no longer be applicable to the members of the tribe, and the laws of the several States shall apply to the tribe and its members in the same manner as they apply to other citizens or persons within their jurisdiction." It held that the latter section was controlling, despite the tribal members' argument that hunting rights were retained by treaty rather than by statute, and concluded that the tribe had lost its hunting and fishing rights. The tribal members appealed to the U.S. Supreme Court, which declined to hear the appeal. ### Federal Court of Claims The Menominee sued in the U.S. Court of Claims to recover compensation for the loss of their hunting and fishing rights. The Court of Claims first clarified that the Menominee Termination Act did not abolish the tribe or its membership, but merely ended federal supervision of the tribe. Since the Menominee was still a tribe, although not one under federal trusteeship, the tribe had a right to assert a claim arising out of the Wolf River Treaty in accordance with the Indian Claims Commission Act and the Tucker Act. The Court of Claims looked at whether the tribe had hunting and fishing rights and drew the same conclusion as the Wisconsin Supreme Court—that the terms of the treaty had to be resolved in favor of the tribe, citing The Menominee Tribe of Indians v. United States, 95 Ct.Cl.
232 (Ct.Cl., 1941). In that decision, the Court of Claims had observed that the reason the tribe had agreed to the site of the reservation was that it was well suited for hunting, with plenty of game. The hunting rights by treaty were therefore confirmed. The Court of Claims had to determine whether the Menominee Termination Act had taken away that right. If it had, the tribe would have a valid claim for compensation; but if not, then there would be no compensation. On April 14, 1967, the Court of Claims denied the claim, stating that the hunting and fishing rights had not been abrogated by the Termination Act. In arriving at this decision, it noted that the legislative history included testimony from two witnesses who stated that the Act would not affect hunting and fishing rights acquired by treaty, but would abrogate any such rights acquired by statute. Additionally, the Court of Claims observed that Congress also amended Public Law 280 so that Indian hunting and fishing rights were protected in Wisconsin. This decision contradicted that of the Wisconsin Supreme Court. On October 9, 1967, the U.S. Supreme Court agreed to hear the appeal and granted certiorari (a writ directing the lower court to send up the case for review) to resolve the conflict between the Wisconsin Supreme Court and the federal Court of Claims. ## Supreme Court ### Argument In most appeals, the parties argue opposing positions. In this case, both the appellee (the Menominee) and the appellant (the United States) argued that the decision of the Court of Claims should be affirmed. The State of Wisconsin, as amicus curiae, argued that the Court of Claims ruling should be reversed. The tribe was represented by Charles A. Hobbs of Washington, D.C. The tribe argued that the Menominee Termination Act did not extinguish treaty rights, but instead had two purposes: to terminate federal supervision of the tribe and to transfer general criminal and civil jurisdiction to the state—a transfer that had already been accomplished by Public Law 280, which expressly preserved hunting and fishing rights. In the event that the court decided that the hunting and fishing rights had been extinguished, the tribe argued that it should receive compensation for the loss of those rights. The United States was represented by Louis F. Claiborne, assistant to the U.S. Solicitor General. The United States also argued that the Menominee Termination Act did not extinguish hunting and fishing rights under the 1854 treaty and therefore the tribe was not due compensation from the United States. Claiborne also argued that whatever regulatory rights were held by the federal government had been transferred to the state of Wisconsin by the termination act. The case was originally argued on January 22, 1968. During oral argument, some of the justices were concerned that the state of Wisconsin was not a party to the case. Following oral arguments, the court called for reargument and requested that Wisconsin present an oral argument in addition to the brief it had filed with the court. Justice Marshall recused himself from the case, as he had been the U.S. Solicitor General the previous year and had participated in the government's preparation of the case. ### Reargument On April 25, 1968, the case was reargued. The tribe was again represented by Hobbs, who made the same basic argument that the hunting and fishing rights were not extinguished. The state of Wisconsin was represented by Bronson La Follette, the Attorney General of Wisconsin.
La Follette argued that the plain language of the termination act not only ended federal supervision of the tribe, but extinguished the tribe and with it all treaty rights. He argued that the Court of Claims ruling was incorrect and should be reversed, and that the tribe was due compensation from the federal government. The United States was again represented by Claiborne, who reiterated his earlier argument. ### Opinion of the court Justice William O. Douglas delivered the opinion of the court. In a 6-2 decision, the ruling of the U.S. Court of Claims was affirmed, ruling that the tribe retained its hunting and fishing rights under the treaty. Douglas noted that Public Law 280 had been enacted and was fully in force for approximately seven years before the Termination Act became effective. The section of that law that dealt with Wisconsin provided that hunting and fishing rights in "Indian Country" were protected from state regulation and action. Thus from 1954 until termination in 1961, the Menominee's hunting and fishing rights were not interfered with by Wisconsin. The Termination Act stated that all federal statutes dealing with the tribe were no longer in force, but Douglas noted that it was silent with regard to treaties. The act did not specifically address the hunting and fishing rights, and Douglas stated that the U.S. Supreme Court would "decline to construe the Termination Act as a backhanded way of abrogating the hunting and fishing rights of these Indians." He noted that in a similar bill for the Klamath Tribe, there was a discussion on paying the tribe to buy out their hunting and fishing rights, a clear indication that Congress was aware of the implications. Douglas found it hard to believe that Congress would subject the United States to a claim for compensation without an explicit statement to that effect. He found that without a specific abrogation of those rights, the tribe retained those rights. ### Dissent Justice Potter Stewart, joined by Justice Hugo Black, dissented. Stewart acknowledged that the Wolf River Treaty unquestionably conferred hunting and fishing rights on the tribe and its members. He stated that the Termination Act subjected the members of the tribe to the same laws that all other citizens of Wisconsin were held to, including hunting and fishing regulations. In Stewart's opinion, Public Law 280 had no bearing on the case and the rights were not protected by the Termination Act, so they were lost. Stewart did note that this would have also made the claim for compensation valid under Shoshone Tribe v. United States, regardless of whether Congress intended it or not. He would have reversed the decision of the Court of Claims. ## Subsequent developments Menominee Tribe v. United States is a landmark case in Native American law, primarily in the area of reserved tribal rights. It has been used in college courses to explain tribal sovereignty rights and that tribes retain some rights even if the tribe has been terminated, as the Menominee tribe was. The decision in the case has affected subsequent legislation, such as the Alaska Native Claims Settlement Act, in which Congress expressly extinguished all aboriginal rights. The case has been discussed internationally, for example in Australia regarding the relevance of indigenous or aboriginal title. ### Law reviews and journals The case has been cited in over 300 law review articles as of October 2013. 
A consistent point made in numerous articles is that while Congress may terminate tribal and treaty rights, it must show a "specific intent to abrogate them." It is repeatedly cited by cases and law reviews to show that the court will construe laws and treaties, where ambiguous, in favor of the tribes. Judges and legal experts have noted that hunting and fishing rights are valuable property rights, and if the government takes away such rights, it must compensate those who hold the rights for their loss. Courts must also construe treaty rights and statutes liberally in favor of the Indians, even when the treaty does not specifically speak of hunting and fishing. ### Restoration of federal recognition In 1973, Congress repealed termination and restored federal recognition of the Menominee tribe. The Menominee Restoration Act was signed by Richard Nixon; it repealed the Menominee Indian Termination Act, reopened the tribal rolls, re-established the trust status and provided for the reformation of tribal government. The tribe was the first terminated tribe to be restored to trust and recognition status. The Restoration Act signaled the end of the termination era. ## See also - Menominee Tribe of Wis. v. United States: A 2016 U.S. Supreme Court decision
17,708,853
Andrew Johnston (singer)
1,169,798,011
Scottish singer
[ "1994 births", "21st-century British male singers", "21st-century British singers", "Anglo-Scots", "Boy sopranos", "Britain's Got Talent contestants", "British baritones", "British child singers", "English people of Scottish descent", "Living people", "Musicians from Carlisle, Cumbria", "Musicians from Dumfries", "People charged with rape" ]
Andrew Johnston (born 23 September 1994) is a British singer who rose to fame when he appeared as a boy soprano on the second series of the UK television talent show Britain's Got Talent in 2008. Although he did not win the competition, he received a contract to record with Syco Music, a label owned by the Britain's Got Talent judge Simon Cowell. Johnston's debut album, One Voice, was released in September of the same year, and reached number four on the UK Albums Chart. Although Johnston originally performed as a treble, his voice has since matured to baritone, and he is now a member of the National Youth Choir. Johnston was born in Dumfries, Scotland, and his parents separated when he was an infant. He and his mother moved to Carlisle, where they lived in "poverty". He became head chorister at Carlisle Cathedral, and was bullied at school because of his love of classical music. While some journalists have argued Britain's Got Talent producers took advantage of Johnston's background, others have hailed his story as inspirational. In 2009, he graduated from Trinity School. Johnston now studies full-time at the Royal Northern College of Music. ## History ### Early life and Carlisle Cathedral Choir Johnston was born on 23 September 1994 in Dumfries, Scotland, the son of Andrew Johnston and Morag Brannock. He was given the extensive name Andrew Aaron Lewis Patrick Brannock John Grieve Michael Robert Oscar Schmidt Johnston. Johnston's parents separated when he was eight months old, and from that time he lived with his mother and three older siblings in Carlisle, Cumbria, in the north of England, where he attended Trinity School. Johnston tried out for Carlisle Cathedral Choir at the age of six at the recommendation of Kim Harris, a teacher at his primary school. He was auditioned by the choirmaster Jeremy Suter and accepted into the choir at the age of seven. Johnston's mother, who had no previous association with the cathedral, described her feelings of being overwhelmed by emotion at having her boy singing in such a "stunning building among those extraordinary voices". His mother also described Johnston's busy regimen of practice four times a week and all day Sundays, saying that it took up all of their spare time. However, she said that the cathedral staff became like a family to her son, and that "it was such a lovely, safe, close feeling for him". Johnston, who attended Trinity School, was subject to abuse and threats from bullies which drove him to contemplate quitting the choir, but he was helped through the ordeal by his choirmaster and the dean and canons of the cathedral. By the time of his participation in Britain's Got Talent, Johnston was head chorister. In September 2008, after his appearance on Britain's Got Talent but before the release of his first album, Johnston embarked on a tour of Norway with the choir, performing at Stavanger Cathedral and Utstein Abbey, among other places. The tour was conceived because the Diocese of Stavanger is connected with the Diocese of Carlisle through the Partnership for World Mission. This was Johnston's last tour with the choir. Johnston features as head chorister on one of the choir's albums, The Choral Music of F.W Wadely, released in November 2008. ### Britain's Got Talent Johnston was entered as a competitor in the second series of Britain's Got Talent by his mother. He passed the first public audition, singing "Pie Jesu" from Andrew Lloyd Webber's Requiem. 
Amanda Holden, one of the competition's judges, was brought to tears, and the audience offered Johnston a standing ovation. Johnston was tipped as the favourite to win the competition. Later, Johnston described his initial audition as daunting, saying that "it was scary singing in front of 2,500 people. I had never sang on stage before – then there was also Simon, Amanda and Piers". He won his semi-final heat on 27 May 2008, receiving the most public votes on the night and thereby qualifying for the final. He sang "Tears in Heaven" by Eric Clapton; judge Holden told him he had "a gift from God in [his] voice". At the final on 30 May, he again sang "Pie Jesu". He finished in third place, behind the winner, street dancer George Sampson, and the runners-up, dance group Signature. Johnston left the stage in tears, later saying that he "was upset. But when you see the talent that was there, it was an honour just to be in the final". The day after the final, Cowell's publicist Max Clifford said that it was "quite possible" that Cowell would be offering record contracts to some of the finalists, including Johnston. Johnston and other contestants then embarked on a national arena tour. During his initial audition, Johnston claimed that he was bullied and victimised from the age of six because of his singing. When asked how he dealt with the issue, he stated "I carry on singing." In The Times, Johnston's success story was described as "the stuff of fairytales", as he was successful despite having been raised in "poverty". Johnston said he talked about being bullied not because producers told him to, but "because I believed it would help people who were going through what I had gone through be stronger". Johnston has subsequently visited schools and elsewhere to help other victims of bullying. He said "I want to use my experience of bullies to help other kids". ### One Voice On 12 June 2008, while Johnston was travelling with the Britain's Got Talent Live Tour, it was announced that Johnston had signed a record deal with Syco Music, a division of Sony BMG, and that his first album would be produced after the tour. The deal was reportedly for £1 million. After signing with Syco, Johnston made public appearances, including performing at Andrew Lloyd Webber's birthday celebrations on 14 September, and at Carlisle United's Brunton Park. Johnston's debut album, One Voice, was released on 29 September 2008. It includes a cover of "Walking in the Air", performed with Faryl Smith. The album was recorded over a six-week period in London, and the track listing was chosen by Cowell. Johnston described the recording process as "brilliant", saying that it was "really good – just to be in a recording studio and meet the different people". The album debuted in the British charts at number five, and finished the week at number four. The album was later certified gold, having sold 100,000 copies, and Johnston was presented with a gold disc by daytime television presenter Penny Smith. Critics responded positively to the album, with Kate Leaver, writing for the Korea JoongAng Daily, saying Johnston "has truer talent than hordes of his musical elders" and that "the vulnerability" of Johnston's performance on the album "makes for a haunting musical experience". In Music Week, the album was described as "highly-anticipated", and Johnston was called "exceptionally-talented".
After the album's release, Johnston became involved in the Sing Up campaign, appearing in schools around the country to encourage other young people to join choirs. In December 2008, Johnston made a guest appearance at Whitehaven's Christmas fair, and performed at a carol service in Bradford. Johnston was also invited to turn on the Carlisle Christmas lights and perform at the celebrations. Mike Mitchelson, of Carlisle City Council, described Johnston as "one of our local heroes". ### Hiatus and 2010s In September 2009, Johnston announced that he would be taking a year off from singing as his voice had broken, changing him to a tenor. He had previously performed as a treble. He said "the tutors at [the Royal Northern College of Music] said they'll be able to train my voice up again. It's the same as it ever was, just deeper". Johnston's voice then changed from a tenor to a baritone. After remaining out of the spotlight for two years, he joined the National Youth Choir. In 2011, he was awarded a Royal School of Church Music Gold medal; public performances that year included a charitable concert, alongside organists John Bromley and Tony Green, at St Paul's Church, Helsby in November. In September 2013, Johnston began to study for a Bachelor of Music degree at the Royal Northern College of Music, under the tutelage of Jeff Lawton, who had previously tutored him at the Junior College. He immediately joined the college's Chamber Choir and the Manchester Cathedral choir, but said that he intended to continue singing with the Carlisle Cathedral choir where possible. While he was a student, Johnston's singing was adversely affected by a broken nose, the result of an unprovoked attack in a Carlisle nightclub on New Year's Day, 2014. ## Personal life Johnston's family home is in Stanwix, Carlisle. His mother, Morag Brannock, worked for the Office for National Statistics before giving up her job to support her son's career. Prior to his Britain's Got Talent appearances, he attended Trinity School, and later received tuition from a personal tutor. Johnston said that he "had a lot of support from local people when ... taking part in Britain's Got Talent", and was given a civic award for outstanding achievement by Carlisle City Council in March 2009. Johnston's interests include jujitsu, in which he has a black belt. The Carlisle newspaper News and Star reported in September 2012 that Johnston had become the youngest person in the world to be granted a licence to teach the sport. In 2019, Johnston said that he had been working full-time as a roofer since 2017. ### Rape charges Johnston appeared before Westminster Magistrates' Court in 2022 charged with three sexual offences, including two charges of rape, dating between November 2019 and March 2020. Johnston did not enter any pleas, with the case being sent on for a pre-trial hearing on 7 September 2022. ## Discography Studio albums
46,880,915
Youth on the Prow, and Pleasure at the Helm
1,063,372,750
1832 painting by William Etty, inspired by a metaphor in Thomas Gray's poem The Bard
[ "1832 paintings", "19th-century allegorical paintings", "Allegorical paintings by English artists", "Birds in art", "Collection of the Tate galleries", "Maritime paintings", "Nude art", "Paintings based on literature", "Paintings by William Etty" ]
Youth on the Prow, and Pleasure at the Helm (also known as Fair Laughs the Morn and Youth and Pleasure) is an oil painting on canvas by English artist William Etty, first exhibited in 1832. Etty had been planning the painting since 1818–19, and an early version was exhibited in 1822. The piece was inspired by a metaphor in Thomas Gray's poem The Bard in which the apparently bright start to the notorious misrule of Richard II of England was compared to a gilded ship whose occupants are unaware of an approaching storm. Etty chose to illustrate Gray's lines literally, depicting a golden boat filled with and surrounded by nude and near-nude figures. Etty felt that his approach to the work illustrated a moral warning about the pursuit of pleasure, but his approach was not entirely successful. The Bard was about a supposed curse on the House of Plantagenet placed by a Welsh bard following Edward I of England's attempts to eradicate Welsh culture, and critics felt that Etty had somewhat misunderstood the point of Gray's poem. Some reviewers greatly praised the piece, and in particular Etty's technical abilities, but audiences of the time found it hard to understand the purpose of Etty's painting, and his use of nude figures led some critics to consider the work tasteless and offensive. The painting was bought in 1832 by Robert Vernon to form part of his collection of British art. Vernon donated his collection, including Youth on the Prow, and Pleasure at the Helm, to the National Gallery in 1847, which, in turn, transferred it to the Tate Gallery in 1949. It remains one of Etty's best-known works, and formed part of major exhibitions at Tate Britain in 2001–02 and at the York Art Gallery in 2011–12. ## Background William Etty, the seventh son of a York baker and miller, had been an apprentice printer in Hull. On completing his seven-year apprenticeship at the age of 18 he moved to London "with a few pieces of chalk crayons", and the intention of becoming a history painter in the tradition of the Old Masters. He enrolled in the Schools of the Royal Academy of Arts, studying under renowned portrait painter Thomas Lawrence. He submitted numerous paintings to the Royal Academy over the following decade, all of which were either rejected or received little attention when exhibited. In 1821 Etty's The Arrival of Cleopatra in Cilicia (also known as The Triumph of Cleopatra) was a critical success. The painting featured nude figures, and over the following years Etty painted further nudes in biblical, literary and mythological settings. All but one of the 15 paintings Etty exhibited in the 1820s included at least one nude figure. While some nudes existed in private collections, England had no tradition of nude painting and the display and distribution of nude material to the public had been suppressed since the 1787 Proclamation for the Discouragement of Vice. Etty was the first British artist to specialise in the nude, and the reaction of the lower classes to these paintings caused concern throughout the 19th century. Although his portraits of male nudes were generally well received, many critics condemned his repeated depictions of female nudity as indecent. ## Composition Youth on the Prow, and Pleasure at the Helm was inspired by a passage in Thomas Gray's poem The Bard. The theme of The Bard was the English king Edward I's conquest of Wales, and a curse placed by a Welsh bard upon Edward's descendants after he ordered the execution of all bards and the eradication of Welsh culture. 
Etty used a passage Gray intended to symbolise the seemingly bright start to the disastrous reign of Edward's great-great-grandson Richard II. Etty chose to illustrate Gray's words literally, creating what has been described as "a poetic romance". Youth and Pleasure depicts a small gilded boat. Above the boat, a nude figure representing Zephyr blows on the sails. Another nude representing Pleasure lies on a large bouquet of flowers, loosely holding the helm of the boat and allowing Zephyr's breeze to guide it. A nude child blows bubbles, which another nude on the prow of the ship, representing Youth, reaches to catch. Naiads, again nude, swim around and clamber on the boat. Although the seas are calm, a "sweeping whirlwind" is forming on the horizon, with a demonic figure within the storm clouds. (Deterioration and restoration means this demonic figure is now barely visible.) The intertwined limbs of the participants were intended to evoke the sensation of transient and passing pleasure, and to express the themes of female sexual appetites entrapping innocent youth, and the sexual power women hold over men. Etty said of his approach to the text that he was hoping to create "a general allegory of Human Life, its empty vain pleasures—if not founded on the laws of Him who is the Rock of Ages." While Etty felt that the work conveyed a clear moral warning about the pursuit of pleasure, this lesson was largely lost upon its audiences. When Etty exhibited the completed painting at the Royal Academy Summer Exhibition in 1832, it was shown untitled, with the relevant six lines from The Bard attached; writers at the time sometimes referred to it by its incipit of Fair Laughs the Morn. By the time of Etty's death in 1849, it had acquired its present title of Youth on the Prow, and Pleasure at the Helm. ## Versions The final version of Youth and Pleasure was painted between 1830 and 1832, but Etty had been contemplating a painting on the theme since 1818–19. In 1822 he had exhibited an early version at the British Institution titled A Sketch from One of Gray's Odes (Youth on the Prow); in this version the group of figures on the prow is reversed, and the swimmers around the boat are absent. Another rough version of the painting also survives, similar to the 1832 version but again with the figures on the prow reversed. This version was exhibited at a retrospective of Etty's work at the Society of Arts in 1849; it is dated 1848 but this is likely to be a misprint of 1828, making it a preliminary study for the 1832 painting. Although it received little notice when first exhibited, the 1822 version provoked a strong reaction from The Times: > We take this opportunity of advising Mr. Etty, who got some reputation for painting "Cleopatra's Galley", not to be seduced into a style which can gratify only the most vicious taste. Naked figures, when painted with the purity of Raphael, may be endured: but nakedness without purity is offensive and indecent, and on Mr. Etty's canvass is mere dirty flesh. Mr. Howard, whose poetical subjects sometimes require naked figures, never disgusts the eye or mind. Let Mr. Etty strive to acquire a taste equally pure: he should know, that just delicate taste and pure moral sense are synonymous terms. An oil sketch attributed to Etty, given to York Art Gallery in 1952 by Judith Hare, Countess of Listowel and entitled Three Female Nudes, is possibly a preliminary study by Etty for Youth and Pleasure, or a copy by a student of the three central figures. 
Art historian Sarah Burnage considers both possibilities unlikely, as neither the arrangement of figures, the subject matter, nor the sea serpent approaching the group appears to relate to the completed Youth and Pleasure, and considers it more likely to be a preliminary sketch for a now-unknown work. ## Reception Youth on the Prow, and Pleasure at the Helm met with a mixed reception on exhibition, and while critics generally praised Etty's technical ability, there was a certain confusion as to what the painting was actually intended to represent and a general feeling that he had seriously misunderstood what The Bard was about. The Library of the Fine Arts felt "in classical design, anatomical drawing, elegance of attitude, fineness of form, and gracefulness of grouping, no doubt Mr. Etty has no superior", and while "the representation of the ideas in the lines quoted [from The Bard] are beautifully and accurately expressed upon the canvas" they considered "the ulterior reference of the poet [to the destruction of Welsh culture and the decline of the House of Plantagenet] was entirely lost sight of, and that, if this be the nearest that Art can approach in conveying to the eye the happy exemplification of the subject which Gray intended, we fear we must give up the contest upon the merits of poetry and painting." Similar concerns were raised in The Times, which observed that it was "Full of beauty, rich in colouring, boldly and accurately drawn, and composed with a most graceful fancy; but the meaning of it, if it has any meaning, no man can tell", pointing out that although it was intended to illustrate Gray it "would represent almost as well any other poet's fancies." The Examiner, meanwhile, took issue with the cramped and overladen boat, pointing out that the characters "if not exactly jammed together like figs in a basket, are sadly constrained for want of room", and also complained that the boat would not in reality "float half the weight which is made to press upon it." Other reviewers were kinder; The Gentleman's Magazine praised Etty's ability to capture "the beauty of the proportion of the antique", noting that in the central figures "there is far more of classicality than is to be seen in almost any modern picture", and considered the overall composition "a most fortunate combination of the ideality of Poetry and the reality of Nature". The Athenæum considered it "a poetic picture from a very poetic passage", praising Etty for "telling a story which is very difficult to tell with the pencil". The greatest criticism of Youth and Pleasure came from The Morning Chronicle, a newspaper which had long disliked Etty's female nudes. It complained "no decent family can hang such sights against their wall", and condemned the painting as an "indulgence of what we once hoped a classical, but which are now convinced, is a lascivious mind", commenting "the course of [Etty's] studies should run in a purer channel, and that he should not persist, with an unhallowed fancy, to pursue Nature to her holy recesses. He is a laborious draughtsman, and a beautiful colourist; but he has not taste or chastity of mind enough to venture on the naked truth." The reviewer added "we fear that Mr. E will never turn from his wicked ways, and make himself fit for decent company." ## Legacy Youth on the Prow, and Pleasure at the Helm was purchased at the time of its exhibition by Robert Vernon for his important collection of British art.
(The price Vernon paid for Youth and Pleasure is not recorded, although Etty's cashbook records a partial payment of £250, so it is likely to have been a substantial sum.) Vernon later purchased John Constable's The Valley Farm, planning to hang it in the place then occupied by Youth and Pleasure. This decision caused Constable to comment "My picture is to go into the place—where Etty's "Bumboat" is at present—his picture with its precious freight is to be brought down nearer to the nose." Vernon presented his collection to the nation in 1847, and his 157 paintings, including Youth and Pleasure, entered the National Gallery. When Samuel Carter Hall was choosing works to illustrate his newly launched The Art Journal, he considered it important to promote new British artists, even if it meant illustrations which some readers considered pornographic or offensive. In 1849 Hall secured reproduction rights to the paintings Vernon had given to the nation and soon published and widely distributed an engraving of the painting under the title Youth and Pleasure, describing it as "of the very highest class". Needled by repeated attacks from the press on his supposed indecency, poor taste and lack of creativity, Etty changed his approach after the response to Youth on the Prow, and Pleasure at the Helm. He exhibited over 80 further paintings at the Royal Academy alone, and remained a prominent painter of nudes, but from this time made conscious efforts to reflect moral lessons. He died in November 1849 and, while his work enjoyed a brief boom in popularity, interest in him declined over time, and by the end of the 19th century all of his paintings had fallen below their original prices. In 1949 the painting was transferred from the National Gallery to the Tate Gallery, where it remains. Although Youth and Pleasure is one of Etty's best-known paintings, it remains controversial, and Dennis Farr's 1958 biography of Etty describes it as "singularly inept". It was one of five works by Etty chosen for Tate Britain's landmark Exposed: The Victorian Nude exhibition in 2001–02, and also formed part of a major retrospective of Etty's work at the York Art Gallery in 2011–12.
549,318
Macaroni penguin
1,167,698,802
Species of bird
[ "Birds described in 1837", "Birds of Antarctica", "Birds of Patagonia", "Birds of islands of the Atlantic Ocean", "Birds of subantarctic islands", "Birds of the Indian Ocean", "Eudyptes", "Fauna of Heard Island and McDonald Islands", "Fauna of the Crozet Islands", "Fauna of the Prince Edward Islands", "Penguins", "Vulnerable fauna of Australia" ]
The macaroni penguin (Eudyptes chrysolophus) is a species of penguin found from the Subantarctic to the Antarctic Peninsula. One of six species of crested penguin, it is very closely related to the royal penguin, and some authorities consider the two to be a single species. It bears a distinctive yellow crest, from which its name is derived. Its face and upperparts are black and sharply delineated from the white underparts. Adults weigh on average 5.5 kg (12 lb) and are 70 cm (28 in) in length. The male and female are similar in appearance; the male is slightly larger and stronger with a relatively larger bill. Like all penguins, it is flightless, with a streamlined body and wings stiffened and flattened into flippers for a marine lifestyle. Its diet consists of a variety of crustaceans, mainly krill, as well as small fish and cephalopods; the species consumes more marine life annually than any other species of seabird. These birds moult once a year, spending about three to four weeks ashore, before returning to the sea. Numbering up to 100,000 individuals, the breeding colonies of the macaroni penguin are among the largest and densest of all penguin species. After spending the summer breeding, penguins disperse into the oceans for six months; a 2009 study found that macaroni penguins from Kerguelen travelled over 10,000 km (6,200 mi) in the central Indian Ocean. With about 18 million individuals, the macaroni penguin is the most numerous penguin species. Widespread declines in populations have been recorded since the mid-1970s and their conservation status is classified as vulnerable. ## Taxonomy The macaroni penguin was described from the Falkland Islands in 1837 by German naturalist Johann Friedrich von Brandt. It is one of six or so species in the genus Eudyptes, collectively known as crested penguins. The genus name is derived from the Ancient Greek words eu "good", and dyptes "diver". The specific name chrysolophus is derived from the Greek words chryse "golden", and lophos "crest". The common name was recorded from the early 19th century in the Falkland Islands. English sailors apparently named the species for its conspicuous yellow crest, which they likened to the extravagant plumage of the then-fashionable "macaroni" style of dress. Molecular clock evidence using DNA suggests the macaroni penguin diverged from its closest relative, the royal penguin (Eudyptes schlegeli), around 1.5 million years ago. The two have generally been considered different species, but the close similarity of their DNA sequences has led some, such as Australian ornithologists Les Christidis and Walter Boles, to treat the royal penguin as a subspecies of the macaroni penguin. The two species are very similar in appearance; the royal penguin has a white face instead of the usually black face of the macaroni penguin. Interbreeding with the Indo-Pacific subspecies of the southern rockhopper penguin (E. chrysocome filholi) has been reported at Heard and Marion Islands, with three hybrids recorded there by a 1987–88 Australian National Antarctic Research Expedition. ## Description The macaroni penguin is a large, crested penguin, similar in appearance to other members of the genus Eudyptes. An adult bird has an average length of around 70 cm (28 in); the weight varies markedly depending on time of year and sex. Males average from 3.3 kg (7 lb) after incubating, or 3.7 kg (8 lb) after moult to 6.4 kg (14 lb) before moult, while females average 3.2 kg (7 lb) after to 5.7 kg (13 lb) before moult.
Among standard measurements, the thick bill (from the gape) measures 7 to 8 cm (2.8 to 3.1 in), the culmen being around a centimetre less. The wing, from the shoulder to the tip, is around 20.4 cm (8.0 in) and the tail is 9–10 cm (3.5–3.9 in) long. The head, chin, throat, and upper parts are black and sharply demarcated against the white under parts. The black plumage has a bluish sheen when new and brownish when old. The most striking feature is the yellow crest that arises from a patch on the centre of the forehead, and extends horizontally backwards to the nape. The flippers are blue-black on the upper surface with a white trailing edge, and mainly white underneath with a black tip and leading edge. The large, bulbous bill is orange-brown. The iris is red and a patch of pinkish bare skin is found from the base of the bill to the eye. The legs and feet are pink. The male and female are similar in appearance; males tend to be slightly larger. Males also bear relatively larger bills, which average around 6.1 cm (2.4 in) compared to 5.4 cm (2.1 in) in females; this feature has been used to tell the sexes apart. Immature birds are distinguished by their smaller size, smaller, duller-brown bill, dark grey chin and throat, and absent or underdeveloped head plumes, often just a scattering of yellow feathers. The crest is fully developed in birds aged three to four years, a year or two before breeding age. Macaroni penguins moult once a year, a process in which they replace all of their old feathers. They spend around two weeks accumulating fat before moulting because they do not feed during the moult, as they cannot enter the water to forage for food without feathers. The process typically takes three to four weeks, which they spend sitting ashore. Once finished, they go back to sea and return to their colonies to mate in the spring. Overall survival rates are poorly known; the successful return of breeding adults at South Georgia Island varied between 49% and 78% over three years, and around 10% of those that did return did not breed the following year. ## Distribution and habitat A 1993 review estimated that the macaroni was the most abundant species of penguin, with a minimum of 11,841,600 pairs worldwide. Macaroni penguins range from the Subantarctic to the Antarctic Peninsula; at least 216 breeding colonies at 50 sites have been recorded. In South America, macaroni penguins are found in southern Chile, the Falkland Islands, South Georgia and the South Sandwich Islands, and South Orkney Islands. They also occupy much of Antarctica and the Antarctic Peninsula, including the northern South Shetland Islands, Bouvet Island, the Prince Edward and Marion islands, the Crozet Islands, the Kerguelen Islands, and the Heard and McDonald Islands. While foraging for food, groups will range north to the islands off Australia, New Zealand, southern Brazil, Tristan da Cunha, and South Africa. ## Ecology ### Feeding The diet of the macaroni penguin consists of a variety of crustaceans, squid and fish; the proportions that each makes up vary with locality and season. Krill, particularly Antarctic krill (Euphausia superba), account for over 90% of food during breeding season. Cephalopods and small fish such as the marbled rockcod (Notothenia rossii), painted notie (Lepidonotothen larseni), Champsocephalus gunneri, the lanternfish species Krefftichthys anderssoni, Protomyctophum tenisoni and P. normani become more important during chick-rearing. 
Like several other penguin species, the macaroni penguin sometimes deliberately swallows small (10- to 30-mm-diameter) stones; this behaviour has been speculated to aid in providing ballast for deep-sea diving, or to help grind food, especially the exoskeletons of crustaceans which are a significant part of its diet. Foraging for food is generally conducted on a daily basis, from dawn to dusk when they have chicks to feed. Overnight trips are sometimes made, especially as the chicks grow older; a 2008 study that used surgically implanted data loggers to track the movement of the birds showed the foraging trips become longer once the chick-rearing period is over. Birds venture out for 10–20 days during incubation and before the moult. Macaroni penguins are known to be the largest single consumer of marine resources among all of the seabirds, with an estimated take of 9.2 million tonnes of krill a year. Outside the breeding season, macaroni penguins tend to dive deeper, longer, and more efficiently during their winter migration than during the summer breeding season. Year round, foraging dives usually occur during daylight hours, but winter dives are more constrained by daylight due to the shorter days. Foraging distance from colonies has been measured at around 50 km (31 mi) at South Georgia, offshore over the continental shelf, and anywhere from 59 to 303 kilometres (37 to 188 mi) at Marion Island. Macaroni penguins normally forage at depths of 15 to 70 m (49 to 230 ft), but have been recorded diving down to 100 m (330 ft) on occasions. Some night foraging does occur, but these dives are much shallower, ranging from only 3 to 6 m (9.8 to 19.7 ft) in depth. Dives rarely exceed two minutes in duration. All dives are V-shaped, and no time is spent at the sea bottom; about half the time on a foraging trip is spent diving. Birds have been calculated as catching from 4 to 16 krill or 40 to 50 amphipods per dive. ### Predators The macaroni penguin's predators consist of birds and aquatic mammals. The leopard seal (Hydrurga leptonyx), Antarctic fur seal (Arctocephalus gazella), Subantarctic fur seal (A. tropicalis), and killer whale (Orcinus orca) hunt adult macaroni penguins in the water. Macaroni colonies suffer comparatively low rates of predation if undisturbed; predators generally only take eggs and chicks that have been left unattended or abandoned. Skua species, the snowy sheathbill (Chionis alba), and kelp gull (Larus dominicanus) prey on eggs, and skuas and giant petrels also take chicks and sick or injured adult birds. ## Life history Like most other penguin species, the macaroni penguin is a social animal in its nesting and its foraging behaviour; its breeding colonies are among the largest and most densely populated. Scientist Charles Andre Bost found that macaroni penguins nesting at Kerguelen dispersed eastwards over an area exceeding 3 million km². Fitted with geolocation sensors, the 12 penguins studied travelled over 10,000 km (6,200 mi) during the six- to seven-month study period and spent their time largely within a zone 47–49°S and 70–110°E in the central Indian Ocean, not coming ashore once. This area, known as the Polar Frontal Zone, was notable for the absence of krill. Living in colonies results in a high level of social interaction between birds, which has led to a large repertoire of visual, as well as vocal, displays. These behaviours peak early in the breeding period, and colonies particularly quieten when the male macaroni penguins are at sea.
Agonistic displays are those which are intended to confront or drive off or, alternatively, appease and avoid conflict with other individuals. Macaroni penguins, particularly those on adjacent nests, may engage in 'bill-jousting'; birds lock bills and wrestle, each trying to unseat the other, as well as batter with flippers and peck or strike its opponent's nape. Submissive displays include the 'slender walk', where birds move through the colony with feathers flattened, flippers moved to the front of the body, and head and neck hunched, and general hunching of head and neck when incubating or standing at the nest. ### Courtship and breeding Female macaroni penguins can begin breeding at around five years of age, while the males do not normally breed until at least six years old. Females breed at a younger age because the male population is larger. The surplus of male penguins allows the female penguins to select more experienced male partners as soon as the females are physically able to breed. Commencing a few days after females arrive at the colony, sexual displays are used by males to attract partners and advertise their territory, and by pairs once together at the nest site and at changeover of incubation shifts. In the 'ecstatic display', a penguin bows forward, making loud throbbing sounds, and then extends its head and neck up until its neck and beak are vertical. The bird then waves its head from side to side, braying loudly. Birds also engage in mutual bowing, trumpeting, and preening. Monitoring of pair fidelity at South Georgia has shown around three-quarters of pairs will breed together again the following year. Adult macaroni penguins typically begin to breed late in October, and lay their eggs in early November. The nest itself is a shallow scrape in the ground which may be lined with some pebbles, stones, or grass, or nestled in a clump of tussock grass (on South Georgia Island). Nests are densely packed, ranging from around 66 cm apart in the middle of a colony to 86 cm at the edges. A fertile macaroni penguin will lay two eggs each breeding season. The first egg to be laid weighs 90–94 g (3.2–3.3 oz), 61–64% the size of the 145–155 g (5.1–5.5 oz) second, and is extremely unlikely to survive. The two eggs together weigh 4.8% of the mother's body weight; the composition of an egg is 20% yolk, 66% albumen, and 14% shell. Like those of other penguin species, the shell is relatively thick to minimise risk of breakage, and the yolk is large, which is associated with chicks born in an advanced stage of development. Some of the yolk remains at hatching and is consumed by the chick in its first few days. The fate of the first egg is mostly unknown, but studies on the related royal penguin and erect-crested penguin show the female tips the egg out when the larger second egg is laid. The task of incubating the egg is divided into three roughly equal sessions of around 12 days each over a five-week period. The first session is shared by both parents, followed by the male returning to sea, leaving the female alone to tend the egg. Upon the male's return, the female goes off to sea and does not return until the chick has hatched. Both sexes fast for a considerable period during breeding; the male fasts for 37 days after arrival until he returns to sea for around 10 days before fasting while incubating eggs and young for another 36 days, and the female fasts for 42 days from her arrival after the male until late in the incubation period. 
Both adults lose 36–40% of their body weight during this period. The second egg hatches around 34 days after it is laid. Macaroni penguins typically leave their breeding colony by April or May to disperse into the ocean. From the moment the chick hatches, the male macaroni penguin cares for it. For about 23 to 25 days, the male protects its offspring and helps to keep it warm, since only a few of its feathers have grown in by this time. The female brings food to the chick every one to two days. When they are not being protected by the adult male penguins, the chicks form crèches to keep warm and stay protected. Once their adult feathers have grown in at about 60 to 70 days, they are ready to go out to sea on their own. ## Conservation The population of macaroni penguins is estimated at around 18 million mature individuals; a substantial decline has been recorded in several locations. This includes a 50% reduction in the South Georgia population between the mid-1970s and the mid-1990s, and the disappearance of the species from Isla Recalada in Southern Chile. This decline of the overall population in the last 30 years has resulted in the classification of the species as globally Vulnerable by the IUCN Red List of Threatened Species. Long-term monitoring programs are underway at a number of breeding colonies, and many of the islands that support breeding populations of this penguin are protected reserves. Heard Island and the McDonald Islands, which support breeding populations of the macaroni penguin, are a World Heritage Site. The macaroni penguin may also be affected by commercial fishing and marine pollution. A 2008 study suggests the abilities of female penguins to reproduce may be negatively affected by climate- and fishing-induced reductions in krill density.
31,755,785
X-Cops
1,170,115,544
null
[ "2000 American television episodes", "Crossover television", "Found footage television episodes", "Reality television series parodies", "Television episodes about werewolves", "Television episodes set in Los Angeles", "Television episodes written by Vince Gilligan", "The X-Files (season 7) episodes" ]
"X-Cops" is the twelfth episode of the seventh season of the American science fiction television series The X-Files. Directed by Michael Watkins and written by Vince Gilligan, the installment serves as a "Monster-of-the-Week" story—a stand-alone plot unconnected to the overarching mythology of The X-Files. Originally aired in the United States by the Fox network on February 20, 2000, "X-Cops" received a Nielsen rating of 9.7 and was seen by 16.56 million viewers. The episode earned positive reviews from critics, largely due to its unique presentation, as well as its use of humor. Since its airing, the episode has been named among the best episodes of The X-Files by several reviewers. The X-Files centers on Federal Bureau of Investigation (FBI) special agents Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson), who work on cases linked to the paranormal, called X-Files. Mulder is a believer in the paranormal; the skeptical Scully was initially assigned to debunk his work, but the two have developed a deep friendship. In this episode, Mulder and Scully are interviewed for the Fox reality television program Cops during an X-Files investigation. Mulder, hunting what he believes to be a werewolf, discovers that the monster terrorizing people instead feeds on fear. While Mulder embraces the publicity of Cops, Scully is more uncomfortable about appearing on national television. "X-Cops" serves as a fictional crossover with Cops. Gilligan, who was inspired to write the script because he enjoyed Cops, pitched the idea several times to series creator Chris Carter and the series writing staff, receiving a mixed reception; when the crew felt that the show was nearing its end with the conclusion of the seventh season, Gilligan was given the green light because it was seen as an experiment. In the tradition of the real-life Cops program, the entire episode was shot on videotape and featured several members of the crew of Cops. The episode has been thematically analyzed for its use of postmodernism and its presentation as reality television. ## Plot The episode begins with the opening of Cops before cutting to Keith Wetzel (Judson Mills), a deputy with the Los Angeles County Sheriff's Department. He and the Cops film crew are at Willow Park, California, a fictional high-crime district of Los Angeles. Wetzel visits the home of Mrs. Guererro (Perla Walter), who has reported a monster in the neighborhood. Wetzel, expecting to find a dog, follows the creature around a corner but runs back screaming for the crew to flee. They return to Wetzel's police car, but before they can escape, it is overturned by an unseen entity. When backup arrives on the scene, an injured Wetzel claims that he encountered gang members. The police soon discover and surround Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson), believing them to be criminals, before they realize that the pair are FBI agents. Mulder and Scully claim that they are investigating an alleged werewolf that killed a man in the area during the last full moon. According to Mulder, the entity that they are tracking only comes out at night. Scully is irritated by the constant presence of the Cops crew, but Mulder is enthused at the prospect of paranormal proof being presented to a national television audience. The agents and the police interview Mrs. Guerrero, who describes the monster to Ricky (Solomon Eversol), a sketch artist. To Mulder's surprise, Mrs. Guerrero describes not a werewolf, but the horror movie villain Freddy Krueger. 
Ricky expresses a fear of being alone in the dangerous neighborhood and is found a short time later with serious slashes across his chest. Mulder and Scully find a pink fingernail at the scene. The group also meets Steve and Edy (J. W. Smith and Curtis C.), a couple who witnessed the incident but did not see Ricky's attacker, saying that it appeared he was being attacked by nothing. Scully shows the couple the fingernail, which they identify as belonging to Chantara Gomez (Maria Celedonio), a prostitute. When the agents track down Chantara, whose face is pixelated, she claims that her pimp attacked Ricky and fears that he will kill her. She pleads with the agents for protection. Mulder and Scully have Wetzel guard Chantara while they assist the police in the raid of a crack house. The two are drawn back outside when Wetzel encounters the entity, wildly shooting at it. Inside a police car, the agents find Chantara with her neck broken. When Mulder questions Wetzel, he admits that he thought he saw the "wasp man", a monster his older brother told him about when he was a kid. Though other deputies express skepticism, an officer finds flattened bullets, indicating they physically impacted something, though no trace is found of what they struck. Mulder formulates a theory that the entity changes its form to correspond with its victims' worst fears. Wetzel, Ricky, and Chantara all expressed fear shortly before their run-ins with the entity; it was visible to them, but not to others. The agents think that Steve and Edy may be the entity's next target because they were in the vicinity of Ricky's attack. They head to their house, only to find the couple in the middle of an argument. After Edy expresses fear of a separation from Steve, the couple reconciles. Based on this situation, Mulder proposes that the entity ignored Steve and Edy because they did not exhibit mortal fear. Mulder believes that the entity travels from victim to victim like a contagion. At his request, Scully performs an autopsy on Chantara's body at the morgue. During the procedure, a conversation between Scully and the coroner's assistant (Tara Karsian) causes the latter to panic about a Hantavirus outbreak. The entity suddenly kills her with the disease. When Mulder discusses the death with Scully, he realizes that Wetzel is in danger of being revisited by the entity. The agents and police return to the crack house, where the entity has trapped an injured Wetzel in an upstairs room. The agents are unable to enter the room until dawn comes, when the entity disappears and spares Wetzel's life. After the incident is over, Scully expresses her sympathies to Mulder that being filmed by a national television crew did not provide the public exposure to paranormal phenomena that he had hoped for. Mulder remains hopeful, noting that it all comes down to how the production crew edits the footage together. ## Production ### Conception and writing "X-Cops" was inspired by the Fox television program Cops, which Vince Gilligan (the writer of this episode) describes as a "great slice of Americana." Gilligan first pitched the idea during the show's fourth season to the X-Files writing staff and series creator Chris Carter, the latter of whom was concerned that the concept was too "goofy". 
Fellow writer and producer Frank Spotnitz concurred, although he was more uncomfortable with Gilligan's idea of using videotape instead of film; the show's production crew liked to use film to create "effective scares", and Spotnitz worried that shooting exclusively on videotape would be too challenging as the series would be unable to cut and edit the final product. During the show's seventh season, Carter relented. Many critics and fans believed, erroneously, that the seventh season of The X-Files would be the show's last. Similarly, Carter felt that the show had nearly run its course, and seeing the potential in Gilligan's idea, he decided to green-light the episode. Gilligan noted that "the longer we've been on the air, the more chances we've taken. We try to keep the show fresh ... I think [Carter] appreciates that". "X-Cops" was not Gilligan's first attempt at writing a crossover. Almost three years before, he had developed a script that would have taken the form of an Unsolved Mysteries episode, with unknown actors playing Mulder and Scully and Robert Stack appearing in his role as narrator. This script was later abandoned and re-written as the fifth-season episode "Bad Blood". Gilligan reasoned that, because Mulder and Scully would appear on a nationally syndicated television series, the episode's main monster could not be shown, only "hinted at". Gilligan and the writing staff applied methods previously used in the psychological horror film The Blair Witch Project (1999) to show as little of the monster as possible while still making the episode scary. Michael Watkins, who directed the episode, hired several real sheriff's deputies as extras. Casting director Rick Millikan later explained that the group needed "actors who could pull off the believability in just normal off-the-cuff conversation of cops on the job." During the crack house scene, real SWAT team members were hired to break down the doors. Actor Judson Mills later explained that, because there were few cameramen and owing to the manner in which the episode was filmed, "people just behaved as if we were [real] cops. I had other cops waving and giving their signals or heads-up the way they do amongst themselves. It was quite funny". ### Filming and post-production When members of The X-Files staff asked Cops producer John Langley about a potential crossover, the crew of Cops liked the idea and "offered their total cooperation." Gilligan even attended the shooting of an episode. Inspired by Cops, Watkins' directing style was unique for this episode, and he even directly filmed some of the scenes himself. He also brought in Bertram van Munster, a cameraman for Cops, to shoot scenes to give the finished product an authentic feel. In an attempt at realism, other staff members from Cops participated in the production: Daniel Emmet and John Michael Vaughn, two Cops crew members, were featured during the episode's climax. During rehearsals, Watkins kept the cameras away from the set, so that when videotaping commenced, the cameramen's unfamiliarity would create the "unscripted" feel of a documentary. In addition, a Cops editor was brought in "to insert the trademark blur over the faces of innocent bystanders." "X-Cops" was filmed in Venice, Los Angeles and Long Beach, California. The episode was one of two X-Files episodes to take place in real time (that is, the events in the episode are presented at the same rate that the audience experiences them), with the other being the sixth season episode "Triangle". 
Due to the nature of the shooting schedule, the episode was relatively cheap to film and production moved at a quick pace. Initially, the actors struggled with the new cinéma vérité style of the episode, and several takes were needed for scenes during the first few days, but these problems receded as taping progressed. On one night, three-and-a-half pages of script were shot in only two hours; the normal rate for The X-Files was three to four pages a day. Both Watkins and Mills likened the filming process to live theater, with the former noting, "In a sense, we were doing theater: we were doing an act or half of a whole act in one take." Anderson called the performance "fun" to shoot, and highlighted "Scully getting pissed off at the camera crew" as her favorite part to play. She further noted that "it was interesting to make the adjustment to playing something more real than you might play for television." Although recorded to create the illusion that events occurred in real time, the episode employed several camera tricks and effects. For the opening shot, a "surreptitious cut" helped to replace actor Judson Mills with a stunt person when the cop car is overturned by the monster. Usually, an episode of The X-Files required editors to make between 800 and 1,200 film cuts, but "X-Cops" only required 45. During post-production, a minor argument broke out between Vince Gilligan and the network. Originally, Gilligan did not want the X-Files logo to appear at any time during the episode. He stressed that he wanted "X-Cops" to feel like an "episode of Cops that happened to involve Mulder and Scully." The network, fearing that people would not understand that "X-Cops" was actually an episode of The X-Files, vetoed this idea. A compromise was eventually reached: the episode would open with the Cops theme song, but The X-Files credits would also appear after the opening scene. In addition, the commercial bumpers would feature red and blue lights flashing across The X-Files logo while dialogue is heard in the background, in a similar fashion to the Cops logo. The episode also features a disclaimer at the beginning informing viewers that the episode is a special installment of The X-Files to prevent them from thinking that the show "has been preempted this week by Cops". ## Themes Several critics, such as M. Keith Booker, have argued that "X-Cops" is an example of The X-Files delving into the postmodern school of thought. Postmodernism has been described as a "style and concept in the arts [that] is characterized by the self-conscious use of earlier styles and conventions [and the] mixing of different artistic styles and media". According to Booker, the episode helps to "identify the series as postmodern [due to its] cumulative summary of modern American culture", or, in this case, the show's merging with another popular television series. The episode also serves as an example of the series' "self-consciousness in terms of its status as a (fictional) television" show. According to Jeremy Butler's book Television Style, the episode, along with many other found footage-type movies and shows, helps to suggest that what is being promoted as "live TV" is actually a series of events that have already unfolded in the past. Even though the episode is "self-conscious", "reflexive", and humorous, the real-time aspect of "X-Cops" "heighten[s] the sense of realism within the episode", and makes the result come across as hyper-realistic. 
This sense of realism is further heightened by the near absence of music in the episode; aside from the title theme, Mark Snow's soundtrack is not heard. Sarah Stegall proposed that the episode works on two separate layers. On the top-most superficial layer, it functions as an outright parody, mimicking both the stylings of The X-Files as well as Cops. On the other layer, she notes that "it's a serious look at validation." Throughout the episode, Mulder is attempting to capture the monster on camera and expose it to a national audience. All of the witnesses to the monster function as unreliable narrators: a Hispanic woman with "a history of medications"; a black, homosexual "Drama Queen"; a prostitute with a drug problem; a "terrified morgue attendant"; and Deputy Wetzel. Stegall argues that all of these characters are from "the wrong side of the tracks" and would not be accepted, let alone believed, by "a placid, middle-class society". In the end, the only reliable witness is the camera, but Stegall points out that "the camera, suspiciously, never quite manages to find [the monster]." Furthermore, she reasons that Mulder's own biggest fear is failing to find the monster responsible for the murders. To support this idea, she points out that not only does Mulder fail to capture any evidence of the paranormal, but he also fails before a live audience on national television. ## Broadcast and reception "X-Cops" was first broadcast in the United States on the Fox network on February 20, 2000. Watched by 16.56 million viewers, according to the Nielsen ratings system, it was the second-highest rated episode of the season, after "The Sixth Extinction". It received a Nielsen rating of 9.7, with a 14 share among viewers, meaning that 9.7 percent of all households in the United States, and 14 percent of people watching television at that time, tuned into the episode. It originally aired in the United Kingdom on Sky1 on June 4, 2000, receiving 850,000 viewers, making it the channel's third-most watched program for that week. On May 13, 2003, "X-Cops" was released on DVD as part of the complete seventh-season box set. Initial critical reaction to the episode was generally positive, although a few reviewers felt that it was a gimmick. Eric Mink of the Daily News described it as "nifty" and "exceptionally clever." While noting that "The X-Files hasn't exactly smoked this season", Kinney Littlefield from the Orange County Register called "X-Cops" a stand-out episode from the seventh season. Stegall wrote of Vince Gilligan: "top honors must go to Vince Gilligan, whose work on The X-Files is consistently the sharpest and most consistent." Tom Kessenich, in his book Examinations, gave the episode a largely positive review. He called the entry "one of the most entertaining episodes of the season" and "60 minutes of pure fun". Rich Rosell from Digitally Obsessed awarded the episode 5 out of 5 stars and wrote that "some might view it as a stunt, but having Mulder and Scully be part of a spot-on Cops! parody (complete with full "Bad Boys, bad boys" intro) is just brilliant stuff". Not all reviews were positive. Kenneth Silber from Space.com gave the episode a negative review and wrote, "'X-Cops' is a wearisome episode. Watching the agents and police repeatedly run through the darkened streets of Los Angeles after an unseen—and uninteresting—foe evokes merely a sense of futility. The use of the format of the Fox TV show Cops provides some transient novelty but little drama or humor." 
Later reviews praised the episode as one of the show's best installments. Robert Shearman and Lars Pearson, in their book Wanting to Believe: A Critical Guide to The X-Files, Millennium & The Lone Gunmen, rated the episode four stars out of five. The two wrote that the episode was "funny, it's clever, and it's actually quite frightening". Shearman and Pearson also wrote positively of the faux documentary style, likening it to The Blair Witch Project. Zack Handlen of The A.V. Club awarded the episode an "A–" and called it "witty, inventive, and intermittently spooky". He argued that the episode was a late-series "gimmick episode" and compared it to the last few seasons of House; although he reasoned that House relied on gimmicks to prop itself up, "X-Cops" is "the work of a creative team which may be running out of ideas, but still has enough gas in the tank to get us where we need to go." Furthermore, Handlen felt that the show used the Cops format to the best of its ability and that many of the scenes were humorous, startling, or a combination of both. Since its airing, "X-Cops" has appeared on several best-of lists. Montreal's The Gazette named it the eighth best X-Files episode, writing that it "pushed the show to new post-modern heights." Rob Bricken from Topless Robot named it the fifth funniest X-Files episode, and Starpulse described it as the funniest X-Files episode, writing that when the series "did comedy, it was probably the funniest drama ever on television". UGO included the episode's main antagonist in its list of the "Top 11 X-Files Monsters," noting that the creature is a "perfect [Monster-of-the-Week] if only because the monster in question is a living, breathing metaphor, a never-seen specter that shifts to fit the fears of the person witnessing it." Narin Bahar from SFX named the episode one of the "Best Sci-Fi TV Mockumentaries" and wrote, "Whether you see this as a brilliantly post-modern merging of fact and fiction or shameless cross-promotion of two of the Fox Network's biggest TV shows, there's lots of nods to the real Cops show in this episode". Bahar praised the scene featuring the terrified lady telling Mulder that Freddy Krueger attacked her—calling the scene the "best in-joke"—and applauded the two series' cohesion.
20,260,944
L 20e α-class battleship
1,159,816,641
Cancelled battleship design of the German Imperial Navy
[ "Battleship classes", "Battleships of the Imperial German Navy", "Proposed ships of Germany", "World War I battleships of Germany" ]
L 20e α was a design for a class of battleships to be built in 1918 for the German Kaiserliche Marine (Imperial Navy) during World War I. Design work on the class of battleship to succeed the Bayern-class battleships began in 1914, but the outbreak of World War I in July 1914 led to these plans being shelved. Work resumed in early 1916 and lessons from the Battle of Jutland, fought later that year, were incorporated into the design. Reinhard Scheer, the commander of the fleet, wanted larger main guns and a higher top speed than earlier vessels, to combat the latest ships in the British Royal Navy. A variety of proposals were submitted, with armament ranging from the same eight 38 cm (15 in) guns of the Bayern class to eight 42 cm (16.5 in) guns. Work on the design was completed by September 1918, but by then Germany's declining war situation and the reallocation of resources to support the U-boat campaign meant the ships would never be built. The ships would have been significantly larger than the preceding Bayern-class battleships, at 238 m (780 ft 10 in) long, compared to 180 m (590 ft 7 in) for the earlier ships. The L 20e α class would have been significantly faster, with a top speed of 26 knots (48 km/h; 30 mph), compared to the 21-knot (39 km/h; 24 mph) maximum of the Bayerns, and would have been the first German warships to have mounted guns larger than 38 cm. ## Background Just before the start of the 20th century, Germany embarked on a naval expansion to challenge British control of the seas, under the direction of Vizeadmiral (Vice Admiral) Alfred von Tirpitz. Over the following decade, Germany built some two dozen pre-dreadnought battleships of the Brandenburg, Kaiser Friedrich III, Wittelsbach, Braunschweig and Deutschland classes. The dreadnought revolution disrupted German plans but Tirpitz nevertheless continued his program, securing the construction of a further twenty-one dreadnought battleships by 1914, with the Nassau, Helgoland, Kaiser, König, and Bayern classes. Before World War I broke out in July 1914, the German Kaiserliche Marine (Imperial Navy) had begun planning the battleship design for the 1916 construction program, which would follow the Bayern-class battleships that were then under construction. The Bayerns were armed with a main battery of 38-centimeter (15 in) guns in four twin-gun turrets. The British had begun building the similarly armed Queen Elizabeth and Revenge-class battleships; the Germans intended the 1916 battleship design to be superior to these, and designs were drawn up with an armament of ten or twelve 38 cm guns. The designs included versions with the standard twin-gun turrets favored by the German navy, along with variants with both twin and quadruple turrets similar to the French Normandie-class battleships that had been laid down in 1913. The outbreak of war led to the abandonment of the plans. By 1916, work had resumed on new battleship designs and, in April, the first three proposals were submitted: the L 1, L 2 and L 3 designs, which were similar to the Ersatz Yorck-class battlecruisers then also under development. The battleships were the same size as the battlecruisers, and L 1 and L 3 had the same armament of eight 38 cm guns (L 2 would have mounted ten of those guns), but they would have carried heavier armor and had a top speed of 25 to 26 knots (46 to 48 km/h; 29 to 30 mph), compared to the 29 to 29.5 knots (53.7 to 54.6 km/h; 33.4 to 33.9 mph) of the Ersatz Yorcks. 
Work on the designs continued at a slow pace, with thought given to armament alternatives, including batteries of eight or ten 38 cm or eight 42 cm (16.5 in) guns. ## Development and cancellation In January 1916, Vizeadmiral Reinhard Scheer became commander in chief of the High Seas Fleet. Following the Battle of Jutland on 31 May – 1 June 1916, Scheer pushed for new, more powerful battleships, in concert with Kaiser Wilhelm II's call for what he referred to as the "Einheitsschiff" (unified ship) that combined the armor and firepower of battleships and the high speed of battlecruisers. Another faction in the naval command, led by Admiral Eduard von Capelle, the State Secretary of the Reichsmarineamt (RMA—Imperial Navy Office), opposed the idea and favored traditional, differentiated capital ship designs. Scheer demanded that the new ships should have guns of 42 cm caliber, an armored belt 350 mm (14 in) thick and be capable of speeds of up to 32 knots (59 km/h; 37 mph), all on a displacement of up to 50,000 metric tons (49,000 long tons). The new 42 cm gun was designed by 29 December 1916 and was approved on 11 September 1918, though none were built. By the end of 1916, design work was complete on three proposals to meet Scheer's specifications: L 20b, L 21b and L 22c, all of which displaced around 42,000 metric tons (41,000 long tons). L 20b would have carried eight 42 cm guns, while L 21b and L 22c would have carried ten or eight 38 cm guns, respectively. After the beginning of unrestricted submarine warfare in February 1917, Capelle argued that capital ship construction should not be halted in favor of U-boat construction. Work on L 20b continued, as the naval command preferred the 42 cm gun variant, with a refined version submitted on 21 August 1917 as L 20e; a new design, L 24, was also submitted, which was similar to L 20e but was slightly longer, faster by 1.5 knots (2.8 km/h; 1.7 mph), had two extra boilers and a correspondingly wider funnel. It also differed in the placement of the torpedo armament. The L 20 design placed the tubes in the hull below the waterline, while the L 24 proposal used above-water launchers. Displacement for the designs was fixed at 45,000 t (44,000 long tons). Both ships had a top speed of only 23 knots (43 km/h; 26 mph), which was unacceptable to Scheer. By October 1917, the L 20e and L 24e designs were refined into the L 20e α and L 24e α versions; these displaced 44,500 t (43,800 long tons) and 45,000 t respectively. Secondary batteries were reduced to twelve guns, compared to the sixteen guns of the Bayern class. L 24e α also had an additional pair of torpedo tubes, mounted above the waterline, compared to L 20e α. The armor layout for both designs was similar to that of the Bayern class. The proposals were submitted to the naval command in January 1918; Wilhelm II continued to stress the importance of the "Einheitsschiff" concept and he suggested that the speed of the design might be significantly increased by removing the forward superfiring turret and the submerged torpedo tubes. For his part, Scheer asked whether triple or quadruple turrets might be used to save enough weight for speed to be increased to 30 knots (56 km/h; 35 mph), which delayed completion of the design until mid-1918. By that time, the studies that had been completed suggested that the weight savings would be minimal and that the more crowded triple or quadruple turrets would reduce the rate of fire too much. 
Two more proposals were completed in mid-1918; the first was almost the same as the L 20e α variant and the second was similar but had only six main battery guns and a top speed of 28 knots (52 km/h; 32 mph). By 11 September 1918, the L 20e α variant was selected as the basis for the next battleship to be built. During the design process, it was decided that the utmost concern was that the ships could be built and placed into service quickly. The ships were to discard the use of broadside belt armor below the waterline, the attachment of which was an extremely time-consuming process. It was believed that the higher speed of the class—26 knots (48 km/h; 30 mph)—would make up for the vulnerability to torpedo attack and make the armor unnecessary. The ships were never built, primarily because the shipyard capacity available that late in the war had largely been diverted to support the U-boat campaign. The work that would have been necessary to design and test the new 42 cm turret clashed with U-boat construction, which had become the priority of the Navy. Krupp, the firm that had been awarded the contract to conduct the testing, informed the RMA that design work on the new turret would have to wait and Capelle accepted the news without much objection. The RMA filed a report dated 1 February 1918, which stated that capital ship construction had stopped, primarily due to the shifting priorities to the U-boat war. Though the ships of the class were never built, the naval historian Timothy Mulligan notes that with "the unresolved dilemma of conflicting design concepts and overly ambitious demands in battleship characteristics ..." that the L 20e α design represented, "... the Imperial Navy bequeathed a dubious legacy to its successors". ## Characteristics ### General characteristics and machinery The L 20e α design was 238 m (781 ft) long at the waterline, with a beam of 33.5 m (110 ft) and a draft of 9 m (30 ft). Displacement was to be approximately 44,500 metric tons (43,800 long tons) as designed and up to 49,500 metric tons (48,700 long tons) fully loaded. The ships were intended to have the typical single tripod foremast mounted atop the large, forward superstructure and a lighter pole main mast aft of the funnel. They were to have been powered by either two or four sets of steam turbines driving four shafts, which were to have a combined output of 100,000 shaft horsepower (75,000 kW). The steam plant consisted of six oil-fired and sixteen coal-fired boilers trunked into a large funnel. Bunkerage was 3,000 metric tons (2,953 long tons) of coal and 2,000 metric tons (1,968 long tons) of fuel oil. Externally, the ships were similar to the Ersatz Yorck-class battlecruisers. ### Armament The main battery was arranged in four twin-gun turrets, as in the preceding Bayern class, in a superfiring arrangement on the center line; the aft pair of turrets were separated by engine rooms. The four turrets each mounted two 42 cm SK L/45 guns, for a total of eight guns on the broadside. The 42 cm gun fired a 1,000-kilogram (2,200 lb) shell out to 33,000 m (36,000 yd) at the maximum elevation of 30 degrees. The estimated muzzle velocity was 800 meters per second (2,600 ft/s). The ships were to have been armed with a secondary battery of twelve 15 cm (5.9 in) SK L/45 guns mounted in casemates in the main deck around the superstructure. The anti-aircraft battery was to have consisted of either eight 8.8 cm (3.5 in) SK L/45 guns or eight 10.5 cm (4.1 in) SK L/45 guns. 
Four of these would have been mounted on either side of the forward conning tower on the upper deck, and the other four would have been abreast of the rear superfiring turret on the main deck. The design was to have been equipped with three submerged torpedo tubes, either 60 or 70 cm (23.6 or 27.6 in) in diameter. One tube was placed in the bow, the other two on either beam to the rear of the engine rooms. ### Armor The ships had a 350 mm (13.8 in) armored belt running from slightly forward of the fore barbette to slightly aft of the fourth barbette. Aft of the rearmost turret the belt was reduced to 300 mm (11.8 in), though it did not extend all the way to the stern. In the forward part of the ship, the belt was reduced to 250 mm (9.8 in) and the bow received only splinter protection in the form of 30 mm (1.2 in) thick plate. The belt began 35 cm (13.8 in) below the waterline and extended to 195 cm (76.8 in) above it. Directly above the main belt was a 250 mm thick strake of armor plating which extended up to the upper deck. The ships' armored deck was to have been 50 mm (2 in) thick forward, increasing to 50–60 mm (2–2.4 in) amidships and 50–120 mm (2–4.7 in) aft. Additional horizontal protection forward consisted of a forecastle deck that was 20 to 40 mm (0.8 to 1.6 in) thick. The ships were also protected by a torpedo bulkhead that was 50–60 mm thick. A sloped 30 mm thick splinter bulkhead, intended to protect against shell fragments, extended from the top of the torpedo bulkhead up to the upper deck. The barbettes were also 350 mm thick on the front and sides, decreasing to 250 mm on the rear. Their lower portions, which were protected by the belt armor, were significantly reduced to 100 mm (3.9 in). The main gun turrets had 350 mm faces, 250 mm sides, 305 mm (12 in) rears, and 150 to 250 mm (5.9 to 9.8 in) roofs. The secondary guns were protected with 170 mm (6.7 in) of armor plate. The forward conning tower had 350 to 400 mm (13.8 to 15.7 in) of armor protection and the aft conning tower received just 250 mm of side protection.
15,139,813
Robert Peake the Elder
1,169,064,385
English painter (c. 1551–1619)
[ "1550s births", "1619 deaths", "16th-century English painters", "17th-century English painters", "Court painters", "English male painters", "People from Lincolnshire" ]
Robert Peake the Elder (c. 1551–1619) was an English painter active in the later part of Elizabeth I's reign and for most of the reign of James I. In 1604, he was appointed picture maker to the heir to the throne, Prince Henry; and in 1607, serjeant-painter to King James I – a post he shared with John De Critz. Peake was the only English-born painter of a group of four artists whose workshops were closely connected. The others were De Critz, Marcus Gheeraerts the Younger, and the miniature painter Isaac Oliver. Between 1590 and about 1625, they specialised in brilliantly coloured, full-length "costume pieces" that were unique to England at the time. It is not always possible to attribute authorship between Peake, De Critz, Gheeraerts and their assistants with certainty. ## A family of painters Peake married Elizabeth Beckwith, probably in 1579. He is often called "the elder", to distinguish him from his son, the painter and print seller William Peake (c. 1580–1639), and from his grandson, Sir Robert Peake (c. 1605–67), who followed his father into the family print-selling business. In the accounts for Prince Henry's funeral, Robert Peake is called "Mr Peake the elder painter", and William Peake, "Mr Peake the younger painter". Peake's grandson Sir Robert Peake (sometimes wrongly called his son) was knighted by King Charles I during the English Civil War. The Parliamentarians captured him after their siege of Basing House, which was under his command. ## Career ### Early life and work Peake was born to a Lincolnshire family in about 1551. He began his training on 30 April 1565 under Laurence Woodham, who lived at the sign of "The Key" in Goldsmith's Row, Westcheap. He was apprenticed, three years after the miniaturist Nicholas Hilliard, to the Goldsmiths' Company in London. He became a freeman of the company on 20 May 1576. His son William later followed in his father's footsteps as a freeman of the Goldsmiths' Company and a portrait painter. Peake's training would have been similar to that of John de Critz and Marcus Gheeraerts the Younger, who may have been pupils of the Flemish artist Lucas de Heere. Peake is first recorded as a painter in 1576 in the pay of the Office of the Revels, the department that oversaw court festivities for Elizabeth I. When Peake began practising as a portrait painter is uncertain. According to art historian Roy Strong, he was "well established" in London by the late 1580s, with a "fashionable clientele". Payments made to him for portraits are recorded in the Rutland accounts at Belvoir in the 1590s. A signed portrait from 1593, known as the "Military Commander", shows Peake's early style. Other portraits have been grouped with it on the basis of similar lettering. Its three-quarter-length portrait format is typical of the time. ### Painter to Prince Henry In 1607, after the death of Leonard Fryer, Peake was appointed serjeant-painter to King James I, sharing the office with John De Critz, who had held the post since 1603. The role entailed the painting of original portraits and their reproduction as new versions, to be given as gifts or sent to foreign courts, as well as the copying and restoring of portraits by other painters in the royal collection. The serjeant-painters also undertook decorative tasks, such as the painting of banners and stage scenery. Parchment rolls of the Office of the Works record that De Critz oversaw the decorating of royal houses and palaces. 
Since Peake's work is not recorded there, it seems as if De Critz took responsibility for the more decorative tasks, while Peake continued his work as a royal portrait painter. However, Peake and Paul Isackson painted the cabins, carvings, and armorials on the ship the Prince Royal in 1611. In 1610, Peake was described as "painter to Prince Henry", the sixteen-year-old prince who was gathering around him a significant cultural salon. Peake commissioned a translation of Books I-V of Sebastiano Serlio's Architettura, which he dedicated to the prince in 1611. Scholars have deduced from payments made to Peake that his position as painter to Prince Henry led to his appointment as serjeant-painter to the king. The payments were listed by the Prince's household officer Sir David Murray as disbursements from the Privy Purse to "Mr Peck". On 14 October 1608, Peake was paid £7 for "pictures made by His Highness' command"; and on 14 July 1609, he was paid £3 "for a picture of His Highness which was given in exchange for the King's picture". At about the same time, Isaac Oliver was paid £5.10s.0d. for each of three miniatures of the prince. Murray's accounts reveal, however, that the prince was paying more for tennis balls than for any picture. Peake is also listed in Sir David Murray's accounts for the period between 1 October 1610 and 6 November 1612, drawn up to the day on which Henry, Prince of Wales, died, possibly of typhoid fever, at the age of eighteen: "To Mr Peake for pictures and frames £12; two great pictures of the Prince in arms at length sent beyond the seas £50; and to him for washing, scouring and dressing of pictures and making of frames £20.4s.0d". Peake is listed in the accounts for Henry's funeral under "Artificers and officers of the Works" as "Mr Peake the elder painter". He was allotted seven yards of mourning cloth, plus four for his servant. Also listed is "Mr Peake the younger painter", meaning Robert's son William, who was allotted four yards of mourning cloth. After the prince's death, Peake moved on to the household of Henry's brother, Charles, Duke of York, the future Charles I of England. Accounts for 1616 call Peake the Prince's painter, recording that he was paid £35 for "three several pictures of his Highness". On 10 July 1613, he was paid £13.6s.8d. by the vice-chancellor of the University of Cambridge, "in full satisfaction for Prince Charles his picture", for a full-length portrait which is still in the Cambridge University Library. ### Death Peake died in 1619, in the middle of October, as his will shows. Until relatively recently, it was believed that Peake died later. Erna Auerbach put his death at around 1625, and the catalogue for the 1972 exhibition The Age of Charles I at the Tate Gallery suggested Peake was active as late as 1635. His will was made on 10 October 1619 and proved on the 16th. The date of his burial is unknown because the registers of his parish church, St Sepulchre-without-Newgate, were destroyed in the Great Fire of London. This was a time of several deaths in the artistic community. Nicholas Hilliard had died in January 1619; Anne of Denmark, who had done so much to patronise the arts, in March; and the painter William Larkin, Peake's neighbour, in April or May. Though James I reigned until 1625, art historian Roy Strong considers that the year 1619 "can satisfactorily be accepted as the terminal date of Jacobean painting". 
## Paintings It is difficult to attribute and date portraits of this period because painters rarely signed their work, and their workshops produced portraits en masse, often sharing standard portrait patterns. Some paintings, however, have been attributed to Peake on the basis of the method of inscribing the year and the sitter's age on his documented portrait of a "military commander" (1593), which reads: "M.BY.RO.| PEAKE" ("made by Robert Peake"). Art historian Ellis Waterhouse, however, suspected that the letterer may have worked for more than one studio. ### Procession Picture The painting known as Queen Elizabeth going in procession to Blackfriars in 1601, or simply The Procession Picture, is now often accepted as the work of Peake. The attribution was made by Roy Strong, who called it "one of the great visual mysteries of the Elizabethan age". It is an example of the convention, prevalent in the later part of her reign, of painting Elizabeth as an icon, portraying her as much younger and more triumphant than she was. As Strong puts it, "[t]his is Gloriana in her sunset glory, the mistress of the set piece, of the calculated spectacular presentation of herself to her adoring subjects". George Vertue, the eighteenth-century antiquarian, called the painting "not well nor ill done". Strong reveals that the procession was connected to the marriage of Henry Somerset, Lord Herbert, and Lady Anne Russell, one of the queen's six maids of honour, on 16 June 1600. He identifies many of the individuals portrayed in the procession and shows that instead of a litter, as was previously assumed, Queen Elizabeth is sitting on a wheeled cart or chariot. Strong also suggests that the landscape and castles in the background are not intended to be realistic. In accordance with Elizabethan stylistic conventions, they are emblematic, here representing the Welsh properties of Edward Somerset, Earl of Worcester, to which his son Lord Herbert was the heir. The earl may have commissioned the picture to celebrate his appointment as Master of the Queen's Horse in 1601. Peake clearly did not paint the queen, or indeed the courtiers, from life but from the "types" or standard portraits used by the workshops of the day. Portraits of the queen were subject to restrictions, and from about 1594 there seems to have been an official policy that she always be depicted as youthful. In 1594, the Privy Council ordered that unseemly portraits of the queen be found and destroyed, since they caused Elizabeth "great offence". The famous Ditchley portrait (c. 1592), by Marcus Gheeraerts the Younger, was used as a type, sometimes called the "Mask of Youth" face-pattern, for the remainder of the reign. It is clear that Gheeraerts' portrait provided the pattern for the queen's image in the procession picture. Other figures also show signs of being traced from patterns, leading to infelicities of perspective and proportion. ### Full-length portraits At the beginning of the 1590s, the full-length portrait came into vogue and artistic patrons among the nobles began to add galleries of such paintings to their homes as a form of cultural ostentation. Peake was one of those who met the demand. He was also among the earliest English painters to explore the full-length individual or group portrait with active figures placed in a natural landscape, a style of painting that became fashionable in England. 
As principal painter to Prince Henry, Peake seems to have been charged with showing his patron as a dashing young warrior. In 1603, he painted a double portrait, now in the Metropolitan Museum, New York, of the prince and his boyhood friend John Harington, son of Lord Harington of Exton. The double portrait is set outdoors, a style introduced by Gheeraerts in the 1590s, and Peake's combination of figures with animals and landscape also foreshadows the genre of the sporting picture. The country location and recreational subject lend the painting an air of informality. The action is natural to the setting, a fenced deer-park with a castle and town in the distance. Harington holds a wounded stag by the antlers as Henry draws his sword to deliver the coup de grâce. The prince wears at his belt a jewel of St George slaying the dragon, an allusion to his role as defender of the realm. His sword is an attribute of kingship, and the young noble kneels in his service. The stag is a fallow deer, a non-native species kept at that time in royal parks for hunting. A variant of this painting in the Royal Collection, painted c. 1605, features Robert Devereux, 3rd Earl of Essex, in the place of John Harington and displays the Devereux arms. Also in 1603, Peake painted his first portrait of James I's only surviving daughter, Elizabeth. This work, like the double portrait, for which it might be a companion piece, appears to have been painted for the Harington family, who acted as Elizabeth's guardians from 1603 to 1608. In the background of Elizabeth's portrait is a hunting scene echoing that of the double portrait, and two ladies sit on an artificial mound of a type fashionable in garden design at the time. Peake again painted Henry outdoors in about 1610. In this portrait, now at the Royal Palace of Turin, the prince looks hardly older than in the 1603 double portrait; but his left foot rests on a shield bearing the three-feathers device of the Prince of Wales, a title he did not hold until 1610. Henry is portrayed as a young man of action, about to draw a jewel-encrusted sword from its scabbard. The portrait was almost certainly sent to Savoy in connection with a marriage proposed in January 1611 between Henry and the Infanta Maria, daughter of Charles Emmanuel I, Duke of Savoy. James I's daughter Elizabeth was also a valuable marriage pawn. She too was offered to Savoy, as a bride for the Prince of Piedmont, the heir of Charles Emmanuel. The exchange of portraits as part of royal marriage proposals was the practice of the day and provided regular work for the royal painters and their workshops. Prince Henry commissioned portraits from Peake to send to the various foreign courts with which marriage negotiations were underway. The prince's accounts show, for example, that the two portraits Peake painted of him in arms in 1611–12 were "sent beyond the seas". A surviving portrait from this time shows the prince in armour, mounted on a white horse and pulling the winged figure of Father Time by the forelock. Art historian John Sheeran suggests this is a classical allusion that signifies opportunity. The old man carries Henry's lance and plumed helmet; and scholar Chris Caple points out that his pose is similar to that of Albrecht Dürer's figure of death in Knight, Death and the Devil (1513). He also observes that the old man was painted later than other components of the painting, since the bricks of the wall show through his wings. 
When the painting was restored in 1985, the wall and the figure of Time were revealed to modern eyes for the first time, having been painted over at some point in the seventeenth century by other hands than Peake's. The painting has also been cut down, the only original canvas edge being that on the left.[^1] ### Lady Elizabeth Pope Peake's portrait of Lady Elizabeth Pope may have been commissioned by her husband, Sir William Pope, to commemorate their marriage in 1615. Lady Elizabeth is portrayed with her hair loose, a symbol of bridal virginity. She wears a draped mantle—embroidered with seed pearls in a pattern of ostrich plumes—and a matching turban. The mantle knotted on one shoulder was worn in Jacobean court masques, as the costume designs of Inigo Jones indicate. The sitter's near-nudity, however, makes the depiction of an actual masque costume unlikely. Loose hair and the classical draped mantle also figure in contemporary personifications of abstract concepts in masques and paintings. Yale art historian Ellen Chirelstein argues that Peake is portraying Lady Elizabeth as a personification of America, since her father, Sir Thomas Watson, was a major shareholder in the Virginia Company. ### Assessment In 1598, Francis Meres, in his Palladis Tamia, included Peake on a list of the best English artists. In 1612, Henry Peacham wrote in The Gentleman's Exercise that his "good friend Mr Peake", along with Marcus Gheeraerts, was outstanding "for oil colours". Ellis Waterhouse suggested that the genre of elaborate costume pieces was as much a decorative as a plastic art. He notes that these works, the "enamelled brilliance" of which has become apparent through cleaning, are unique in European art and deserve respect. They were produced chiefly by the workshops of Peake, Gheeraerts the Younger, and De Critz. Sheeran detects the influence of Hilliard's brightly patterned and coloured miniatures in Peake's work and places Peake firmly in the "iconic tradition of late Elizabethan painting". He employed techniques from European Mannerism and followed the artificial and decorative style characteristic of Elizabethan painting. By the time he was appointed serjeant-painter in 1607, his compelling and semi-naive style was somewhat old-fashioned compared with that of De Critz and other contemporaries. However, Peake's portraits of Prince Henry are the first to show his subject in "action" poses. Sheeran believes that Peake's creativity waned into conservatism, his talent "dampened by mass production". He describes Peake's Cambridge portrait, Prince Charles as Duke of York, as poorly drawn, with a lifeless pose, in a stereotyped composition that "confirms the artist's reliance on a much repeated formula in his later years". Art historian and curator Karen Hearn, on the other hand, praises the work as "magnificent" and draws attention to the naturalistically rendered note pinned to the curtain. Peake painted the portrait to mark Charles's visit to Cambridge on 3 and 4 March 1613, during which he was awarded an M.A.—four months after the death of his brother. Depicting Prince Charles wearing the Garter and Lesser George, Peake here reverts to a more formal, traditional style of portraiture. The note pinned to a curtain of cloth of gold, painted in trompe-l'œil fashion, commemorates Charles's visit in Latin. X-rays of the portrait reveal that Peake painted it over another portrait. Pentimenti, or signs of alteration, can be detected: for example, Charles's right hand originally rested on his waist. 
## Gallery • Unrestored version of Henry, Prince of Wales, on Horseback ## See also - Artists of the Tudor court [^1]: Caple, Objects, 88–91.
986,986
Hurricane Hattie
1,173,748,043
Category 5 Atlantic hurricane in 1961
[ "1961 Atlantic hurricane season", "Belize City", "Category 5 Atlantic hurricanes", "Hurricanes in Belize", "Hurricanes in Colombia", "Hurricanes in Guatemala", "Hurricanes in Honduras", "Retired Atlantic hurricanes" ]
Hurricane Hattie was the strongest and deadliest tropical cyclone of the 1961 Atlantic hurricane season, reaching peak intensity as a Category 5 hurricane. The ninth tropical storm, seventh hurricane, fifth major hurricane, and second Category 5 of the season, Hattie originated from an area of low pressure that strengthened into a tropical storm over the southwestern Caribbean Sea on October 27. Moving generally northward, the storm quickly became a hurricane and later a major hurricane the following day. Hattie then turned westward west of Jamaica and strengthened into a Category 5 hurricane, with maximum sustained winds of 165 mph (270 km/h). It weakened to Category 4 before making landfall south of Belize City on October 31. The storm turned southwestward and weakened rapidly over the mountainous terrain of Central America, dissipating on November 1. Hattie first affected the southwestern Caribbean, where it produced hurricane-force winds and caused one death on San Andrés Island. It was initially forecast to continue north and strike Cuba, prompting evacuations on the island. While turning west, Hattie dropped heavy rainfall of up to 11.5 in (290 mm) on Grand Cayman. The country of Belize, at the time known as British Honduras, sustained the worst damage from the hurricane. The former capital, Belize City, was buffeted by strong winds and flooded by a powerful storm surge. The territory governor estimated that 70% of the buildings in the city had been damaged, leaving more than 10,000 people homeless. The destruction was so severe that it prompted the government to relocate inland to a new city, Belmopan. Overall, Hattie caused about \$60 million in losses and 307 deaths in the territory. Although damage from Hattie was heavier than that from a hurricane in 1931 that killed 2,000 people, the death toll from Hattie was considerably lower as a result of early warnings. Elsewhere in Central America, Hattie killed 11 people. ## Meteorological history For a few days toward the end of October 1961, a low-pressure area persisted in the western Caribbean Sea, north of the Panama Canal Zone. On October 25, an upper-level anticyclone moved over the low; the next day, a trough over the western Gulf of Mexico provided favorable outflow for the disturbance. At 0000 UTC on October 27, a ship nearby reported southerly winds of 46 mph (74 km/h). Later that day, the airport on San Andrés Island reported easterly winds of 60 mph (97 km/h). The two observations confirmed the presence of a closed wind circulation, centered about 70 miles (110 km) southeast of San Andrés, or 155 mi (249 km) east of the Nicaraguan coast; as a result, the Miami Weather Bureau began issuing advisories on the newly formed Tropical Storm Hattie. After being classified, Hattie moved steadily northward, passing very near or over San Andrés Island. A station on the island recorded a pressure of 991 mbar (29.3 inHg) and sustained winds of 80 mph (130 km/h), which indicated that Hattie had reached hurricane status. Late on October 28, a Hurricane Hunters flight encountered a much stronger hurricane, with winds of 125 mph (201 km/h) in a small area near the center. At the time, gale-force winds extended outward 140 mi (230 km) to the northeast and 70 miles (110 km) to the southwest. Early on October 29, a trough extending from Nicaragua to Florida was expected to allow Hattie to continue northward, based on climatology for similar hurricanes. Later that day, Hattie was forecast to be an imminent threat to the Cayman Islands and western Cuba. 
Around that time, a strengthening ridge to the north turned the hurricane northwestward, which spared the Greater Antilles but increased the threat to Central America. With the strengthening ridge to its north, Hattie began restrengthening after retaining the same intensity for about 24 hours. Initially, forecasters at the Miami Weather Bureau predicted the storm to turn northward again. Late on October 29, the center of the hurricane passed about 90 miles (140 km) southwest of Grand Cayman, at which time the interaction between Hattie and the ridge to its north produced squally winds of around 30 mph (48 km/h) across Florida. Early on October 30, the Hurricane Hunters confirmed the increase in intensity, reporting winds of 140 mph (230 km/h). The storm's minimum central pressure continued to drop throughout the day, reaching 924 mbar (27.3 inHg) by 1300 UTC; a lower pressure of 920 mbar (27 inHg) was computed at 1700 UTC that day, based on a flight-level reading from the Hurricane Hunters. Hattie later curved toward the west-southwest, passing between the Cayman Islands and the Swan Islands. Late on October 30, Hattie attained peak winds of 165 mph (266 km/h), along with a minimum central pressure of 914 mbar (27.0 inHg), about 190 mi (310 km) east of the border of Mexico and British Honduras. This made Hattie a Category 5 hurricane on the Saffir-Simpson Hurricane Scale, and at the time it was the latest hurricane in the calendar year on record to reach that status, until a reanalysis of the 1932 season revealed that Hurricane Fourteen had attained a similar intensity on November 5, six days later in the year than Hattie. Additionally, Hattie was the strongest October hurricane in the northwest Caribbean until Hurricane Mitch in 1998. Hattie maintained much of its intensity as it continued toward the coast of British Honduras. After passing over several small offshore islands, the hurricane made landfall a short distance south of Belize City on October 31, with an eyewall of about 25 miles (40 km) in diameter. Based on a post-season analysis, it was determined that Hattie had weakened to winds of 150 mph (240 km/h) before moving ashore. During landfall, a ship anchored between Belize City and Stann Creek registered a minimum central pressure of 924 mbar (27.3 inHg). The hurricane deteriorated rapidly over land, dissipating on November 1 as it moved into the mountains of Guatemala. During its dissipation, Tropical Storm Simone was developing off the Pacific coast of Guatemala; however, later analysis concluded that Simone was not a tropical cyclone at all. Later, Tropical Storm Inga formed from a complex interaction with the remnants of Hattie and nearby disturbed weather. ## Preparations Upon initiating advisories on Hattie, the Miami Weather Bureau noted the potential for heavy rainfall and flash flooding in the southwestern Caribbean. The advisories recommended that small vessels across the region remain in harbor. Initially, the hurricane was predicted to move near or through the Cayman Islands, Jamaica, and Cuba. As a result, Cuban officials advised residents in low-lying areas to evacuate. Hurricane Hattie first posed a threat to the Yucatán Peninsula and British Honduras on October 30 when it turned toward the area. Officials at the Miami Weather Bureau warned of the potential for high tides, strong winds, and torrential rainfall. The warnings allowed for extensive evacuations in high-risk areas. Most people in the capital, Belize City, were evacuated or moved to shelters, and a school was operated as a refuge. 
A hospital in the city was evacuated, and over 75% of the population of Stann Creek fled to safer locations. After Hattie made landfall, officials in Mexico ordered the closure of ports along the Isthmus of Tehuantepec. ## Impact Despite predictions for heavy rainfall in the southwestern Caribbean, the hurricane's movement was more northerly than expected, resulting in less precipitation along the Central American coast than anticipated. In its early developmental stages, Hattie struck San Andrés Island, located off the coast of eastern Nicaragua, with maximum sustained winds of 80 mph (130 km/h) and gusts of 104 mph (167 km/h). As the hurricane neared the island, the airport was closed due to tropical-storm-force winds. Rough seas and winds damaged private property and two hotels. Many palm tree plantations were devastated. The schooner Admirar, anchored in one of the island's bays, capsized during the storm. Overall, Hattie resulted in one death, fifteen injuries, and \$300,000 in damage (1961 USD) in San Andrés. The hurricane was the fourth on record to strike the island, and of the four was the only one to approach from the south. In the northwestern Caribbean, Hattie passed close to Grand Cayman with heavy rainfall. At least 11.5 inches (290 mm) of rain were reported on the island, including 7.8 inches (200 mm) in six hours. Winds on Grand Cayman were below hurricane force, and only minor damage occurred due to the rain. The interaction between Hattie and the ridge of high pressure to its north produced sustained winds of 20 mph (32 km/h) across most of Florida, with a gust of 72 mph (116 km/h) reported at Hillsboro Inlet Light; the winds caused some beach erosion in the state. The U.S. Weather Bureau issued a small craft warning for the west and east Florida coastlines, as well as northward to Brunswick, Georgia. Later, Hattie impacted various countries in Central America with flash floods, causing 11 deaths in Guatemala and one fatality in Honduras. The Swan Islands reported wind gusts just below hurricane force, resulting in minor damage and one injury. ### British Honduras Hurricane Hattie moved ashore in British Honduras with a storm tide of up to 14 feet (4.3 m) near Belize City, a city of 31,000 people located at sea level; its only defenses against the storm tide were a small seawall and a strip of swamp lands. The capital experienced high waves and a 10 ft (3 m) storm tide along its waterfront that reached the third story of some buildings. A trained observer estimated winds of over 150 mph (240 km/h), and winds in the territory were unofficially estimated as strong as 200 mph (320 km/h). When Hattie affected the area, most buildings in Belize City were wooden, and most of these were destroyed. Offshore, the hurricane heavily damaged 80% of the Belize Barrier Reef, although the reef recovered after the storm. High winds caused a power outage, downed trees across the region, and destroyed the roofs of many buildings. Governor Colin Thornley estimated that more than 70% of the buildings in the territory were damaged, and more than 10,000 people were left homeless. Some shelters set up before the storm were destroyed in the hurricane. The hurricane destroyed the wall at an insane asylum, which allowed the residents to escape. High waves damaged a prison, prompting officials to institute a "daily parole" program for the inmates. Hattie also flooded the Government House, washing away all records. 
All of Belize City was coated in a layer of mud and debris, and the majority of the city was destroyed or severely damaged, as was nearby Stann Creek. The hurricane left significant crop damage across the region, including \$2 million in citrus fruits and similar losses to timber, cocoa, and bananas. The year's production of sugar cane was also heavily damaged. About 70% of the territory's mahogany trees were downed, as were most citrus and grapefruit trees. The hurricane damaged several factories and oil rigs in the region. Damage throughout the territory totaled \$60 million (1961 USD), and a total of 307 deaths were reported; more than 100 of the fatalities were in Belize City, including 36 who evacuated to a British administration building that was later destroyed in the storm. The government of British Honduras considered Hurricane Hattie more damaging than a hurricane in 1931 that killed 2,000 people; the lower death toll of Hattie was due to advance warning. ## Aftermath After Hattie struck, officials in Belize City declared martial law. A manager of United Press International described Belize City as "nothing but a huge pile of matchsticks," and many roads were either flooded for days or covered with mud. Doctors provided typhoid vaccinations to 12,000 residents in two days to prevent the spread of the disease. Due to the high death toll, officials ordered mass cremations to stop additional disease from spreading. At the city's police station, workers provided fresh water and rice to storm victims. Many residents throughout British Honduras donated supplies to the storm victims, such that an airline manager described it as "taxing ... manpower and facilities." One airline allowed donations to be flown to Belize City at no cost. The city's three newspapers were unable to operate due to lack of power after the storm. By November 5, Belize City's post office had reopened on a limited basis, though all businesses initially remained closed. About 4,000 homeless residents from Stann Creek were moved by boat to the northern portion of the territory. Many homeless people from the Belize City area set up a tent city on bushland about 16 mi (26 km) inland, which was initially intended to be temporary. In December 1961, barracks were erected near a Red Cross Hospital to house the homeless in the camp. The site was named Hattieville and became a proper city, with utilities installed in the subsequent decade. About 200 British soldiers arrived from Jamaica to quell looting and maintain order. At least 20 people were arrested in the day after Hattie struck. The British government sent flights of aid to the territory containing food, clothing, and medical supplies. The House of Commons quickly passed a bill to provide £10,000 in aid. The Save the Children fund sent £1,000 to British Honduras, and the Mexican government sent three flights with food and medicine to the territory. Two American destroyers arrived in the country by November 2, reporting the need for assistance. The USS Antietam remained in port for weeks after the storm with six medical officers and six Marine helicopters. Four other ships sailed to the territory to provide 458,000 pounds (208,000 kg) of food. The United States government allocated about \$300,000 in assistance through the International Development Association. The Canadian government provided C\$75,000 worth of aid, including food, blankets, and medical supplies. In 1962, Jimmy Cliff released his breakthrough single, "Hurricane Hattie". 
By Hattie's first anniversary, private and public workers had repaired and rebuilt buildings affected by the storm. New hotels were constructed, and many stores were reopened. Prime Minister George Cadle Price successfully appealed for assistance from the British government, which ultimately provided £20 million in loans. In the days after the storm, the government announced plans to relocate the capital of British Honduras farther inland on higher ground. Work on the new capital, Belmopan, was completed in 1970. On the 44th anniversary of the hurricane in 2005, the government of Belize unveiled a monument in Belize City to recognize the victims of the hurricane. Due to the destruction and loss of life attributed to the hurricane, the name Hattie was retired by the World Meteorological Organization and will never again be used for an Atlantic hurricane; the name was replaced by Holly in 1965. ## See also - List of retired Atlantic hurricane names - List of Category 5 Atlantic hurricanes - List of wettest tropical cyclones in the Cayman Islands
20,914,714
Political history of Mysore and Coorg (1565–1760)
1,170,299,324
History of west-central peninsular India
[ "16th century in India", "16th century in politics", "17th century in India", "17th century in politics", "18th century in India", "18th century in politics", "Coorg", "History of Mysore", "Political history of Karnataka" ]
The political history of the region on the Deccan Plateau in west-central peninsular India (Map 1) that was later divided into Mysore state and Coorg province saw many changes after the fall of the Hindu Vijayanagara Empire in 1565. The rise of Sultan Haidar Ali in 1761 introduced a new period. At the height of the Vijayanagara Empire, the Mysore and Coorg region was ruled by diverse chieftains, or rajas ("little kings"). Each raja had the right to govern a small region, but also an obligation to supply soldiers and annual tribute for the empire's needs. After the empire's fall and the subsequent eastward move of the diminished ruling family, many chieftains tried to loosen their imperial bonds and expand their realms. Sensing opportunity amidst the new uncertainty, various powers from the north invaded the region. Among these were the Sultanate of Bijapur to the northwest, the Sultanate of Golconda to the northeast, the newly-formed Maratha empire farther northwest, and the major contemporary empire of India, the Mughal, which bounded all on the north. For much of the 17th century the tussles between the little kings and the big powers, and amongst the little kings, culminated in shifting sovereignties, loyalties, and borders. By the turn of the 18th century, the political landscape had become better defined: the northwestern hills were being ruled by the Nayaka rulers of Ikkeri, the southwestern—in the Western Ghats—by the Rajas of Coorg, the southern plains by the Wodeyar rulers of Mysore, all of which were Hindu dynasties; and the eastern and northeastern regions by the Muslim Nawabs of Arcot and Sira. Of these, Ikkeri and Coorg were independent, Mysore, although much-expanded, was formally a Mughal dependency, and Arcot and Sira, Mughal subahs (or provinces). Mysore's expansions had been based on unstable alliances. When the alliances began to unravel, as they did during the next half-century, decay set in, presided over by politically and militarily inept kings. The Mughal governor, Nawab of Arcot, in a display of the still remaining reach of a declining Mughal empire, raided the Mysore capital, Seringapatam, to collect unpaid taxes. The Raja of Coorg began a war of attrition over territory in Mysore's western bordering regions. The Maratha empire invaded and exacted concessions of land. In the chaotic last decade of this period, a little-known Muslim cavalryman, Haidar Ali, seized power in Mysore. Under him, and in the decades following, Mysore was to expand again. It was to match all of southern India in size, and to pose the last serious threat to the new rising power on the subcontinent, the English East India Company. A common feature of all large regimes in the region during the period 1565–1760 is increased military fiscalism. This mode of creating income for the state consisted of extraction of tribute payments from local chiefs under threat of military action. It differed both from the more segmentary modes of preceding regimes and the more absolutist modes of succeeding ones—the latter achieved through direct tax collection from citizens. Another common feature of these regimes is the fragmentary historiography devoted to them, making broad generalizations difficult. ## Poligars of Vijayanagara, 1565–1635 On 23 January 1565 the last Hindu empire in South India, the Vijayanagara Empire, was defeated by the combined forces of the Muslim states of Bijapur, Golconda, and Ahmadnagar in the Battle of Talikota. 
The battle was fought on the doab (interfluve, or tongue of land) between the Kistna river and its major left bank tributary, the Bhima, 100 miles (160 km) to the north of the imperial capital, Vijayanagara (Map 2). The invaders from the north later destroyed the capital, and the ruler's family escaped to Penukonda, 125 miles (201 km) to the southeast, where they established their new capital. Later, they moved another 175 miles (282 km) east-southeast to Chandragiri, not far from the coast, and survived there until 1635, their dwindling empire concentrating its resources on its eastern Tamil- and Telugu-speaking realms. According to historian Sanjay Subrahmanyam: " ... in the ten years following 1565, the imperial centre of Vijayanagara effectively ceased to be a power as far as the western reaches of the peninsula were concerned, leaving a vacuum that was eventually filled by Ikkeri and Mysore." In the heyday of their rule, the kings of Vijayanagara had granted tracts of land in their realm to vassal chiefs on the stipulation of an annual tribute and of military service during times of war. The chiefs in the richer, more distant, southern provinces were not controlled easily, and only a fraction of the tribute was collected from them. Overseen by a viceroy, titled Sri Ranga Raya and based in the island town of Seringapatam on the river Kaveri, 200 miles (320 km) south of the capital, the southern chiefs bore various titles. These included the Nayaka, assumed by the chiefs of Keladi in the northwestern hills, Basavapatna, and Chitaldroog in the north, Belur in the west, and Hegalvadi in the centre; the title Gowda, assumed by the chiefs of Ballapur and Yelahanka in the centre, and Sugatur in the east; and Wodeyar, assumed by the rulers of Mysore in the south. (Map 2.) The southern chiefs (sometimes called rajas, or "little kings") resisted on moral and political grounds as well. According to historian Burton Stein: > 'Little kings', or rajas, never attained the legal independence of an aristocracy from both monarchs and the local people whom they ruled. The sovereign claims of would-be centralizing, South Indian rulers and the resources demanded in the name of that sovereignty diminished the resources which local chieftains used as a kind of royal largess; thus centralizing demands were opposed on moral as well as on political grounds by even quite modest chiefs. These chiefs came to be called poligars, a British corruption of "Palaiyakkarar," Tamil: holder of "palaiya" or "baronial estate;" Kannada: palagararu. In 1577, more than a decade after the Battle of Talikota, Bijapur forces attacked again and overwhelmed all opposition along the western coast. They easily took Adoni, a former Vijayanagara stronghold, and subsequently attempted to take Penukonda, the new Vijayanagara capital. (Map 3.) They were, however, repulsed by an army led by the Vijayanagara ruler's father-in-law, Jagadeva Raya, who had travelled north for the engagement from his base in Baramahal. For his services, his territories within the crumbling empire were expanded out to the Western Ghats, the mountain range running along the southwestern coast of India; a new capital was established in Channapatna (Map 6.) Soon the Wodeyars of Mysore (present-day Mysore district) began to more openly disregard the Vijayanagara monarch, annexing small states in their vicinity. (Map 3) The chiefs of Ummattur attempted to do the same despite punitive raids by the Vijayanagara armies. 
Eventually, as a compromise, the son of an Ummattur chief was appointed the viceroy at Seringapatam. In 1644, the Mysore Wodeyars unseated the powerful Changalvas of Piriyapatna, becoming the dominant presence in the southern regions. (Map 6.) By this time the Vijayanagara empire was on its last legs. ## Bijapur, Marathas, Mughals, 1636–1687 In 1636, nearly 60 years after their defeat at Penukonda, the Sultans of Bijapur regrouped and invaded the kingdoms to their south. They did so with the blessing of the Mughal empire of northern India, whose tributary states they had newly become. They had the help also of a chieftain of the Maratha uplands of western India, Shahji Bhonsle, who was on the lookout for rewards of jagir land in the conquered territories, the taxes on which he could collect as an annuity. In the western-central poligar regions, the Nayakas of Keladi were easily defeated, but were able to buy back their lands from their Bijapur invaders. (Map 4.) Eastward, the Bijapur-Shahji forces took the gold-rich Kolar district in 1639, and Bangalore—a city founded a century earlier by Kempe Gowda I. Advancing down the Eastern Ghats, the mountains rising behind the coastal plains of southeastern India, they captured the historic towns of Vellore and Gingee. Returning north through the east-central maidan plain (average elevation 600 m (2,000 ft)), they gained possession of the towns of Ballapur, Sira, and the hill fortress of Chitaldroog. (See Map 4.) A new province, Carnatic-Bijapur-Balaghat, incorporating Kolar, Hoskote, Bangalore, and Sira, and situated above (or westwards of) the Eastern Ghats range, was added to the Sultanate of Bijapur and granted to Shahji as a jagir. The possessions below the Ghats, such as Gingee and Vellore, became part of another province, Carnatic-Bijapur-Payanghat, and Shahji was appointed its first governor. When Shahji died in 1664, his son Venkoji from his second wife, who had become the ruler of Tanjore much farther down the peninsula, inherited these territories. This did not sit well with Shahji's eldest son, from his first wife, Shivaji Bhonsle—a chieftain back in the Maratha uplands—who swiftly led an expedition southwards to claim his share. His quick victories resulted in a partition, whereby both the Carnatic-Bijapur provinces became his jagirs, and Tanjore was retained by Venkoji. (See Map 4.) The successes of Bijapur and Shivaji were being watched with some alarm by their suzerain, the Mughals. Emperor Aurangzeb, who had usurped the Mughal throne in 1659, soon set himself upon destroying the remaining Deccan sultanates. In 1686, the Mughals took Bijapur and, the following year, Golconda, capturing the latter's diamond mines. Before long, fast-moving Mughal armies were bearing down on all the former Vijayanagara lands. Bangalore, quickly taken by the Mughals from the Marathas, was sold to the Wodeyar of Mysore for 300,000 rupees. In 1687, a new Mughal province (or subah), the Province of Sira, was created with its capital at Sira city. Qasim Khan was appointed the first Mughal Faujdar Diwan (literally, "military governor"). ## Wodeyars of Mysore, 1610–1760 Although their own histories date the origins of the Wodeyars of Mysore (also "Odeyar", "Udaiyar", "Wodiyar", "Wadiyar", or "Wadiar", and, literally, "chief") to 1399, records of them go back no earlier than the early 16th century, when these poligars are first mentioned in a Kannada-language literary work. 
A petty chieftain, Chamaraja (now Chamaraja III), who ruled from 1513 to 1553 over a few villages not far from the Kaveri river, is said to have constructed a small fort and named it Mahisuranagara ("Buffalo Town"), from which Mysore gets its name. (Map 5.) The Wodeyar clan issued its first inscription during the chieftaincy of Timmaraja (now Timmaraja II), who ruled from 1553 to 1572. Towards the end of his rule, he is recorded to have owned 33 villages and fielded an army of 300 men. By the time of the short-lived incumbency of Timmaraja II's son, Chama Raja IV—who, well into his 60s, ruled from 1572 to 1576—the Vijayanagara Empire had been dealt its fatal blow. Before long, Chama Raja IV withheld payment of the annual tribute to the empire's viceroy at Seringapatam. The viceroy responded by attempting to arrest Chama Raja IV, failing, and letting the taxes remain unpaid. An outright military challenge to the empire would have to await the incumbency of Raja I, Chama Raja IV's eldest son, who became the Wodeyar in 1578. Raja I captured Seringapatam and, in a matter of days, moved his capital there on 8 February 1610. (Map 5.) During his rule, according to Burton Stein, his "chiefdom expanded into a major principality". In 1638, the reins of power fell into the hands of the 23-year-old Kanthirava Narasaraja I, who had been adopted a few months earlier by the widow of Raja I. Kanthirava was the first wodeyar of Mysore to create the symbols of royalty, such as a royal mint and coins named Kanthiraya (corrupted to "Canteroy") after himself. These remained a part of Mysore's "current national money" well into the 18th century. Catholic missionaries, who had arrived in the coastal areas of southern India—the southwestern Malabar coast, the western Kanara coast, and the southeastern Coromandel coast (also "Carnatic")—early in the 16th century, were not active in land-locked Mysore until halfway through the 17th. (Map 5.) The Mysore mission was established in Seringapatam in 1649 by Leonardo Cinnami, an Italian Jesuit from Goa. Expelled a few years later from Mysore on account of opposition in Kanthirava's court, Cinnami returned, toward the end of Kanthirava's rule, to establish missions in half a dozen locations. During his second stay, Cinnami obtained permission to convert Kanthirava's subjects to Christianity. He was successful mostly in the regions which were to become a part of the Madras Presidency of British India. According to one account, "Of a reported 1700 converts in the Mysore mission in the mid-1660s, a mere quarter were Kannadigas (Kannada language speakers), the rest being Tamil speakers from the western districts of modern-day Tamil Nadu, ..." After an unremarkable period of rule by short-lived incumbents, Kanthirava's 27-year-old great-nephew, Chikka Devaraja, became the new wodeyar in 1672. During his rule, centralized military power increased to an unprecedented degree for the region. (Map 5 and Map 7.) Although he introduced various mandatory taxes on peasant-owned land, Chikka Devaraja exempted his soldiers' land from these payments. The perceived inequity of this action, the unusually high taxes, and the intrusive nature of his regime, created widespread protests, which had the support of the wandering Jangama ascetics in the monasteries of the Lingayats, a monotheistic religious order that emphasizes a personal relationship with the Hindu god Shiva. According to D. R. 
Nagaraja, a slogan of the protests was: > Basavanna the Bull tills the forest land; Devendra gives the rains; > Why should we, the ones who grow crops through hard labour, pay taxes to the king? The king used the stratagem of inviting over 400 monks to a grand feast at the famous Shaiva centre of Nanjanagudu. Upon its conclusion, he presented them with gifts and directed them to exit one at a time through a narrow lane, where they were strangled by royal wrestlers who had been awaiting them. Around 1687, Chikka Devaraja purchased the city of Bangalore for Rs. 300,000 from Qasim Khan, the new Mughal governor of the Province of Sira. Continual strife with the Marathas led to an alliance with the Mughal emperor Aurangzeb (reigned 1658–1707), who elaborately praised the Mysore king for the pursuit of their mutual enemy. Lands below the Eastern Ghats around Baramahal and Salem, of less interest to the Mughals, were annexed to Mysore, as were those below the Baba Budan mountains on the western edge of the Deccan Plateau. When the Raja died on 16 November 1704, his dominions extended from Midagesi in the north to Palni Hills and Anaimalai in the south, and from Coorg in the west to Dharmapuri district in the east. (Map 5 and Map 7.) According to Sanjay Subrahmanyam, the polity that Chikka Devaraja left for his son was "at one and the same time a strong and a weak" one. Although it had uniformly expanded in size from the mid-17th century to the early 18th century, it had done so as a result of alliances that tended to hinder the very stability of the expansions. Some of the southeastern conquests (such as that of Salem), although involving regions that were not of direct interest to the Mughals, were the result of alliances with the Mughal governor of Sira and with Venkoji, the Maratha ruler of Tanjore. The siege of Tiruchirapalli had to be abandoned because the alliance had begun to rupture. (Map 7.) Similarly, in addition to allegedly receiving a signet ring and a Royal State Sword or Sword of State from Aurangzeb in 1700, Chikka Devaraja accepted an unspoken subordination to Mughal authority and a requirement to pay annual taxes. There is evidence also that the administrative reforms Chikka Devaraja had instituted might have been a direct result of Mughal influence. The early 18th century ushered in the rule of Kanthirava Narasaraja II, who, being both hearing- and speech-impaired, ruled under the regency of a series of army chiefs (Delavoys), all hailing from a single family from the village of Kalale in the Nanjangud taluk (or sub-district) of Mysore. Upon the ruler's death in 1714 at the age of 41, his son, Dodda Krishnaraja I, still two weeks shy of his 12th birthday, succeeded him. According to E. J. Rice, the ruler's lack of interest in the affairs of state led two ministers, Devaraja, the army chief (or delavayi), and his cousin, Nanjaraja, who was both the revenue minister (the sarvadhikari) and the privy councilor (pradhana), to wield all authority in the kingdom. After Dodda Krishnaraja's death in 1736, the ministers appointed "pageant rajas", and effectively ruled Mysore until the rise of Haidar Ali in 1760. ## Nayakas of Ikkeri and Kanara trade, 1565–1763 In the northwestern regions, according to Stein, > an even more impressive chiefly house arose in Vijayanagara times and came to enjoy an extensive sovereignty. These were the Keladi chiefs who later founded the Nayaka kingdom of Ikkeri. 
At the height of their power, the Ikkeri rajas controlled a territory nearly as large as the Vijayanagara heartland, some 20,000 square miles, extending about 180 miles south from Goa along the trade-rich Kanara coast. When Vasco da Gama landed in Calicut on the southwestern Malabar coast of India in 1498, the Vijayanagara empire was about to reach its apex. The Portuguese pursued their pepper trade farther south on the Malabar coast. In the decade after the fall of the empire, they decided as a commercial strategy to hedge their bets and to commence purchasing pepper from the Kanara region. During 1568–1569, they took possession of the coastal towns of Onor (now Honavar), Barcelore (now Basrur), and Mangalore and constructed fortresses and factories at each location. (Map 1 and Map 8.) Onor (modern Honnavar) was located on the banks of the Sharavathi River, where the river widened into a lake, two miles (3 km) upstream from its mouth. Built strategically on a cliff, the Portuguese fort contained homes for thirty casados (married settlers). A natural sandbank kept out the large ocean-going ships, leaving the harbour accessible only to small craft. Approximately 35 miles (56 km) farther upstream, the Portuguese maintained a weighing station at Gersoppa, where they purchased the pepper. During the latter part of the 16th century and the first half of the 17th, Onor not only became the principal port for the export of Kanara pepper, but also the most important Portuguese supply point for pepper in all of Asia. Located some 50 miles (80 km) south of Onor, and a few miles up the Coondapoor estuary (now Varahi), was the town of Barcelore (now Basrur). Building their fortress downstream of the existing Hindu town in order to control any approaches from the sea, the Portuguese provided accommodation for 30 casados within its walls; another 35 casados and their families lived in a walled compound a stone's throw away. Barcelore became a busy trading centre that exported rice, local textiles, saltpetre, and iron from the interior regions and imported corals, exotic yard goods, and horses. (Map 1 and Map 8.) Fifty miles south of Barcelore was Mangalore, the last of the Portuguese strongholds in Kanara; it was situated on the mouth of the Netravati River. There too the Portuguese built a fortress and alongside it a walled town with accommodation for 35 casados families. Both Barcelore and Mangalore became principal ports for the export of rice, and during the first half of the 17th century supplied the other strategic fortalezas of significance to the Estado da India, the Portuguese Asian empire. These included Goa, Malacca, Muscat, Mozambique, and Mombasa. (Map 1.) As a ready source of rice, pepper, and teak, the Kanara coast was important to the Estado. For much of the 16th century, the Portuguese had been able to negotiate favourable terms of trade with the weak principalities that constituted the Kanara coast. Towards the end of the century, the Nayaka ruler of Keladi (and Ikkeri), Venkatappa Nayaka (r. 1592–1629), and his successors, Virabhadra Nayaka (r. 1629–1645) and Shivappa Nayaka (r. 1645–1660), forced a revision of the previous trade treaties. By the 1630s, the Portuguese had agreed to buy pepper at market rates and the rulers of Ikkeri had been permitted two voyages per year without the purchase of a cartaz (a pass for Portuguese protection) as well as annual importation of twelve duty-free horses. 
When the last king of Vijayanagara sought refuge in his realms, Shivappa Nayaka set him up at Belur and Sakkarepatna, and later mounted an unsuccessful siege of Seringapatam on the latter's behalf. By the 1650s, he had driven the Portuguese out of the three fortalezas at Onor, Barcelore, and Mangalore. After his death in 1660, his successor Somashker Nayaka, however, sent an embassy to Goa to reestablish the Portuguese trading posts in Kanara. By 1671, a treaty had been agreed that was once again very favourable to the Portuguese. (Map 8 and Map 9.) Before the treaty could be implemented, though, Somashker Nayaka died and was succeeded by an infant grandson, Basava Nayaka, his succession disputed by the Queen Mother, who favoured another claimant, Timmaya Nayaka. The 1671 treaty languished amidst the succession struggle until 1678, when yet another treaty was negotiated with Basava Nayaka, who had emerged as the victor. As both parties in the succession struggle had been interested in purchasing European artillery from the Portuguese, the eventual treaty of 1678 was even more favourable to the latter. Under it, Basava agreed to pay 30,000 xerafins in Portuguese war-charges for the decade-long conflict with the Dutch (whom the Nayakas of Ikkeri had supported), to provide construction material for the factory at Mangalore, to provide 1,500 sacks of clean rice annually, to pay a yearly tribute for Mangalore and Barcelore, to destroy the factories of the Omani Arabs on the Kanara coast, and to allow Catholic churches to be built at a number of locations in Kanara. With the treaty in place, Portuguese power returned to Kanara after an interregnum of almost half a century. ## Subahdars of Sira, 1689–1760 A Mughal province which comprised the Carnatic region south of the Tungabhadra river, and which was to exist for seventy years, was established in 1687 with its capital at Sira (in Tumkur District). (Map 10.) The Province of Sira (also Carnatic-Balaghat) was composed of seven parganas (districts): Basavapatna, Budihal, Sira, Penukonda, Dod-Ballapur, Hoskote, and Kolar; in addition, Harpanahalli, Kondarpi, Anegundi, Bednur, Chitaldroog, and Mysore were considered by the Mughals to be tributary states of the province. Qasim Khan (also Khasim Khan or Kasim Khan) was appointed the first Subahdar (governor) and Faujdar (military governor) of the province in 1689. Having displayed "energy and success" both in controlling the province and in developing it, he died in 1694, either killed by Maratha raiders from the northwest or driven to take his own life in disgrace after these raiders seized a treasure under his care. Most Subahdars who governed after him were replaced in a year or two by a successor. The instability continued until Dilavar Khan was appointed governor in 1726, his term lasting until 1756. In 1757, Sira was overrun by the Marathas, but was restored to the Mughals in 1759. In 1761, the future ruler Haidar Ali, whose father had been the Mughal military governor (or Faujdar) of Kolar district in the province, captured Sira, and soon conferred on himself the title of "Nawab of Sira". However, the defection of his brother five years later caused the province to be lost again to the Marathas, who retained it until Haidar's son, Tipu Sultan, recaptured it for his father in 1774. The capital of the province, Sira town, prospered most under Dilavar Khan and expanded in size to accommodate 50,000 homes. (Map 10.) 
Palaces and public monuments of Sira became models for other future constructions; both Haidar Ali's palace in Bangalore and Tipu Sultan's in Seringapatam, built during the period 1761–1799 of their rule, were modelled after Dilavar Khan's palace in Sira. Likewise, according to Rice, Bangalore's Lal Bagh as well as Bangalore fort may have been designed after Sira's Khan Bagh gardens and Sira fort. Sira's civil servants, though, could not be as readily reproduced. After Tipu Sultan had succeeded his father as Sultan of Mysore in 1782, he deported 12,000 families, mainly of city officials, from Sira to Shahr Ganjam, a new capital he founded on Seringapatam island. Earlier, after the Mughal armies had overrun the Mysore table-land in 1689, twelve parganas (or sub-districts) were annexed to the newly formed province (subah) of Sira. The other regions were allowed to remain under the poligars, who continued to collect taxes from the cultivators, but were now required to pay annual tribute to the provincial government in Sira. In the annexed regions, an elaborate system of officials collected and managed revenue. Most offices had existed under the previous Bijapur Sultanate administration, and consisted of Deshmūks, Deshpāndes, Majmūndārs, and Kānungoyas. The Deshmūks "settled accounts" with the village headmen (or patels); the Deshpāndes verified the account-books of the village registrars (or kārnāms); the Kānungoyas entered the official regulations in the village record-books and also explained decrees and regulations to the village governing officers and residents. Lastly, the Majmūndārs prepared the final documents of the "settlement" (i.e. the assessment and payment of tax) and promulgated it. Until the mid-17th century, both village- and district (taluq) accounts had been prepared in the language and script of Kannada, the region's traditional language. However, after the Bijapur invasions, Maratha chieftains came to wield authority in the region and brought in various officials who introduced the Marathi language and script into the "public accounts". The new language even found its way into lands ruled by some poligar chiefs. After the province of Sira was created, Persian, the official language of the Mughal empire, came to be used. ## Rajas of Coorg, mid-16th century – 1768 Although Rājendranāme, a "royal" genealogy of the rulers of Coorg written in 1808, makes no mention of the origin of the lineage, its reading by historian Lewis Rice led him to conclude that the princely line was established by a member of the Ikkeri Nayaka family. Having moved south to the town of Haleri in northern Coorg in the disguise of a wandering Jangama monk, he began to attract followers. With their help, or acquiescence, he took possession of the town, and in such manner eventually came to rule the country. (Map 11.) According to the genealogy, a line of Coorg rajas ruled from the mid-16th century to the mid-18th century. By the late 17th century, the rajas of Coorg had created an "aggressive and independent" state. Muddu Raja, the Coorg ruler from 1633 to 1687, moved his capital to Mercara, fortifying it and building a palace there in 1681. During the rule of his successor, Dodda Virappa (1687–1736), the army of neighbouring Mysore, then being ruled by the Wodeyar, Chikka Devaraja, attacked and seized Piriyapatna. This was a territory abutting Coorg, ruled by a kinsman of Dodda Virappa (Map 11). Uplifted by the victory, the Mysore army attacked Coorg. 
It had advanced but a short distance when, camping overnight on the Palupare plain, it was surprised by a Coorg ambush. In the ensuing massacre, 15,000 Mysore soldiers were killed, the survivors beating a hasty retreat. For the next two decades, the western reaches of Mysore remained vulnerable to attacks by the Coorg army. In the border district of Yelusavira, for example, the Coorg and Mysore forces fought to a stalemate and, in the end, had to work out a tax-sharing arrangement. In 1724, major hostilities resumed between Coorg and Mysore. Changing his modus operandi of guerrilla skirmishes in the hilly Coorg jungle, Dodda Virappa took to open field warfare against the Mysore army. Catching it off guard, he took in rapid succession six fortresses from Piriyapatna to Arkalgud. The loss of revenue, some 600,000 gold pagodas, was felt in Mysore, and several months later, in August or September 1724, a large army was sent from the Mysore capital, Seringapatam, to Coorg. On the army's arrival in the western region, the Coorg forces returned to guerrilla warfare, retreating into the woods. Emboldened by the lack of opposition, the Mysore forces attacked the Coorg hills but again met no resistance. A few days into their unopposed advance, haunted by the ambush of the 1690s, the Mysore forces panicked, retreating during the night. The Coorg army went back to attacking the Mysore outposts. The back-and-forth continued until the Mysore army was recalled to Seringapatam, leaving the region vulnerable to Coorg raids. According to historian Sanjay Subrahmanyam, > The entire episode yields a rare insight into one aspect of war in the 18th century: the (Coorg) forces, lacking cavalry, with a minimum of firearms, lost every major battle, but won the war by dint of two factors. First, the terrain, and the possibility of retreating periodically into the wooded hillside, favoured them, in contrast to their relatively clumsy opponents. Second, the Mysore army could never maintain a permanent presence in the region, given the fact that the Wodeyar kingdom had several open frontiers. More than a century earlier, Lewis Rice had written: > Dodda Virappa evinced throughout his long and vigorous reign an unconquerable spirit, and though surrounded by powerful neighbours, neither the number nor the strength of his enemies seems to have relaxed his courage or damped his enterprise. He died in 1736, 78 years old. Two of his wives ascended the funeral pile with the dead body of the Raja. ## Assessment: the period and its historiography From the mid-15th century to the mid-18th century, rulers of states in southern India commenced financing wars on a different footing than had their predecessors. According to historian Burton Stein, all the rulers of the Mysore and Coorg region—the Vijayanagara emperors, the Wodeyars of Mysore, the Nayakas of Ikkeri, the Subahdars of Sira, and the Rajas of Coorg—fall to some degree under this category. A similar political system, referred to as "military fiscalism" by French historian Martin Wolfe, took hold in Europe between the 15th and 17th centuries. During this time, according to Wolfe, most regimes in Western Europe emerged from the aristocracy to become absolute monarchies; they simultaneously reduced their dependence on the aristocracy by expanding the tax base and developing an extensive tax collection structure. 
In Stein's words, > Previously resistant aristocracies were eventually won over in early modern Europe by being offered state offices and honours and by being protected in their patrimonial wealth, but this was only after monarchies had proven their ability to defeat antiquated feudal forces and had found alternative resources in cities and from trade. In southern India, none of the pre-1760 regimes were able to achieve the "fiscal absolutism" of their European contemporaries. Local chieftains, who had close ties with their social groups, and who had only recently risen from them, opposed the excessive monetary demands of a more powerful regional ruler. Consequently, the larger states of this period in southern India were not able to entirely change their mode of creating wealth from one of extracting tribute payments, which were seldom regular, to that of direct collection of taxes by government officials. Extorting tribute under threat of military action, according to Stein, is not true "military fiscalism," although it is a means of approaching it. This partial or limited military fiscalism began during the Vijayanagara Empire, setting the latter apart from the more "segmentary" regimes that had preceded it, and was a prominent feature of all regimes during the period 1565–1760; true military fiscalism was not achieved in the region until the rule of Tipu Sultan in the 1780s. Stein's formulation has been criticized by historian Sanjay Subrahmanyam on account of the lack of extensive historiography for the period. The 18th-century Wodeyar rulers of Mysore—in contrast to their contemporaries in Rajputana, Central India, Maratha Deccan, and Tanjavur—left little or no record of their administrations. Surveying the historiography, Subrahmanyam says: > A major problem attendant on such generalisations by modern historians concerning pre-1760 Mysore is, however, the paucity of documentation on this older 'Old Regime'. The first explicit History of Mysore in English is Historical Sketches of the South of India, in an Attempt to Trace the History of Mysoor by Mark Wilks. Wilks claimed to have based his history on various Kannada-language documents, many of which have not survived. According to Subrahmanyam, all subsequent histories of Mysore have borrowed heavily from Wilks's book for their pre-1760 content. These include Lewis Rice's well-known Gazetteer of 1897 and C. Hayavadana Rao's major revision of the Gazetteer half a century later, and many modern spin-offs of these two works. In Subrahmanyam's words, "Wilks's work is an important one therefore, not only for its own sake, but for its having been regurgitated and reproduced time and again with minor variations." A Wodeyar dynasty genealogy, the Chikkadevarāya Vamśāvali of Tirumalarya, was composed in Kannada during the period 1710–1715, and was claimed to be based on all the then-extant inscriptions in the region. Another genealogy, Kalale Doregala Vamśāvali, of the Delavoys, the near-hereditary chief ministers of Mysore, was composed around the turn of the 19th century. However, neither manuscript provides information about administration, economy, or military capability. The ruling dynasty's origins, especially as expounded in later palace genealogies, are also of doubtful accuracy; this is, in part, because the Wodeyars, who were reinstated by the British on the Mysore gaddi in 1799, to preside over a fragile sovereignty, "obsessively" attempted to demonstrate their "unbroken" royal lineage, to bolster their then uncertain status. 
The earliest manuscript offering clues to governance and military conflict in pre-1760 Mysore seems to be an annual letter written in Portuguese by a Mysore-based Jesuit missionary, Joachim Dias, and addressed to his Provincial superior. After the East India Company's final 1799 victory over Tipu Sultan, official Company records began to be published as well; these include a collection of Anglo-Mysore Wars-related correspondence between the Company's officials in India and the Court of Directors in London, and the first report on the new Princely State of Mysore by its resident, Mark Wilks. Around this time, French accounts of the Anglo-Mysore wars appeared as well, including a history of the wars by the French historian Joseph-François Michaud. The first attempt at including a comprehensive history of Mysore in an English-language work is an account of a survey of South India conducted at Lord Richard Wellesley's request by Francis Buchanan, a Scottish physician and geographer. By the end of the period of British Commissionership of Mysore (1831–1881), many English-language works had begun to appear on a variety of Mysore-related subjects. These included a book of English translations of Kannada-language inscriptions by Lewis Rice, and William Digby's two-volume critique of British famine policy during the Great Famine of 1876–78, which devastated Mysore for many years afterwards. ## See also - Political history of Mysore and Coorg (1761–1799) - Political history of Mysore and Coorg (1800–1947) - Company rule in India - Princely state
12,242,207
SMS Kaiser Wilhelm der Grosse
1,158,103,233
Battleship of the German Imperial Navy
[ "1899 ships", "Kaiser Friedrich III-class battleships", "Ships built in Kiel", "World War I battleships of Germany" ]
SMS Kaiser Wilhelm der Grosse ("His Majesty's Ship Emperor William the Great") was a German pre-dreadnought battleship of the Kaiser Friedrich III class, built around the turn of the 20th century. The ship was one of the first battleships built by the German Imperial Navy (Kaiserliche Marine) as part of a program of naval expansion under Kaiser Wilhelm II. Kaiser Wilhelm der Grosse was built in Kiel at the Germaniawerft shipyard. She was laid down in January 1898, launched in June 1899, and completed in May 1901. The ship was armed with a main battery of four 24-centimeter (9.4 in) guns in two twin turrets. Kaiser Wilhelm der Grosse served in the main fleet—the Heimatflotte (Home Fleet) and later the Hochseeflotte (High Seas Fleet)—for the first seven years of her career. She participated in several of the fleet's training cruises and maneuvers, primarily in the North and Baltic Seas. Her peacetime career was relatively uneventful and she suffered no accidents. She was decommissioned for a major reconstruction in 1908–10, after which she was assigned to the Reserve Division with her four sister ships, all of which were essentially obsolete by that time. At the outbreak of World War I in 1914, the battleship and her sisters were placed back in active service as V Battle Squadron of the High Seas Fleet and deployed for coastal defense in the North Sea. They were also deployed briefly to the Baltic but saw no action. In 1915, the ships were again withdrawn from service and relegated to secondary duties. Kaiser Wilhelm der Grosse was used as a depot ship in Kiel and eventually as a torpedo target ship. After the war, the Treaty of Versailles greatly reduced the size of the German Navy. The vessel was sold for scrap to a German company and broken up in 1920. ## Design After the German Kaiserliche Marine (Imperial Navy) ordered the four Brandenburg-class battleships in 1889, a combination of budgetary constraints, opposition in the Reichstag (Imperial Diet), and a lack of a coherent fleet plan delayed the acquisition of further battleships. The former Secretary of the Reichsmarineamt (Imperial Navy Office), Leo von Caprivi, became the Chancellor of Germany in 1890, and Vizeadmiral (Vice Admiral) Friedrich von Hollmann became the new Secretary of the Reichsmarineamt. Hollmann requested the first Kaiser Friedrich III-class pre-dreadnought battleship in 1892, but the Franco-Russian Alliance, signed the year before, turned the government's attention to expanding the Army's budget. Parliamentary opposition forced Hollmann to delay until the following year, when Caprivi spoke in favor of the project, noting that Russia's recent naval expansion threatened Germany's Baltic Sea coastline. In late 1893, Hollmann presented the Navy's estimates for the 1894–1895 budget year, and this time the Reichstag approved the new ship; a second member of the class followed in early 1896, and the third ship, Kaiser Wilhelm der Grosse, was authorized for the following year's budget. Kaiser Wilhelm der Grosse was 125.3 m (411 ft 1 in) long overall and had a beam of 20.4 m (66 ft 11 in) and a draft of 7.89 m (25 ft 11 in) forward and 8.25 m (27 ft 1 in) aft. She displaced 11,097 t (10,922 long tons) as designed and up to 11,785 t (11,599 long tons) at full load. The ship was powered by three 3-cylinder vertical triple-expansion steam engines that drove three screw propellers. Steam was provided by four Marine-type and eight cylindrical boilers, all of which burned coal and were vented through a pair of tall funnels. 
Kaiser Wilhelm der Grosse's powerplant was rated at 13,000 metric horsepower (12,820 ihp; 9,560 kW), which generated a top speed of 17.5 knots (32.4 km/h; 20.1 mph). She had a cruising radius of 3,420 nautical miles (6,330 km; 3,940 mi) at a speed of 10 knots (19 km/h; 12 mph). She had a normal crew of 39 officers and 612 enlisted men. The ship's armament consisted of a main battery of four 24 cm (9.4 in) SK L/40 guns in twin turrets, one fore and one aft of the central superstructure on the centerline. Her secondary armament consisted of eighteen 15 cm (5.9 inch) SK L/40 guns carried in a mix of turrets and casemates. Close-range defense against torpedo boats was provided by a battery of twelve 8.8 cm (3.5 in) SK L/30 quick-firing guns all mounted in casemates. She also carried twelve 3.7 cm (1.5 in) machine cannon. Six 45 cm (17.7 in) torpedo tubes were mounted in above-water swivel mounts. The ship's belt armor was 300 mm (12 in) thick, and the main armor deck was 65 mm (2.6 in) thick. The conning tower and main battery turrets were protected with 250 mm (9.8 in) of armor, and the secondary casemates received 150 mm (5.9 in) of protection. ## Service history ### Construction and early service Kaiser Wilhelm II, the emperor of Germany, believed a strong navy was necessary for the country to expand its influence outside continental Europe. He initiated a program of naval expansion in the late 1880s; the first battleships built under this program were the four Brandenburg-class ships. These were immediately followed by the five Kaiser Friedrich III-class battleships, of which Kaiser Wilhelm der Grosse was the third. Her keel was laid on 22 January 1898 at the Germaniawerft shipyard in Kiel, as construction number 22. She was ordered under the contract name Ersatz König Wilhelm, to replace the obsolete armored frigate König Wilhelm. Her scheduled launching on 29 April 1899 was delayed to 1 June after a large fire at the shipyard damaged the slipway. Louise, the Grand Duchess of Baden, christened the ship after her father Wilhelm I of Germany, the ship's namesake. Wilhelm II gave the launching speech for the ship commemorating his grandfather. After completing fitting-out work, dockyard sea trials began on 19 February 1901, followed by acceptance trials beginning 18 March. These were completed by May, and she was formally commissioned on 5 May. That year, Erich Raeder—who went on to command the Kriegsmarine in World War II—was promoted to serve as a watch officer aboard her. After commissioning in 1901, Kaiser Wilhelm der Grosse joined her sister ships in I Squadron of the Heimatflotte (Home Fleet). After her sister Kaiser Friedrich III ran aground and had to be docked for repairs, Kaiser Wilhelm der Grosse replaced her as the I Squadron flagship, which was commanded by Prince Heinrich, the brother of Wilhelm II. She held this post until 24 October, when Kaiser Friedrich III returned to service. In the meantime, Kaiser Wilhelm der Grosse was present for the Kiel Week sailing regatta in June and the dedication of a monument at the Marineakademie (Naval Academy) in Kiel. At the end of July, she led the squadron on a cruise to Spanish waters, and while docked in Cádiz, they rendezvoused with the Brandenburg-class battleships returning from East Asian waters. I Squadron was back in Kiel by 11 August, though the late arrival of the Brandenburgs delayed the participation of I Squadron in the annual autumn fleet training. 
The maneuvers began with exercises in the German Bight, followed by a mock attack on the fortifications in the lower Elbe. Gunnery drills took place in Kiel Bay before the fleet steamed to Danzig Bay, where the maneuvers concluded on 15 September. Kaiser Wilhelm der Grosse and the rest of I Squadron went on their normal winter cruise to Norway in December, which included a stop at Oslo from 7 to 12 December. On 13 December, the new pre-dreadnought battleship Wittelsbach ran aground off Korsør; Kaiser Wilhelm der Grosse took her under tow back to port. I Squadron went on a short cruise in the western Baltic Sea, then embarked on a major cruise around the British Isles, which lasted from 25 April to 28 May. Individual and squadron maneuvers took place from June to August, interrupted only by a cruise to Norway in July. The annual fleet maneuvers began in August in the Baltic and concluded in the North Sea with a fleet review in the Jade Bight. During the exercises, Kaiser Wilhelm der Grosse was assigned to the "hostile" force, as were several of her sister ships. The "hostile" force was first tasked with preventing the "German" squadron from passing through the Great Belt into the Baltic. Kaiser Wilhelm der Grosse and several other battleships were then tasked with forcing an entry into the mouth of the Elbe River, where the Kaiser Wilhelm Canal and Hamburg could be seized. The "hostile" flotilla accomplished these tasks within three days. The regular winter cruise followed during 1–12 December. In 1903, the fleet, which was composed of only one squadron of battleships, was reorganized as the "Active Battle Fleet". Kaiser Wilhelm der Grosse remained in I Squadron along with her sister ships and the newest Wittelsbach-class battleships, while the older Brandenburg-class ships were placed in reserve to be rebuilt. The first quarter of 1903 followed the usual pattern of training exercises. The squadron went on a training cruise in the Baltic, followed by a voyage to Spain from 7 May to 10 June. In July, I Squadron went on its annual cruise to Norway. The autumn maneuvers consisted of a blockade exercise in the North Sea, a cruise of the entire fleet first to Norwegian waters and then to Kiel in early September, and finally a mock attack on Kiel. The exercises concluded on 12 September. The winter cruise began on 23 November in the eastern Baltic and continued into the Skagerrak on 1 December. ### 1904–1914 I Squadron held its first exercise of 1904 in the Skagerrak from 11 to 21 January. Further squadron exercises followed from 8 to 17 March. A major fleet exercise took place in the North Sea in May, and in July I Squadron and I Scouting Group visited Britain, stopping at Plymouth on 10 July. The German fleet departed on 13 July, bound for the Netherlands; I Squadron anchored in Vlissingen the following day. There, the ships were visited by Queen Wilhelmina. I Squadron remained in Vlissingen until 20 July, when it departed for a cruise in the northern North Sea with the rest of the fleet. The squadron stopped in Molde, Norway, on 29 July, while the other units went to other ports. The fleet reassembled on 6 August and steamed back to Kiel, where it conducted a mock attack on the harbor on 12 August. The fleet then began preparations for the autumn maneuvers, which began on 29 August in the Baltic. 
The fleet moved to the North Sea on 3 September, where it took part in a major amphibious landing exercise, after which the ships took the ground troops from IX Corps that participated in the exercise to Altona for a parade reviewed by Wilhelm II. The ships then conducted their own parade for the Kaiser off the island of Helgoland on 6 September. Three days later, the fleet returned to the Baltic via the Kaiser Wilhelm Canal, where it participated in further landing exercises with IX Corps and the Guards Corps. On 15 September, the maneuvers came to an end. I Squadron went on its winter training cruise, this time to the eastern Baltic, from 22 November to 2 December. The ships of I Squadron went on a pair of training cruises during 9–19 January and 27 February – 16 March 1905. Individual ship and squadron training followed, with an emphasis on gunnery drills. On 12 July, the fleet began a major training exercise in the North Sea. The fleet then cruised through the Kattegat and stopped in Copenhagen and Stockholm. The summer cruise ended on 9 August; the autumn maneuvers that would normally have begun shortly thereafter were delayed by a visit from the British Channel Fleet that month. The British fleet stopped in Danzig, Swinemünde, and Flensburg, where it was greeted by units of the German Navy; Kaiser Wilhelm der Grosse and the main German fleet were anchored at Swinemünde for the occasion. The visit's impact was lessened by the ongoing Anglo-German naval arms race. As a result of the British visit, the 1905 autumn maneuvers were shortened considerably, from 6 to 13 September, and consisted of only exercises in the North Sea. The first exercise presumed a naval blockade in the German Bight, and the second envisioned a hostile fleet attempting to force the defenses of the Elbe. During the exercises, Kaiser Wilhelm der Grosse won the Kaiser's Schiesspreis (Shooting Prize) for excellent gunnery in I Squadron. In October, the ship was reassigned to I Division of II Squadron. In early December, I and II Squadrons went on their regular winter cruise, this time to Danzig, where they arrived on 12 December. On the return trip to Kiel, the fleet conducted tactical exercises. Over the winter of 1906–1907, Kaiser Wilhelm der Grosse underwent a major overhaul in Kiel, which was completed by the end of April. By this time, the newest Deutschland-class battleships were coming into service; along with the Braunschweig-class battleships, these provided enough modern battleships to create two full battle squadrons. As a result, the Heimatflotte was renamed the Hochseeflotte (High Seas Fleet). Starting on 13 May, major fleet exercises took place in the North Sea and lasted until 8 June with a cruise around the Skagen into the Baltic. The fleet began its usual summer cruise to Norway in mid-July. The fleet was present for the birthday of Norwegian King Haakon VII on 3 August. The German ships departed the following day for Helgoland, to join exercises being conducted there. The fleet was back in Kiel by 15 August, where preparations for the autumn maneuvers began. On 22–24 August, the fleet took part in landing exercises in Eckernförde Bay outside Kiel. The maneuvers were paused from 31 August to 3 September when the fleet hosted vessels from Denmark and Sweden, along with a Russian squadron from 3 to 9 September in Kiel. The maneuvers resumed on 8 September and lasted five more days. A shorter period of dockyard work took place from 7 December to 27 January 1908. 
She returned to the fleet for the normal peacetime routine of training exercises, and after the conclusion of the autumn maneuvers, Kaiser Wilhelm der Grosse was decommissioned in Kiel on 21 September. She was taken into the Kaiserliche Werft shipyard in Kiel for an extensive modernization that lasted until 1910. During the refit, four of the ship's 15 cm guns and the stern-mounted torpedo tube were removed. Two 8.8 cm guns were added and the arrangement of the 8.8 cm battery was modified. Her superstructure was also cut down to reduce the ship's tendency to roll excessively, and the ship's funnels were lengthened. After reconstruction, the ship was assigned to the Reserve Division in the Baltic, along with her sister ships. She was reactivated on 31 July 1911 and assigned to III Squadron during the annual fleet exercises, then returned on 15 September to the Reserve Division. She remained there for the rest of her peacetime career. ### World War I As a result of the outbreak of World War I, Kaiser Wilhelm der Grosse and her sisters were brought out of reserve and mobilized as V Battle Squadron on 5 August 1914. The ships were prepared for war very slowly, and were not ready for service in the North Sea until the end of August. They were initially tasked with coastal defense, but they served in this capacity for only a very short time. In mid-September, V Squadron was transferred to the Baltic, under the command of Prince Heinrich. He initially planned to launch a major amphibious assault against the Russians at Windau, but a shortage of transports forced a revision of the plan. Instead, V Squadron was to carry the landing force, but this too was cancelled after Heinrich received false reports of British warships having entered the Baltic on 25 September. Kaiser Wilhelm der Grosse and her sisters returned to Kiel the following day, disembarked the landing force, and then proceeded to the North Sea, where they resumed guard ship duties. Before the end of the year, V Squadron was once again transferred to the Baltic. Prince Heinrich ordered a foray toward Gotland. On 26 December 1914, the battleships rendezvoused with the Baltic cruiser division in the Bay of Pomerania and then departed on the sortie. Two days later, the fleet arrived off Gotland to show the German flag, and was back in Kiel by 30 December. The squadron returned to the North Sea for guard duties, but was withdrawn from front-line service in February 1915. Shortages of trained crews in the High Seas Fleet, coupled with the risk of operating older ships in wartime, necessitated the deactivation of Kaiser Wilhelm der Grosse and her sisters. Kaiser Wilhelm der Grosse first went to Hamburg, where her crew was reduced on 5 March. She was moved to Kiel on 30 April, where the rest of her crew were removed. She was disarmed and thereafter used as a depot ship. The following year, the ship was used as a torpedo target ship. The Armistice at Compiègne ended the fighting in November 1918; according to Article 181 of the Treaty of Versailles (which formally ended the war) Germany was permitted to retain only six battleships of the "Deutschland or Lothringen types". On 6 December 1919, the vessel was struck from the naval list and sold to a shipbreaking firm based in Berlin. The following year, Kaiser Wilhelm der Grosse was broken up for scrap metal in Kiel-Nordmole.
4,614
Boeing 747
1,172,367,032
American wide-body long-range commercial jet aircraft
[ "1960s United States airliners", "Aircraft first flown in 1969", "Boeing 747", "Double-deck aircraft", "Quadjets" ]
The Boeing 747 is a large, long-range wide-body airliner designed and manufactured by Boeing Commercial Airplanes in the United States between 1968 and 2023. After introducing the 707 in October 1958, Pan Am wanted a jet 2+1⁄2 times its size, to reduce its seat cost by 30%. In 1965, Joe Sutter left the 737 development program to design the 747, the first twin-aisle airliner. In April 1966, Pan Am ordered 25 Boeing 747-100 aircraft, and in late 1966, Pratt & Whitney agreed to develop the JT9D engine, a high-bypass turbofan. On September 30, 1968, the first 747 was rolled out of the custom-built Everett Plant, the world's largest building by volume. The first flight took place on February 9, 1969, and the 747 was certified in December of that year. It entered service with Pan Am on January 22, 1970. The 747 was the first airplane called a "Jumbo Jet" as the first wide-body airliner. The 747 is a four-engined jet aircraft, initially powered by Pratt & Whitney JT9D turbofan engines, then General Electric CF6 and Rolls-Royce RB211 engines for the original variants. With a ten-abreast economy seating, it typically accommodates 366 passengers in three travel classes. It has a pronounced 37.5° wing sweep, allowing a Mach 0.85 (490 kn; 900 km/h) cruise speed, and its heavy weight is supported by four main landing gear legs, each with a four-wheel bogie. The partial double-deck aircraft was designed with a raised cockpit so it could be converted to a freighter airplane by installing a front cargo door, as it was initially thought that it would eventually be superseded by supersonic transports. Boeing introduced the -200 in 1971, with more powerful engines for a heavier maximum takeoff weight (MTOW) of 833,000 pounds (378 t) from the initial 735,000 pounds (333 t), increasing the maximum range from 4,620 to 6,560 nautical miles [nmi] (8,560 to 12,150 km; 5,320 to 7,550 mi). It was shortened for the longer-range 747SP in 1976, and the 747-300 followed in 1983 with a stretched upper deck for up to 400 seats in three classes. The heavier 747-400 with improved RB211 and CF6 engines or the new PW4000 engine (the JT9D successor), and a two-crew glass cockpit, was introduced in 1989 and is the most common variant. After several studies, the stretched 747-8 was launched on November 14, 2005, with new General Electric GEnx engines, and was first delivered in October 2011. The 747 is the basis for several government and military variants, such as the VC-25 (Air Force One), E-4 Emergency Airborne Command Post, Shuttle Carrier Aircraft, and some experimental testbeds such as the YAL-1 and SOFIA airborne observatory. Initial competition came from the smaller trijet widebodies: the Lockheed L-1011 (introduced in 1972), McDonnell Douglas DC-10 (1971) and later MD-11 (1990). Airbus competed with later variants with the heaviest versions of the A340 until surpassing the 747 in size with the A380, delivered between 2007 and 2021. Freighter variants of the 747 remain popular with cargo airlines. The final 747 was delivered to Atlas Air in January 2023 after a 54-year production run, with 1,574 aircraft built. As of January 2023, 64 Boeing 747s (4.1%) have been lost in accidents and incidents, in which a total of 3,746 people have died. ## Development ### Background In 1963, the United States Air Force started a series of study projects on a very large strategic transport aircraft. 
Although the C-141 Starlifter was being introduced, officials believed that a much larger and more capable aircraft was needed, especially to carry cargo that would not fit in any existing aircraft. These studies led to initial requirements for the CX-Heavy Logistics System (CX-HLS) in March 1964 for an aircraft with a load capacity of 180,000 pounds (81.6 t) and a speed of Mach 0.75 (430 kn; 800 km/h), and an unrefueled range of 5,000 nautical miles (9,300 km; 5,800 mi) with a payload of 115,000 pounds (52.2 t). The payload bay had to be 17 feet (5.18 m) wide by 13.5 feet (4.11 m) high and 100 feet (30 m) long with access through doors at the front and rear. The desire to keep the number of engines to four required new engine designs with greatly increased power and better fuel economy. In May 1964, airframe proposals arrived from Boeing, Douglas, General Dynamics, Lockheed, and Martin Marietta; engine proposals were submitted by General Electric, Curtiss-Wright, and Pratt & Whitney. Boeing, Douglas, and Lockheed were given additional study contracts for the airframe, along with General Electric and Pratt & Whitney for the engines. The airframe proposals shared several features. As the CX-HLS needed to be loaded from the front, a door had to be included where the cockpit usually was. All of the companies solved this problem by moving the cockpit above the cargo area; Douglas had a small "pod" just forward and above the wing, Lockheed used a long "spine" running the length of the aircraft with the wing spar passing through it, while Boeing blended the two, with a longer pod that ran from just behind the nose to just behind the wing. In 1965, Lockheed's aircraft design and General Electric's engine design were selected for the new C-5 Galaxy transport, which was the largest military aircraft in the world at the time. Boeing carried the nose door and raised cockpit concepts over to the design of the 747. ### Airliner proposal The 747 was conceived while air travel was increasing in the 1960s. The era of commercial jet transportation, led by the enormous popularity of the Boeing 707 and Douglas DC-8, had revolutionized long-distance travel. In this growing jet age, Juan Trippe, president of Pan Am, one of Boeing's most important airline customers, asked for a new jet airliner 2+1⁄2 times the size of the 707, with a 30% lower cost per unit of passenger-distance and the capability to offer mass air travel on international routes. Trippe also thought that airport congestion could be addressed by a larger new aircraft. In 1965, Joe Sutter was transferred from Boeing's 737 development team to manage the design studies for the new airliner, already assigned the model number 747. Sutter began a design study with Pan Am and other airlines to better understand their requirements. At the time, many thought that long-range subsonic airliners would eventually be superseded by supersonic transport aircraft. Boeing responded by designing the 747 so it could be adapted easily to carry freight and remain in production even if sales of the passenger version declined. In April 1966, Pan Am ordered 25 Boeing 747-100 aircraft for US\$525 million. During the ceremonial 747 contract-signing banquet in Seattle on Boeing's 50th Anniversary, Juan Trippe predicted that the 747 would be "...a great weapon for peace, competing with intercontinental missiles for mankind's destiny". 
As launch customer, and because of its early involvement before placing a formal order, Pan Am was able to influence the design and development of the 747 to an extent unmatched by a single airline before or since. ### Design effort Ultimately, the high-winged CX-HLS Boeing design was not used for the 747, although technologies developed for their bid had an influence. The original design included a full-length double-deck fuselage with eight-across seating and two aisles on the lower deck and seven-across seating and two aisles on the upper deck. However, concern over evacuation routes and limited cargo-carrying capability caused this idea to be scrapped in early 1966 in favor of a wider single deck design. The cockpit was, therefore, placed on a shortened upper deck so that a freight-loading door could be included in the nose cone; this design feature produced the 747's distinctive "hump". In early models, what to do with the small space in the pod behind the cockpit was not clear, and this was initially specified as a "lounge" area with no permanent seating. (A different configuration that had been considered to keep the flight deck out of the way for freight loading had the pilots below the passengers, and was dubbed the "anteater".) One of the principal technologies that enabled an aircraft as large as the 747 to be drawn up was the high-bypass turbofan engine. This engine technology was thought to be capable of delivering double the power of the earlier turbojets while consuming one-third less fuel. General Electric had pioneered the concept but was committed to developing the engine for the C-5 Galaxy and did not enter the commercial market until later. Pratt & Whitney was also working on the same principle and, by late 1966, Boeing, Pan Am and Pratt & Whitney agreed to develop a new engine, designated the JT9D to power the 747. The project was designed with a new methodology called fault tree analysis, which allowed the effects of a failure of a single part to be studied to determine its impact on other systems. To address concerns about safety and flyability, the 747's design included structural redundancy, redundant hydraulic systems, quadruple main landing gear and dual control surfaces. Additionally, some of the most advanced high-lift devices used in the industry were included in the new design, to allow it to operate from existing airports. These included Krueger flaps running almost the entire length of the wing's leading edge, as well as complex three-part slotted flaps along the trailing edge of the wing. The wing's complex three-part flaps increase wing area by 21% and lift by 90% when fully deployed compared to their non-deployed configuration. Boeing agreed to deliver the first 747 to Pan Am by the end of 1969. The delivery date left 28 months to design the aircraft, which was two-thirds of the normal time. The schedule was so fast-paced that the people who worked on it were given the nickname "The Incredibles". Developing the aircraft was such a technical and financial challenge that management was said to have "bet the company" when it started the project. Due to its massive size, Boeing subcontracted the assembly of subcomponents to other manufacturers, most notably Northrop and Grumman (later merged into Northrop Grumman in 1994) for fuselage parts and trailing edge flaps respectively, Fairchild for tailplane ailerons, and Ling-Temco-Vought (LTV) for the empennage. 
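The fault tree analysis methodology mentioned above works by composing the probabilities of basic component failures through AND and OR gates to estimate the likelihood of a top-level failure. The following minimal Python sketch illustrates only that arithmetic; the events, gate structure, and probabilities are invented for demonstration and are not taken from Boeing's actual 747 analysis.

```python
# Minimal illustration of fault tree analysis (FTA), the methodology named above.
# This is a generic sketch, not Boeing's 747 analysis: the events, gate structure,
# and probabilities below are invented purely for demonstration.

def or_gate(*probs):
    """Probability that at least one of several independent events occurs."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*probs):
    """Probability that all of several independent events occur."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

# Hypothetical basic-event probabilities per flight hour.
pump_a_fails = 1e-4
pump_b_fails = 1e-4
line_rupture = 1e-6

# Redundancy appears as an AND gate: losing hydraulic pressure requires
# both pumps to fail, or a single common-cause line rupture.
loss_of_pressure = or_gate(and_gate(pump_a_fails, pump_b_fails), line_rupture)
print(f"P(loss of hydraulic pressure) ≈ {loss_of_pressure:.2e}")  # ≈ 1.01e-06
```

Placing redundant components behind an AND gate is what drives the estimated probability of the top event far below that of any single component failure, which is the property a design with redundant hydraulic systems, structure, and control surfaces relies on.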
### Production plant As Boeing did not have a plant large enough to assemble the giant airliner, it chose to build a new one. The company considered locations in about 50 cities, and eventually decided to build the new plant some 30 miles (50 km) north of Seattle on a site adjoining a military base at Paine Field near Everett, Washington. It bought the 780-acre (320 ha) site in June 1966. Developing the 747 had been a major challenge, and building its assembly plant was also a huge undertaking. Boeing president William M. Allen asked Malcolm T. Stamper, then head of the company's turbine division, to oversee construction of the Everett factory and to start production of the 747. To level the site, more than four million cubic yards (three million cubic meters) of earth had to be moved. Time was so short that the 747's full-scale mock-up was built before the factory roof above it was finished. The plant is the largest building by volume ever built, and has been substantially expanded several times to permit construction of other models of Boeing wide-body commercial jets. ### Flight testing Before the first 747 was fully assembled, testing began on many components and systems. One important test involved the evacuation of 560 volunteers from a cabin mock-up via the aircraft's emergency chutes. The first full-scale evacuation took two and a half minutes instead of the maximum of 90 seconds mandated by the Federal Aviation Administration (FAA), and several volunteers were injured. Subsequent test evacuations achieved the 90-second goal but caused more injuries. Most problematic was evacuation from the aircraft's upper deck; instead of using a conventional slide, volunteer passengers escaped by using a harness attached to a reel. Tests also involved taxiing such a large aircraft. Boeing built an unusual training device known as "Waddell's Wagon" (named for a 747 test pilot, Jack Waddell) that consisted of a mock-up cockpit mounted on the roof of a truck. While the first 747s were still being built, the device allowed pilots to practice taxi maneuvers from a high upper-deck position. In 1968, the program cost was US\$1 billion. On September 30, 1968, the first 747 was rolled out of the Everett assembly building before the world's press and representatives of the 26 airlines that had ordered the airliner. Over the following months, preparations were made for the first flight, which took place on February 9, 1969, with test pilots Jack Waddell and Brien Wygle at the controls and Jess Wallick at the flight engineer's station. Despite a minor problem with one of the flaps, the flight confirmed that the 747 handled extremely well. The 747 was found to be largely immune to "Dutch roll", a phenomenon that had been a major hazard to the early swept-wing jets. ### Issues, delays and certification During later stages of the flight test program, flutter testing showed that the wings suffered oscillation under certain conditions. This difficulty was partly solved by reducing the stiffness of some wing components. However, a particularly severe high-speed flutter problem was solved only by inserting depleted uranium counterweights as ballast in the outboard engine nacelles of the early 747s. This measure caused anxiety when these early aircraft crashed, as in the case of El Al Flight 1862 at Amsterdam in 1992, which had 622 pounds (282 kg) of uranium in its tailplane (horizontal stabilizer). The flight test program was hampered by problems with the 747's JT9D engines. 
Difficulties included engine stalls caused by rapid throttle movements and distortion of the turbine casings after a short period of service. The problems delayed 747 deliveries for several months; up to 20 aircraft at the Everett plant were stranded while awaiting engine installation. The program was further delayed when one of the five test aircraft suffered serious damage during a landing attempt at Renton Municipal Airport, the site of Boeing's Renton factory. The incident happened on December 13, 1969, when a test aircraft was flown to Renton to have test equipment removed and a cabin installed. Pilot Ralph C. Cokely undershot the airport's short runway and the 747's right, outer landing gear was torn off and two engine nacelles were damaged. However, these difficulties did not prevent Boeing from taking a test aircraft to the 28th Paris Air Show in mid-1969, where it was displayed to the public for the first time. Finally, in December 1969, the 747 received its FAA airworthiness certificate, clearing it for introduction into service. The huge cost of developing the 747 and building the Everett factory meant that Boeing had to borrow heavily from a banking syndicate. During the final months before delivery of the first aircraft, the company had to repeatedly request additional funding to complete the project. Had this been refused, Boeing's survival would have been threatened. The firm's debt exceeded \$2 billion, with the \$1.2 billion owed to the banks setting a record for all companies. Allen later said, "It was really too large a project for us." Ultimately, the gamble succeeded, and Boeing held a monopoly in very large passenger aircraft production for many years. ### Entry into service On January 15, 1970, First Lady of the United States Pat Nixon christened Pan Am's first 747 at Dulles International Airport (later Washington Dulles International Airport) in the presence of Pan Am chairman Najeeb Halaby. Instead of champagne, red, white, and blue water was sprayed on the aircraft. The 747 entered service on January 22, 1970, on Pan Am's New York–London route; the flight had been planned for the evening of January 21, but engine overheating made the original aircraft unusable. Finding a substitute delayed the flight by more than six hours to the following day when Clipper Victor was used. The 747 enjoyed a fairly smooth introduction into service, overcoming concerns that some airports would not be able to accommodate an aircraft that large. Although technical problems occurred, they were relatively minor and quickly solved. After the aircraft's introduction with Pan Am, other airlines that had bought the 747 to stay competitive began to put their own 747s into service. Boeing estimated that half of the early 747 sales were to airlines desiring the aircraft's long range rather than its payload capacity. While the 747 had the lowest potential operating cost per seat, this could only be achieved when the aircraft was fully loaded; costs per seat increased rapidly as occupancy declined. A moderately loaded 747, one with only 70 percent of its seats occupied, used more than 95 percent of the fuel needed by a fully occupied 747. Nonetheless, many flag-carriers purchased the 747 due to its prestige "even if it made no sense economically" to operate. During the 1970s and 1980s, over 30 regularly scheduled 747s could often be seen at John F. Kennedy International Airport. The recession of 1969–1970, despite having been characterized as relatively mild, greatly affected Boeing. 
For the year and a half after September 1970, it only sold two 747s in the world, both to Irish flag carrier Aer Lingus. No 747s were sold to any American carrier for almost three years. When economic problems in the US and other countries after the 1973 oil crisis led to reduced passenger traffic, several airlines found they did not have enough passengers to fly the 747 economically, and they replaced them with the smaller and recently introduced McDonnell Douglas DC-10 and Lockheed L-1011 TriStar trijet wide bodies (and later the 767 and A300/A310 twinjets). Having tried replacing coach seats on its 747s with piano bars in an attempt to attract more customers, American Airlines eventually relegated its 747s to cargo service and in 1983 exchanged them with Pan Am for smaller aircraft; Delta Air Lines also removed its 747s from service after several years. Later, Delta acquired 747s again in 2008 as part of its merger with Northwest Airlines, although it retired the Boeing 747-400 fleet in December 2017. International flights bypassing traditional hub airports and landing at smaller cities became more common throughout the 1980s, thus eroding the 747's original market. Many international carriers continued to use the 747 on Pacific routes. In Japan, 747s on domestic routes were configured to carry nearly the maximum passenger capacity. ### Improved 747 versions After the initial 747-100, Boeing developed the -100B, a higher maximum takeoff weight (MTOW) variant, and the -100SR (Short Range), with higher passenger capacity. Increased maximum takeoff weight allows aircraft to carry more fuel and have longer range. The -200 model followed in 1971, featuring more powerful engines and a higher MTOW. Passenger, freighter and combination passenger-freighter versions of the -200 were produced. The shortened 747SP (special performance) with a longer range was also developed, and entered service in 1976. The 747 line was further developed with the launch of the 747-300 on June 11, 1980, followed by interest from Swissair a month later and the go-ahead for the project. The 300 series resulted from Boeing studies to increase the seating capacity of the 747, during which modifications such as fuselage plugs and extending the upper deck over the entire length of the fuselage were rejected. The first 747-300, completed in 1983, included a stretched upper deck, increased cruise speed, and increased seating capacity. The -300 variant was previously designated 747SUD for stretched upper deck, then 747-200 SUD, followed by 747EUD, before the 747-300 designation was used. Passenger, short range and combination freighter-passenger versions of the 300 series were produced. In 1985, development of the longer range 747-400 began. The variant had a new glass cockpit, which allowed for a cockpit crew of two instead of three, new engines, lighter construction materials, and a redesigned interior. Development costs soared, and production delays occurred as new technologies were incorporated at the request of airlines. Insufficient workforce experience and reliance on overtime contributed to early production problems on the 747-400. The -400 entered service in 1989. In 1991, a record-breaking 1,087 passengers were flown in a 747 during a covert operation to airlift Ethiopian Jews to Israel. Generally, the 747-400 held between 416 and 524 passengers. 
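The relationship noted earlier in this section between a higher maximum takeoff weight, extra fuel, and longer range can be illustrated with the standard Breguet range equation for jet aircraft. The Python sketch below uses round, illustrative numbers; the cruise speed, fuel consumption, lift-to-drag ratio, and weights are assumptions for demonstration, not certified 747 performance data.

```python
import math

# Breguet range equation for jet aircraft: R = (V / TSFC) * (L/D) * ln(W_start / W_end).
# All values below are illustrative assumptions, not certified 747 performance data.

def breguet_range_nmi(speed_kt, tsfc_per_hr, lift_to_drag, w_start_lb, w_end_lb):
    """Still-air cruise range in nautical miles."""
    return (speed_kt / tsfc_per_hr) * lift_to_drag * math.log(w_start_lb / w_end_lb)

V = 490.0        # cruise speed, knots (roughly Mach 0.85 at altitude)
TSFC = 0.65      # thrust-specific fuel consumption, lb fuel per lbf of thrust per hour
L_OVER_D = 16.0  # assumed cruise lift-to-drag ratio

zero_fuel_weight = 500_000  # airframe plus payload, pounds (assumed)
for fuel_lb in (235_000, 300_000):
    r = breguet_range_nmi(V, TSFC, L_OVER_D, zero_fuel_weight + fuel_lb, zero_fuel_weight)
    print(f"fuel {fuel_lb:,} lb -> range ≈ {r:,.0f} nmi")  # more fuel, logarithmically more range
```

Under these assumed values, the heavier fuel load adds roughly a thousand nautical miles of still-air range, which is the effect the successive MTOW increases of the 747 variants were intended to buy.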
The 747 remained the heaviest commercial aircraft in regular service until the debut of the Antonov An-124 Ruslan in 1982; variants of the 747-400 surpassed the An-124's weight in 2000. The Antonov An-225 Mriya cargo transport, which debuted in 1988, remains the world's largest aircraft by several measures (including the most accepted measures of maximum takeoff weight and length); one aircraft has been completed and was in service until 2022. The Scaled Composites Stratolaunch is currently the largest aircraft by wingspan. ### Further developments After the arrival of the 747-400, several stretching schemes for the 747 were proposed. Boeing announced the larger 747-500X and -600X preliminary designs in 1996. The new variants would have cost more than US\$5 billion to develop, and interest was not sufficient to launch the program. In 2000, Boeing offered the more modest 747X and 747X stretch derivatives as alternatives to the Airbus A3XX. However, the 747X family was unable to attract enough interest to enter production. A year later, Boeing switched from the 747X studies to pursue the Sonic Cruiser, and after the Sonic Cruiser program was put on hold, the 787 Dreamliner. Some of the ideas developed for the 747X were used on the 747-400ER, a longer range variant of the 747-400. After several variants were proposed but later abandoned, some industry observers became skeptical of new aircraft proposals from Boeing. However, in early 2004, Boeing announced tentative plans for the 747 Advanced that were eventually adopted. Similar in nature to the 747-X, the stretched 747 Advanced used technology from the 787 to modernize the design and its systems. The 747 remained the largest passenger airliner in service until the Airbus A380 began airline service in 2007. On November 14, 2005, Boeing announced it was launching the 747 Advanced as the Boeing 747-8. The last 747-400s were completed in 2009. As of 2011, most orders of the 747-8 were for the freighter variant. On February 8, 2010, the 747-8 Freighter made its maiden flight. The first delivery of the 747-8 went to Cargolux in 2011. The first 747-8 Intercontinental passenger variant was delivered to Lufthansa on May 5, 2012. The 1,500th Boeing 747 was delivered in June 2014 to Lufthansa. In January 2016, Boeing stated it was reducing 747-8 production to six a year beginning in September 2016, incurring a \$569 million post-tax charge against its fourth-quarter 2015 profits. At the end of 2015, the company had 20 orders outstanding. On January 29, 2016, Boeing announced that it had begun the preliminary work on the modifications to a commercial 747-8 for the next Air Force One presidential aircraft, then expected to be operational by 2020. On July 12, 2016, Boeing announced that it had finalized an order from Volga-Dnepr Group for 20 747-8 freighters, valued at \$7.58 billion at list prices. Four aircraft were delivered beginning in 2012. Volga-Dnepr Group is the parent of three major Russian air-freight carriers – Volga-Dnepr Airlines, AirBridgeCargo Airlines and Atran Airlines. The new 747-8 freighters would replace AirBridgeCargo's current 747-400 aircraft and expand the airline's fleet and will be acquired through a mix of direct purchases and leasing over the next six years, Boeing said. ### End of production On July 27, 2016, in its quarterly report to the Securities and Exchange Commission, Boeing discussed the potential termination of 747 production due to insufficient demand and market for the aircraft. 
With a firm order backlog of 21 aircraft and a production rate of six per year, program accounting had been reduced to 1,555 aircraft. In October 2016, UPS Airlines ordered 14 -8Fs to add capacity, along with 14 options, which it took in February 2018 to increase the total to 28 -8Fs on order. The backlog then stood at 25 aircraft, though several of these were orders from airlines that no longer intended to take delivery. On July 2, 2020, it was reported that Boeing planned to end 747 production in 2022 upon delivery of the remaining jets on order to UPS and the Volga-Dnepr Group due to low demand. On July 29, 2020, Boeing confirmed that the final 747 would be delivered in 2022 as a result of "current market dynamics and outlook" stemming from the COVID-19 pandemic, according to CEO David Calhoun. The last aircraft, a 747-8F for Atlas Air, rolled off the production line on December 6, 2022, and was delivered on January 31, 2023. Boeing hosted an event at the Everett factory for thousands of workers as well as industry executives to commemorate the delivery. ## Design The Boeing 747 is a large, wide-body (two-aisle) airliner with four wing-mounted engines. Its wings have a high sweep angle of 37.5° for a fast, efficient cruise speed of Mach 0.84 to 0.88, depending on the variant. The sweep also reduces the wingspan, allowing the 747 to use existing hangars. Its seating capacity is over 366 with a 3–4–3 seat arrangement (a cross section of three seats, an aisle, four seats, another aisle, and three seats) in economy class and a 2–3–2 layout in first class on the main deck. The upper deck has a 3–3 seat arrangement in economy class and a 2–2 layout in first class. Raised above the main deck, the cockpit creates a hump. This raised cockpit allows front loading of cargo on freight variants. The upper deck behind the cockpit provides space for a lounge and/or extra seating. The "stretched upper deck" became available as an alternative on the 747-100B variant and later as standard beginning on the 747-300. The upper deck was stretched more on the 747-8. The 747 cockpit roof section also has an escape hatch from which crew can exit in the event of an emergency if they cannot do so through the cabin. The 747's maximum takeoff weight ranges from 735,000 pounds (333 t) for the -100 to 970,000 pounds (440 t) for the -8. Its range has increased from 5,300 nautical miles (9,800 km; 6,100 mi) on the -100 to 8,000 nautical miles (15,000 km; 9,200 mi) on the -8I. The 747 has redundant structures along with four redundant hydraulic systems and four main landing gears, each with four wheels; these provide a good spread of support on the ground and safety in case of tire blow-outs. The main gear are redundant so that landing can be performed on two opposing landing gears if the others are not functioning properly. The 747 also has split control surfaces and was designed with sophisticated triple-slotted flaps that minimize landing speeds and allow the 747 to use standard-length runways. For transportation of spare engines, the 747 can accommodate a non-functioning fifth-pod engine under the aircraft's port wing between the inner functioning engine and the fuselage. The fifth engine mount point is also used by Virgin Orbit's LauncherOne program to carry an orbital-class rocket to cruise altitude, where it is deployed. ## Variants The 747-100, with a range of 4,620 nautical miles (8,556 km), was the original variant, launched in 1966. The 747-200 soon followed, with its launch in 1968. 
The 747-300 was launched in 1980 and was followed by the 747-400 in 1985. Ultimately, the 747-8 was announced in 2005. Several versions of each variant have been produced, and many of the early variants were in production simultaneously. The International Civil Aviation Organization (ICAO) classifies variants using a shortened code formed by combining the model number and the variant designator (e.g. "B741" for all -100 models). ### 747-100 The first 747-100s were built with six upper deck windows (three per side) to accommodate upstairs lounge areas. Later, as airlines began to use the upper deck for premium passenger seating instead of lounge space, Boeing offered an upper deck with ten windows on either side as an option. Some early -100s were retrofitted with the new configuration. The -100 was equipped with Pratt & Whitney JT9D-3A engines. No freighter version of this model was developed, but many 747-100s were converted into freighters as 747-100(SF). The first 747-100(SF) was delivered to Flying Tiger Line in 1974. A total of 168 747-100s were built; 167 were delivered to customers, while Boeing kept the prototype, City of Everett. In 1972, its unit cost was US\$24 million. #### 747SR Responding to requests from Japanese airlines for a high-capacity aircraft to serve domestic routes between major cities, Boeing developed the 747SR as a short-range version of the 747-100 with lower fuel capacity and greater payload capability. With increased economy class seating, up to 498 passengers could be carried in early versions and up to 550 in later models. The 747SR had an economic design life objective of 52,000 flights during 20 years of operation, compared to 24,600 flights in 20 years for the standard 747. The initial 747SR model, the -100SR, had a strengthened body structure and landing gear to accommodate the added stress accumulated from a greater number of takeoffs and landings. Extra structural support was built into the wings, fuselage, and landing gear, and fuel capacity was reduced by 20%. The initial order for the -100SR – four aircraft for Japan Air Lines (JAL, later Japan Airlines) – was announced on October 30, 1972; rollout occurred on August 3, 1973, and the first flight took place on August 31, 1973. The type was certified by the FAA on September 26, 1973, with the first delivery on the same day. The -100SR entered service with JAL, the type's sole customer, on October 7, 1973, and typically operated flights within Japan. Seven -100SRs were built between 1973 and 1975, each with a 520,000-pound (240 t) MTOW and Pratt & Whitney JT9D-7A engines derated to 43,000 pounds-force (190 kN) of thrust. Following the -100SR, Boeing produced the -100BSR, a 747SR variant with increased takeoff weight capability. Debuting in 1978, the -100BSR also incorporated structural modifications for a high cycle-to-flying-hour ratio; a related standard -100B model debuted in 1979. The -100BSR first flew on November 3, 1978, with first delivery to All Nippon Airways (ANA) on December 21, 1978. A total of 20 -100BSRs were produced for ANA and JAL. The -100BSR had a 600,000-pound (270 t) MTOW and was powered by the same JT9D-7A or General Electric CF6-45 engines used on the -100SR. ANA operated this variant on domestic Japanese routes with 455 or 456 seats until retiring its last aircraft in March 2006. In 1986, two -100BSR SUD models, featuring the stretched upper deck (SUD) of the -300, were produced for JAL. 
The type's maiden flight occurred on February 26, 1986, with FAA certification and first delivery on March 24, 1986. JAL operated the -100BSR SUDs with 563 seats on domestic routes until their retirement in the third quarter of 2006. While only two -100BSR SUDs were produced, in theory, standard -100Bs can be modified to the SUD certification. Overall, 29 Boeing 747SRs were built. #### 747-100B The 747-100B model was developed from the -100SR, using its stronger airframe and landing gear design. The type had an increased fuel capacity of 48,070 US gal (182,000 L), allowing for a 5,000-nautical-mile (9,300 km; 5,800 mi) range with a typical 452-passenger payload, and an increased MTOW of 750,000 lb (340 t) was offered. The first -100B order, one aircraft for Iran Air, was announced on June 1, 1978. This version first flew on June 20, 1979, received FAA certification on August 1, 1979, and was delivered the next day. Nine -100Bs were built, one for Iran Air and eight for Saudi Arabian Airlines. Unlike the original -100, the -100B was offered with Pratt & Whitney JT9D-7A, CF6-50, or Rolls-Royce RB211-524 engines. However, only RB211-524 (Saudia) and JT9D-7A (Iran Air) engines were ordered. The last 747-100B, EP-IAM, was retired in 2014 by Iran Air, the last commercial operator of the 747-100 and -100B. ### 747SP The development of the 747SP stemmed from a joint request from Pan American World Airways and Iran Air, which were looking for a high-capacity airliner with enough range to cover Pan Am's New York–Middle Eastern routes and Iran Air's planned Tehran–New York route. The Tehran–New York route, when launched, was the longest non-stop commercial flight in the world. The 747SP is 48 feet 4 inches (14.73 m) shorter than the 747-100. Fuselage sections were eliminated fore and aft of the wing, and the center section of the fuselage was redesigned to fit mating fuselage sections. The SP's flaps used a simplified single-slotted configuration. The 747SP, compared to earlier variants, had a tapering of the aft upper fuselage into the empennage, a double-hinged rudder, and longer vertical and horizontal stabilizers. Power was provided by Pratt & Whitney JT9D-7(A/F/J/FW) or Rolls-Royce RB211-524 engines. The 747SP was granted a type certificate on February 4, 1976, and entered service with launch customers Pan Am and Iran Air that same year. The aircraft was chosen by airlines wishing to serve major airports with short runways. A total of 45 747SPs were built, with the 44th 747SP delivered on August 30, 1982. In 1987, Boeing re-opened the 747SP production line after five years to build one last 747SP for an order by the United Arab Emirates government. In addition to airline use, one 747SP was modified for the NASA/German Aerospace Center SOFIA experiment. Iran Air is the last civil operator of the type; its final 747SP (EP-IAC) was to be retired in June 2016. ### 747-200 While the 747-100, powered by Pratt & Whitney JT9D-3A engines, offered enough payload and range for medium-haul operations, it was marginal for long-haul route sectors. The demand for longer-range aircraft with increased payload quickly led to the improved -200, which featured more powerful engines, increased MTOW, and greater range than the -100. A few early -200s retained the three-window configuration of the -100 on the upper deck, but most were built with a ten-window configuration on each side. The 747-200 was produced in passenger (-200B), freighter (-200F), convertible (-200C), and combi (-200M) versions. 
The 747-200B was the basic passenger version, with increased fuel capacity and more powerful engines; it entered service in February 1971. In its first three years of production, the -200 was equipped with Pratt & Whitney JT9D-7 engines (initially the only engine available). Range with a full passenger load started at over 5,000 nmi (9,300 km; 5,800 mi) and increased to 6,000 nmi (11,000 km; 6,900 mi) with later engines. Most -200Bs had an internally stretched upper deck, allowing for up to 16 passenger seats. The freighter model, the 747-200F, had a hinged nose cargo door and could be fitted with an optional side cargo door, and had a capacity of 105 tons (95.3 tonnes) and an MTOW of up to 833,000 pounds (378 t). It entered service in 1972 with Lufthansa. The convertible version, the 747-200C, could be converted between passenger and freighter configurations or used in mixed configurations, and featured removable seats and a nose cargo door. The -200C could also be outfitted with an optional side cargo door on the main deck. The combi aircraft model, the 747-200M (originally designated 747-200BC), could carry freight in the rear section of the main deck via a side cargo door. A removable partition on the main deck separated the cargo area at the rear from the passengers at the front. The -200M could carry up to 238 passengers in a three-class configuration with cargo carried on the main deck. The model was also known as the 747-200 Combi. As on the -100, a stretched upper deck (SUD) modification was later offered. A total of 10 747-200s operated by KLM were converted. Union de Transports Aériens (UTA) also had two aircraft converted. After launching the -200 with Pratt & Whitney JT9D-7 engines, Boeing announced on August 1, 1972, that it had reached an agreement with General Electric to certify the 747 with CF6-50 series engines to increase the aircraft's market potential. Rolls-Royce entered 747 engine production with a launch order from British Airways for four aircraft. The option of RB211-524B engines was announced on June 17, 1975. The -200 was the first 747 to provide a choice of powerplant from the three major engine manufacturers. In 1976, its unit cost was US\$39 million. A total of 393 of the 747-200 versions had been built when production ended in 1991. Of these, 225 were -200B, 73 were -200F, 13 were -200C, 78 were -200M, and 4 were military. Iran Air retired the last passenger 747-200 in May 2016, 36 years after it was delivered. As of July 2019, five 747-200s remain in service as freighters. ### 747-300 The 747-300 features a 23-foot-4-inch-longer (7.11 m) upper deck than the -200. The stretched upper deck (SUD) has two emergency exit doors and is the most visible difference between the -300 and previous models. After being made standard on the 747-300, the SUD was offered as a retrofit, and as an option to earlier variants still in production. Examples of the retrofit were two UTA -200 Combis converted in 1986; examples of the factory option were two brand-new JAL -100 aircraft (designated -100BSR SUD), the first of which was delivered on March 24, 1986. The 747-300 introduced a new straight stairway to the upper deck, instead of the spiral staircase on earlier variants, which creates room above and below for more seats. Minor aerodynamic changes allowed the -300's cruise speed to reach Mach 0.85 compared with Mach 0.84 on the -200 and -100 models, while retaining the same takeoff weight. 
The -300 could be equipped with the same Pratt & Whitney and Rolls-Royce powerplants as on the -200, as well as updated General Electric CF6-80C2B1 engines. Swissair placed the first order for the 747-300 on June 11, 1980. The variant revived the 747-300 designation, which had been previously used on a design study that did not reach production. The 747-300 first flew on October 5, 1982, and the type's first delivery went to Swissair on March 23, 1983. In 1982, its unit cost was US\$83 million. Besides the passenger model, two other versions (-300M, -300SR) were produced. The 747-300M features cargo capacity on the rear portion of the main deck, similar to the -200M, but with the stretched upper deck it can carry more passengers. The 747-300SR, a short-range, high-capacity domestic model, was produced for Japanese markets with maximum seating for 584. No production freighter version of the 747-300 was built, but Boeing began modifications of used passenger -300 models into freighters in 2000. A total of 81 747-300 series aircraft were delivered: 56 passenger, 21 -300M, and 4 -300SR versions. In 1985, just two years after the -300 entered service, the type was superseded by the announcement of the more advanced 747-400. The last 747-300 was delivered in September 1990 to Sabena. While some -300 customers continued operating the type, several large carriers replaced their 747-300s with 747-400s. Air France, Air India, Pakistan International Airlines, and Qantas were some of the last major carriers to operate the 747-300. On December 29, 2008, Qantas flew its last scheduled 747-300 service, operating from Melbourne to Los Angeles via Auckland. In July 2015, Pakistan International Airlines retired its final 747-300 after 30 years of service. As of July 2019, only two 747-300s remain in commercial service, with Mahan Air (1) and TransAVIAexport Airlines (1). ### 747-400 The 747-400 is an improved model with increased range. It has wingtip extensions of 6 ft (1.8 m) and winglets of 6 ft (1.8 m), which improve the type's fuel efficiency by four percent compared to previous 747 versions. The 747-400 introduced a new glass cockpit designed for a flight crew of two instead of three, with a reduction in the number of dials, gauges and knobs from 971 to 365 through the use of electronics. The type also features tail fuel tanks, revised engines, and a new interior. The longer range has been used by some airlines to bypass traditional fuel stops, such as Anchorage. A 747-400 loaded with 126,000 lb of fuel and flying 3,500 statute miles consumes an average of five gallons per mile. Powerplants include the Pratt & Whitney PW4062, General Electric CF6-80C2, and Rolls-Royce RB211-524. Because development of the Boeing 767 overlapped with that of the 747-400, both aircraft can use the same three powerplant families, and the engines are even interchangeable between the two models. The -400 was offered in passenger (-400), freighter (-400F), combi (-400M), domestic (-400D), extended-range passenger (-400ER), and extended-range freighter (-400ERF) versions. Passenger versions retain the same upper deck as the -300, while the freighter version does not have an extended upper deck. The 747-400D was built for short-range operations with maximum seating for 624. Winglets were not included, but they can be retrofitted. Cruising speed is up to Mach 0.855 on different versions of the 747-400. 
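As a rough cross-check of the 747-400 fuel-burn figure quoted above, the short sketch below converts the 126,000 lb fuel load to US gallons (assuming a Jet A density of roughly 6.7 lb per gallon, an approximation) and divides by the 3,500-statute-mile trip.

```python
# Back-of-the-envelope check of the quoted 747-400 fuel figures.
# The Jet A density used here (~6.7 lb per US gallon) is an approximation.
fuel_lb = 126_000          # fuel load quoted above, pounds
distance_mi = 3_500        # trip length quoted above, statute miles
lb_per_gallon = 6.7        # approximate density of Jet A

fuel_gallons = fuel_lb / lb_per_gallon          # ≈ 18,800 US gallons carried
gallons_per_mile = fuel_gallons / distance_mi   # ≈ 5.4 gal/mile if every gallon were burned
print(f"{fuel_gallons:,.0f} gal carried ≈ {gallons_per_mile:.1f} gal/mile upper bound")
```

The quoted average of five gallons per mile is consistent with this upper bound once reserve fuel, which is carried but not burned en route, is set aside.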
The passenger version first entered service in February 1989 with launch customer Northwest Airlines on the Minneapolis to Phoenix route. The combi version entered service in September 1989 with KLM, while the freighter version entered service in November 1993 with Cargolux. The 747-400ERF entered service with Air France in October 2002, while the 747-400ER entered service with Qantas, its sole customer, in November 2002. In January 2004, Boeing and Cathay Pacific launched the Boeing 747-400 Special Freighter program, later referred to as the Boeing Converted Freighter (BCF), to modify passenger 747-400s for cargo use. The first 747-400BCF was redelivered in December 2005. In March 2007, Boeing announced that it had no plans to produce further passenger versions of the -400. However, orders for 36 -400F and -400ERF freighters were already in place at the time of the announcement. The last passenger version of the 747-400 was delivered in April 2005 to China Airlines. Some of the last-built 747-400s were delivered with the Dreamliner livery along with the modern Signature interior from the Boeing 777. A total of 694 of the 747-400 series aircraft were delivered. At various times, the largest 747-400 operators have included Singapore Airlines, Japan Airlines, and British Airways. As of July 2019, 331 Boeing 747-400s were in service; there were only 10 Boeing 747-400s in passenger service as of September 2021. #### 747 LCF Dreamlifter The 747-400 Dreamlifter (originally called the 747 Large Cargo Freighter or LCF) is a Boeing-designed modification of existing 747-400s into a larger outsize cargo freighter configuration to ferry 787 Dreamliner sub-assemblies. Evergreen Aviation Technologies Corporation of Taiwan was contracted to complete modifications of 747-400s into Dreamlifters in Taoyuan. The aircraft made its first test flight on September 9, 2006. Modification of four aircraft was completed by February 2010. The Dreamlifters have been placed into service transporting sub-assemblies for the 787 program to the Boeing plant in Everett, Washington, for final assembly. The aircraft is certified to carry only essential crew and not passengers. ### 747-8 Boeing announced a new 747 variant, the 747-8, on November 14, 2005. Referred to as the 747 Advanced prior to its launch, the 747-8 uses General Electric GEnx engines and cockpit technology similar to those of the 787. The variant is designed to be quieter, more economical, and more environmentally friendly. The 747-8's fuselage is lengthened from 232 feet (71 m) to 251 feet (77 m), marking the first stretch variant of the aircraft. The 747-8 Freighter, or 747-8F, has 16% more payload capacity than its predecessor, allowing it to carry seven more standard air cargo containers, with a maximum payload capacity of 154 tons (140 tonnes) of cargo. As on previous 747 freighters, the 747-8F features a flip-up nose door, a side door on the main deck, and a side door on the lower deck ("belly") to aid loading and unloading. The 747-8F made its maiden flight on February 8, 2010. The variant received its amended type certificate jointly from the FAA and the European Aviation Safety Agency (EASA) on August 19, 2011. The -8F was first delivered to Cargolux on October 12, 2011. The passenger version, named 747-8 Intercontinental or 747-8I, is designed to carry up to 467 passengers in a 3-class configuration and fly more than 8,000 nautical miles (15,000 km; 9,200 mi) at Mach 0.855. 
As a derivative of the already common 747-400, the 747-8I has the economic benefit of similar training and interchangeable parts. The type's first test flight occurred on March 20, 2011. The 747-8 surpassed the Airbus A340-600 as the world's longest airliner, a record it held until the 777X first flew in 2020. The first -8I was delivered in May 2012 to Lufthansa. The 747-8 has received 155 total orders, including 106 for the -8F and 47 for the -8I as of June 2021. The final 747-8F was delivered to Atlas Air on January 31, 2023. ### Government, military, and other variants - VC-25 – This aircraft is the U.S. Air Force very important person (VIP) version of the 747-200B. The U.S. Air Force operates two of them in VIP configuration as the VC-25A. Tail numbers 28000 and 29000 are popularly known as Air Force One, which is technically the air-traffic call sign for any United States Air Force aircraft carrying the U.S. President. Partially completed aircraft from Everett, Washington, were flown to Wichita, Kansas, for final outfitting by Boeing Military Airplane Company. Two new aircraft based on the 747-8 are being procured and will be designated VC-25B. - E-4B – This is an airborne command post designed for use in nuclear war. Three E-4As, based on the 747-200B, were followed by a fourth aircraft with more powerful engines and upgraded systems, delivered in 1979 as the E-4B; the three E-4As were later upgraded to the E-4B standard. Formerly known as the National Emergency Airborne Command Post (referred to colloquially as "Kneecap"), this type is now referred to as the National Airborne Operations Center (NAOC). - YAL-1 – This was the experimental Airborne Laser, a planned component of the U.S. National Missile Defense. - Shuttle Carrier Aircraft (SCA) – Two 747s were modified to carry the Space Shuttle orbiter. The first was a 747-100 (N905NA), and the other was a 747-100SR (N911NA). The first SCA carried the prototype Enterprise during the Approach and Landing Tests in the late 1970s. The two SCAs later carried all five operational Space Shuttle orbiters. - C-33 – This aircraft was a proposed U.S. military version of the 747-400F intended to augment the C-17 fleet. The plan was canceled in favor of additional C-17s. - KC-25/33 – A proposed 747-200F was also adapted as an aerial refueling tanker and was bid against the DC-10-30 during the 1970s Advanced Tanker Cargo Aircraft (ATCA) program that produced the KC-10 Extender. Before the 1979 Iranian Revolution, Iran bought four 747-100 aircraft with air-refueling boom conversions to support its fleet of F-4 Phantoms. There is a report of the Iranians using a 747 tanker in the H-3 airstrike during the Iran–Iraq War. It is unknown whether these aircraft remain usable as tankers. Since then there have been proposals to use a 747-400 for that role. - 747F Airlifter – Proposed US military transport version of the 747-200F intended as an alternative to further purchases of the C-5 Galaxy. This 747 would have had a special nose jack to lower the sill height for the nose door. The system was tested in 1980 on a Flying Tiger Line 747-200F. - 747 CMCA – This "Cruise Missile Carrier Aircraft" variant was considered by the U.S. Air Force during the development of the B-1 Lancer strategic bomber. It would have been equipped with 50 to 100 AGM-86 ALCM cruise missiles on rotary launchers. This plan was abandoned in favor of more conventional strategic bombers. 
- 747 AAC – A Boeing study under contract from the USAF for an "airborne aircraft carrier" for up to 10 Boeing Model 985-121 "microfighters" with the ability to launch, retrieve, re-arm, and refuel. Boeing believed that the scheme would be able to deliver a flexible and fast carrier platform with global reach, particularly where other bases were not available. Modified versions of the 747-200 and Lockheed C-5A were considered as the base aircraft. The concept, which included a complementary 747 AWACS version with two reconnaissance "microfighters", was considered technically feasible in 1973. - Evergreen 747 Supertanker – A Boeing 747-200 modified as an aerial application platform for firefighting, using 20,000 US gallons (76,000 L) of firefighting chemicals. - Stratospheric Observatory for Infrared Astronomy (SOFIA) – A former Pan Am Boeing 747SP modified to carry a large infrared-sensitive telescope, in a joint venture of NASA and DLR. High altitudes are needed for infrared astronomy, to rise above infrared-absorbing water vapor in the atmosphere. - A number of other governments also use the 747 as a VIP transport, including Bahrain, Brunei, India, Iran, Japan, Kuwait, Oman, Pakistan, Qatar, Saudi Arabia, and the United Arab Emirates. Several Boeing 747-8s have been ordered by Boeing Business Jet for conversion to VIP transports for several unidentified customers. ### Undeveloped variants Boeing has studied a number of 747 variants that have not gone beyond the concept stage. #### 747 trijet During the late 1960s and early 1970s, Boeing studied the development of a shorter 747 with three engines, to compete with the smaller Lockheed L-1011 TriStar and McDonnell Douglas DC-10. The center engine would have been fitted in the tail with an S-duct intake similar to the L-1011's. Overall, the 747 trijet would have had more payload, range, and passenger capacity than both of them. However, engineering studies showed that a major redesign of the 747 wing would be necessary. Maintaining the same 747 handling characteristics would be important to minimize pilot retraining. Boeing decided instead to pursue a shortened four-engine 747, resulting in the 747SP. #### 747-500 In January 1986, Boeing outlined preliminary studies to build a larger, ultra-long-haul version named the 747-500, which would enter service in the mid- to late 1990s. The aircraft derivative would use engines evolved from unducted fan (UDF, or propfan) technology by General Electric, but the engines would have shrouds, sport a bypass ratio of 15–20, and have a propfan diameter of 10–12 feet (3.0–3.7 m). The aircraft would be stretched (including the upper deck section) to a capacity of 500 seats, have a new wing to reduce drag, cruise at a faster speed to reduce flight times, and have a range of at least 8,700 nmi (16,000 km), which would allow airlines to fly nonstop between London, England, and Sydney, Australia. #### 747 ASB Boeing announced the 747 ASB (Advanced Short Body) in 1986 as a response to the Airbus A340 and the McDonnell Douglas MD-11. This aircraft design would have combined the advanced technology used on the 747-400 with the foreshortened 747SP fuselage. The aircraft was to carry 295 passengers over a range of 8,000 nmi (15,000 km; 9,200 mi). However, airlines were not interested in the project and it was canceled in 1988 in favor of the 777. #### 747-500X, -600X, and -700X Boeing announced the 747-500X and -600X at the 1996 Farnborough Airshow. 
The proposed models would have combined the 747's fuselage with a new wing spanning 251 feet (77 m) derived from the 777. Other changes included adding more powerful engines and increasing the number of tires from two to four on the nose landing gear and from 16 to 20 on the main landing gear. The 747-500X concept featured a fuselage length increased by 18 feet (5.5 m) to 250 feet (76 m), and the aircraft was to carry 462 passengers over a range of up to 8,700 nautical miles (16,100 km; 10,000 mi), with a gross weight of over 1.0 Mlb (450 tonnes). The 747-600X concept featured a greater stretch to 279 feet (85 m) with seating for 548 passengers, a range of up to 7,700 nmi (14,300 km; 8,900 mi), and a gross weight of 1.2 Mlb (540 tonnes). A third study concept, the 747-700X, would have combined the wing of the 747-600X with a widened fuselage, allowing it to carry 650 passengers over the same range as a 747-400. The cost of the changes from previous 747 models, in particular the new wing for the 747-500X and -600X, was estimated to be more than US\$5 billion. Boeing was not able to attract enough interest to launch the aircraft. #### 747X and 747X Stretch As Airbus progressed with its A3XX study, Boeing offered a 747 derivative as an alternative in 2000: a more modest proposal than the previous -500X and -600X, it retained the 747's overall wing design and added a segment at the root, increasing the span to 229 ft (69.8 m). Power would have been supplied by either the Engine Alliance GP7172 or the Rolls-Royce Trent 600, which were also proposed for the 767-400ERX. A new flight deck based on the 777's would be used. The 747X aircraft was to carry 430 passengers over ranges of up to 8,700 nmi (16,100 km; 10,000 mi). The 747X Stretch would be extended to 263 ft (80.2 m) long, allowing it to carry 500 passengers over ranges of up to 7,800 nmi (14,400 km; 9,000 mi). Both would feature an interior based on the 777. Freighter versions of the 747X and 747X Stretch were also studied. Like its predecessor, the 747X family was unable to garner enough interest to justify production, and it was shelved along with the 767-400ERX in March 2001, when Boeing announced the Sonic Cruiser concept. Though the 747X design was less costly than the 747-500X and -600X, it was criticized for not offering a sufficient advance from the existing 747-400. The 747X did not make it beyond the drawing board, but the 747-400X, developed concurrently, moved into production to become the 747-400ER. #### 747-400XQLR After the end of the 747X program, Boeing continued to study improvements that could be made to the 747. The 747-400XQLR (Quiet Long Range) was meant to have an increased range of 7,980 nmi (14,780 km; 9,180 mi), with improvements to boost efficiency and reduce noise. Improvements studied included raked wingtips similar to those used on the 767-400ER and a sawtooth engine nacelle for noise reduction. Although the 747-400XQLR did not move to production, many of its features were used for the 747 Advanced, which was launched as the 747-8 in 2005. ## Operators In 1979, Qantas became the first airline in the world to operate an all-Boeing 747 fleet, with seventeen aircraft. As of July 2019, there were 462 Boeing 747s in airline service, with Atlas Air and British Airways being the largest operators with 33 747-400s each. The last Boeing 747 in US passenger service was retired by Delta Air Lines in December 2017; the type had flown for every major American carrier since its 1970 introduction. 
Delta flew three of its last four aircraft on a farewell tour, from Seattle to Atlanta on December 19, then to Los Angeles and Minneapolis/St Paul on December 20. With IATA forecasting air freight growth to rise from 4% to 5% in 2018, fueled by booming trade in time-sensitive goods from smartphones to fresh flowers, demand for freighters remained strong even as passenger 747s were phased out. Of the 1,544 produced, 890 had been retired; as of 2018, a small subset of those originally intended to be parted out instead received \$3 million D-checks and returned to service. Young -400s were sold for 320 million yuan (\$50 million), and Boeing stopped converting freighters, a process that used to cost nearly \$30 million. This comeback helped the airframer's financing arm, Boeing Capital, shrink its exposure to the 747-8 from \$1.07 billion in 2017 to \$481 million in 2018. In July 2020, British Airways announced that it was retiring its 747 fleet. The final British Airways 747 flights departed London Heathrow on October 8, 2020. ### Orders and deliveries Boeing 747 orders and deliveries (cumulative, by year): Orders and deliveries through to the end of February 2023. ### Model summary Orders and deliveries through to the end of February 2023. ## Accidents and incidents As of January 2023, the 747 has been involved in 173 aviation accidents and incidents, including 64 hull-loss accidents causing 3,746 fatalities. There have been several hijackings of Boeing 747s, such as Pan Am Flight 73, a 747-100 hijacked by four terrorists, causing 20 deaths. Few crashes have been attributed to 747 design flaws. The Tenerife airport disaster resulted from pilot error and communications failure, while the Japan Airlines Flight 123 and China Airlines Flight 611 crashes stemmed from improper aircraft repair. United Airlines Flight 811, which suffered an explosive decompression mid-flight on February 24, 1989, led the National Transportation Safety Board (NTSB) to recommend that Boeing 747-100 and 747-200 cargo doors similar to those on the Flight 811 aircraft be modified to match those featured on the Boeing 747-400. Korean Air Lines Flight 007 was shot down by a Soviet fighter aircraft in 1983 after it had strayed into Soviet territory, causing US President Ronald Reagan to authorize the then-strictly-military global positioning system (GPS) for civilian use. Accidents due to design deficiencies included TWA Flight 800, where a 747-100 exploded in mid-air on July 17, 1996, probably due to sparking electrical wires inside the fuel tank. This finding led the FAA to adopt a rule in July 2008 requiring installation of an inerting system in the center fuel tank of most large aircraft, after years of research into solutions. At the time, the new safety system was expected to cost US\$100,000 to \$450,000 per aircraft and weigh approximately 200 pounds (91 kg). El Al Flight 1862 crashed after the fuse pins for an engine broke off shortly after take-off due to metal fatigue. Instead of simply dropping away from the wing, the engine knocked off the adjacent engine and damaged the wing. ## Aircraft on display As increasing numbers of "classic" 747-100 and 747-200 series aircraft have been retired, some have been put to other uses, such as museum displays. Some older 747-300s and 747-400s were later added to museum collections. - 20235/001 – 747-121 registration N7470 City of Everett, the first 747 and prototype, is at the Museum of Flight, Seattle, Washington. 
- 19651/025 – 747-121 registration N747GE at the Pima Air & Space Museum, Tucson, Arizona, US. - 19778/027 – 747-151 registration N601US nose at the National Air and Space Museum, Washington, D.C. - 19661/070 – 747-121(SF) registration N681UP preserved at a plaza on Jungong Road, Shanghai, China. - 19896/072 – 747-132(SF) registration N481EV at the Evergreen Aviation & Space Museum, McMinnville, Oregon, US. - 20107/086 – 747-123 registration N905NA, a NASA Shuttle Carrier Aircraft, at the Johnson Space Center, Houston, Texas. - 20269/150 – 747-136 registration G-AWNG nose at Hiller Aviation Museum, San Carlos, California. - 20239/160 – 747-244B registration ZS-SAN nicknamed Lebombo, at the South African Airways Museum Society, Rand Airport, Johannesburg, South Africa. - 20541/200 – 747-128 registration F-BPVJ at Musée de l'Air et de l'Espace, Paris, France. - 20770/213 – 747-2B5B registration HL7463 at Jeongseok Aviation Center, Jeju, South Korea. - 20713/219 - 747-212B(SF) registration N482EV at the Evergreen Aviation & Space Museum, McMinnville, Oregon, US. - 21134/288 – 747SP-44 registration ZS-SPC at the South African Airways Museum Society, Rand Airport, Johannesburg, South Africa. - 21549/336 – 747-206B registration PH-BUK at the Aviodrome, Lelystad, Netherlands. - 21588/342 – 747-230B(M) registration D-ABYM preserved at Technik Museum Speyer, Germany. - 21650/354 – 747-2R7F/SCD registration G-MKGA preserved at Cotswold Airport as an event space. - 22145/410 – 747-238B registration VH-EBQ at the Qantas Founders Outback Museum, Longreach, Queensland, Australia. - 22455/515 – 747-256BM registration EC-DLD Lope de Vega nose at the National Museum of Science and Technology, A Coruña, Spain. - 23223/606 – 747-338 registration VH-EBU at Melbourne Avalon Airport, Avalon, Victoria, Australia. VH-EBU is an ex-Qantas airframe formerly decorated in the Nalanji Dreaming livery, currently in use as a training aircraft and film set. - 23719/696 – 747-451 registration N661US at the Delta Flight Museum, Atlanta, Georgia, US. This particular plane was the first 747-400 in service, as well as the prototype. - 24354/731 – 747-438 registration VH-OJA at Shellharbour Airport, Albion Park Rail, New South Wales, Australia. - 21441/306 - SOFIA - 747SP-21 registration N747NA at Pima Air and Space Museum in Tucson, Arizona. Former Pan Am and United Airlines 747SP bought by NASA and converted into a flying telescope, for astronomy purposes. Named Clipper Lindbergh. ### Other uses Upon its retirement from service, the 747 which was number two in the production line was dismantled and shipped to Hopyeong, Namyangju, Gyeonggi-do, South Korea where it was re-assembled, repainted in a livery similar to that of Air Force One and converted into a restaurant. Originally flown commercially by Pan Am as N747PA, Clipper Juan T. Trippe, and repaired for service following a tailstrike, it stayed with the airline until its bankruptcy. The restaurant closed by 2009, and the aircraft was scrapped in 2010. A former British Airways 747-200B, G-BDXJ, is parked at the Dunsfold Aerodrome in Surrey, England and has been used as a movie set for productions such as the 2006 James Bond film, Casino Royale. The airplane also appears frequently in the television series Top Gear, which is filmed at Dunsfold. The Jumbo Stay hostel, using a converted 747-200 formerly registered as 9V-SQE, opened at Arlanda Airport, Stockholm in January 2009. 
A former Pakistan International Airlines 747-300 was converted into a restaurant by Pakistan's Airports Security Force in 2017. It is located at Jinnah International Airport, Karachi. The wings of a 747 have been repurposed as roofs of a house in Malibu, California. In 2023, a 747-200B originally operated by Lufthansa as a combi aircraft bearing the registration D-ABYW and named Berlin, and later by Lufthansa Cargo and other airlines as a full freighter, was opened as a Coach outlet store at Freeport A'Famosa Outlet Mall in Malacca, Malaysia. ## Specifications ## Cultural impact Following its debut, the 747 rapidly achieved iconic status. The aircraft entered the cultural lexicon as the original Jumbo Jet, a term coined by the aviation media to describe its size, and was also nicknamed Queen of the Skies. Test pilot David P. Davies described it as "a most impressive aeroplane with a number of exceptionally fine qualities", and praised its flight control system as "truly outstanding" because of its redundancy. Appearing in over 300 film productions, the 747 is one of the most widely depicted civilian aircraft and is considered by many as one of the most iconic in film history. It has appeared in film productions such as the disaster films Airport 1975 and Airport '77, as well as Air Force One, Die Hard 2, and Executive Decision. ## See also
246,333
Mayfly
1,172,273,751
Aquatic insects of the order Ephemeroptera
[ "Aquatic insects", "Extant Pennsylvanian first appearances", "Featured articles", "Mayflies" ]
Mayflies (also known as shadflies or fishflies in Canada and the upper Midwestern United States, as Canadian soldiers in the American Great Lakes region, and as up-winged flies in the United Kingdom) are aquatic insects belonging to the order Ephemeroptera. This order is part of an ancient group of insects termed the Palaeoptera, which also contains dragonflies and damselflies. Over 3,000 species of mayfly are known worldwide, grouped into over 400 genera in 42 families. Mayflies have ancestral traits that were probably present in the first flying insects, such as long tails and wings that do not fold flat over the abdomen. Their immature stages are aquatic fresh water forms (called "naiads" or "nymphs"), whose presence indicates a clean, unpolluted and highly oxygenated aquatic environment. They are unique among insect orders in having a fully winged terrestrial preadult stage, the subimago, which moults into a sexually mature adult, the imago. Mayflies "hatch" (emerge as adults) from spring to autumn, not necessarily in May, in enormous numbers. Some hatches attract tourists. Fly fishermen make use of mayfly hatches by choosing artificial fishing flies that resemble them. One of the most famous English mayflies is Rhithrogena germanica, the fisherman's "March brown mayfly". The brief lives of mayfly adults have been noted by naturalists and encyclopaedists since Aristotle and Pliny the Elder in classical antiquity. The German engraver Albrecht Dürer included a mayfly in his 1495 engraving The Holy Family with the Mayfly to suggest a link between heaven and earth. The English poet George Crabbe compared the brief life of a daily newspaper with that of a mayfly in the satirical poem "The Newspaper" (1785), both being known as "ephemera". ## Description ### Nymph Immature mayflies are aquatic and are referred to as nymphs or naiads. In contrast to their short lives as adults, they may live for several years in the water. They have an elongated, cylindrical or somewhat flattened body that passes through a number of instars (stages), molting and increasing in size each time. When ready to emerge from the water, nymphs vary in length, depending on species, from 3 to 30 mm (0.12 to 1.18 in). The head has a tough outer covering of sclerotin, often with various hard ridges and projections; it points either forwards or downwards, with the mouth at the front. There are two large compound eyes, three ocelli (simple eyes) and a pair of antennae of variable lengths, set between or in front of the eyes. The mouthparts are designed for chewing and consist of a flap-like labrum, a pair of strong mandibles, a pair of maxillae, a membranous hypopharynx and a labium. The thorax consists of three segments – the hindmost two, the mesothorax and metathorax, being fused. Each segment bears a pair of legs which usually terminate in a single claw. The legs are robust and often clad in bristles, hairs or spines. Wing pads develop on the mesothorax, and in some species, hindwing pads develop on the metathorax. The abdomen consists of ten segments, some of which may be obscured by a large pair of operculate gills, a thoracic shield (expanded part of the prothorax) or the developing wing pads. In most taxa up to seven pairs of gills arise from the top or sides of the abdomen, but in some species they are under the abdomen, and in a very few species the gills are instead located on the coxae of the legs, or the bases of the maxillae. 
The abdomen terminates in slender thread-like projections, consisting of a pair of cerci, with or without a third central caudal filament. ### Subimago The final moult of the nymph is not to the full adult form, but to a winged stage called a subimago that physically resembles the adult, but which is usually sexually immature and duller in colour. The subimago, or dun, often has partially cloudy wings fringed with minute hairs known as microtrichia; its eyes, legs and genitalia are not fully developed. Females of some mayflies (subfamily Palingeniinae) do not moult from a subimago state into an adult stage and are sexually mature while appearing like a subimago with microtrichia on the wing membrane. Oligoneuriine mayflies form another exception in retaining microtrichia on their wings but not on their bodies. Subimagos are generally poor fliers, have shorter appendages, and typically lack the colour patterns used to attract mates. In males of Ephoron leukon, the subimagos have forelegs that are short and compressed, with accordion-like folds, and which expand to more than double their length after moulting. After a period, usually lasting one or two days but in some species only a few minutes, the subimago moults to the full adult form, making mayflies the only insects where a winged form undergoes a further moult. ### Imago Adult mayflies, or imagos, are relatively primitive in structure, exhibiting traits that were probably present in the first flying insects. These include long tails and wings that do not fold flat over the abdomen. Mayflies are delicate-looking insects with one or two pairs of membranous, triangular wings, which are extensively covered with veins. At rest, the wings are held upright, like those of a butterfly. The hind wings are much smaller than the forewings and may be vestigial or absent. The second segment of the thorax, which bears the forewings, is enlarged to hold the main flight muscles. Adults have short, flexible antennae, large compound eyes, three ocelli and non-functional mouthparts. In most species, the males' eyes are large and the front legs unusually long, for use in locating and grasping females during the mid-air mating. In the males of some families, there are two large cylindrical "turban" eyes (also known as turbanate or turbinate eyes) that face upwards in addition to the lateral eyes. They are capable of detecting ultraviolet light and are thought to be used during courtship to detect females flying above them. In some species all the legs are functionless, apart from the front pair in males. The abdomen is long and roughly cylindrical, with ten segments and two or three long cerci (tail-like appendages) at the tip. As in Entognatha, Archaeognatha and Zygentoma, the spiracles on the abdomen do not have closing muscles. Uniquely among insects, mayflies possess paired genitalia, with the male having two aedeagi (penis-like organs) and the female two gonopores (sexual openings). ## Biology ### Reproduction and life cycle Mayflies are hemimetabolous (they have "incomplete metamorphosis"). They are unique among insects in that they moult one more time after acquiring functional wings; this last-but-one winged (alate) instar usually lives a very short time and is known as a subimago, or to fly fishermen as a dun. Mayflies at the subimago stage are a favourite food of many fish, and many fishing flies are modelled to resemble them. The subimago stage does not survive for long, rarely for more than 24 hours. 
In some species, it may last for just a few minutes, while the mayflies in the family Palingeniidae have sexually mature subimagos and no true adult form at all. Often, all the individuals in a population mature at once (a hatch), and for a day or two in the spring or autumn, mayflies are extremely abundant, dancing around each other in large groups, or resting on every available surface. In many species the emergence is synchronised with dawn or dusk, and light intensity seems to be an important cue for emergence, but other factors may also be involved. Baetis intercalaris, for example, usually emerges just after sunset in July and August, but in one year, a large hatch was observed at midday in June. The soft-bodied subimagos are very attractive to predators. Synchronous emergence is probably an adaptive strategy that reduces the individual's risk of being eaten. The lifespan of an adult mayfly is very short, varying with the species. The primary function of the adult is reproduction; adults do not feed and have only vestigial mouthparts, while their digestive systems are filled with air. Dolania americana has the shortest adult lifespan of any mayfly: the adult females of the species live for less than five minutes. Male adults may patrol individually, but most congregate in swarms a few metres above water with clear open sky above it, and perform a nuptial or courtship dance. Each insect has a characteristic up-and-down pattern of movement; strong wingbeats propel it upwards and forwards with the tail sloping down; when it stops moving its wings, it falls passively with the abdomen tilted upwards. Females fly into these swarms, and mating takes place in the air. A rising male clasps the thorax of a female from below using his front legs bent upwards, and inseminates her. Copulation may last just a few seconds, but occasionally a pair remains in tandem and flutters to the ground. Males may spend the night in vegetation and return to their dance the following day. Although they do not feed, some briefly touch the surface to drink a little water before flying off. Females typically lay between four hundred and three thousand eggs. The eggs are often dropped onto the surface of the water; sometimes the female deposits them by dipping the tip of her abdomen into the water during flight, releasing a small batch of eggs each time, or deposits them in bulk while standing next to the water. In a few species, the female submerges and places the eggs among plants or in crevices underwater, but in general, they sink to the bottom. The incubation time is variable, depending at least in part on temperature, and may be anything from a few days to nearly a year. Eggs can go into a quiet dormant phase or diapause. The larval growth rate is also temperature-dependent, as is the number of moults. At anywhere between ten and fifty, these post-embryonic moults are more numerous in mayflies than in most other insect orders. The nymphal stage of mayflies may last from several months to several years, depending on species and environmental conditions. Around half of all mayfly species whose reproductive biology has been described are parthenogenetic (able to asexually reproduce), including both partially and exclusively parthenogenetic populations and species. Many species breed in moving water, where there is a tendency for the eggs and nymphs to get washed downstream. To counteract this, females may fly upriver before depositing their eggs. 
For example, the female Tisza mayfly, the largest European species with a length of 12 cm (4.7 in), flies up to 3 kilometres (2 mi) upstream before depositing eggs on the water surface. These sink to the bottom and hatch after 45 days, the nymphs burrowing their way into the sediment where they spend two or three years before hatching into subimagos. When ready to emerge, several different strategies are used. In some species, the transformation of the nymph occurs underwater and the subimago swims to the surface and launches itself into the air. In other species, the nymph rises to the surface, bursts out of its skin, remains quiescent for a minute or two resting on the exuviae (cast skin) and then flies upwards, and in some, the nymph climbs out of the water before transforming. ### Ecology Nymphs live primarily in streams under rocks, in decaying vegetation or in sediments. Few species live in lakes, but they are among the most prolific. For example, the emergence of one species of Hexagenia was recorded on Doppler weather radar by the shoreline of Lake Erie in 2003. In the nymphs of most mayfly species, the paddle-like gills do not function as respiratory surfaces because sufficient oxygen is absorbed through the integument, instead serving to create a respiratory current. However, in low-oxygen environments such as the mud at the bottom of ponds in which Ephemera vulgata burrows, the filamentous gills act as true accessory respiratory organs and are used in gaseous exchange. In most species, the nymphs are herbivores or detritivores, feeding on algae, diatoms or detritus, but in a few species, they are predators of chironomid and other small insect larvae and nymphs. Nymphs of Povilla burrow into submerged wood and can be a problem for boat owners in Asia. Some are able to shift from one feeding group to another as they grow, thus enabling them to utilise a variety of food resources. They process a great quantity of organic matter as nymphs and transfer a lot of phosphates and nitrates to terrestrial environments when they emerge from the water, thus helping to remove pollutants from aqueous systems. Along with caddisfly larvae and gastropod molluscs, the grazing of mayfly nymphs has a significant impact on the primary producers, the plants and algae, on the bed of streams and rivers. The nymphs are eaten by a wide range of predators and form an important part of the aquatic food chain. Fish are among the main predators, picking nymphs off the bottom or ingesting them in the water column, and feeding on emerging nymphs and adults on the water surface. Carnivorous stonefly, caddisfly, alderfly and dragonfly larvae feed on bottom-dwelling mayfly nymphs, as do aquatic beetles, leeches, crayfish and amphibians. Besides the direct mortality caused by these predators, the behaviour of their potential prey is also affected, with the nymphs' growth rate being slowed by the need to hide rather than feed. The nymphs are highly susceptible to pollution and can be useful in the biomonitoring of water bodies. Once they have emerged, large numbers are preyed on by birds, bats and by other insects, such as Rhamphomyia longicauda. Mayfly nymphs may serve as hosts for parasites such as nematodes and trematodes. Some of these affect the nymphs' behaviour in such a way that they become more likely to be predated. 
Other nematodes turn adult male mayflies into quasi-females which haunt the edges of streams, enabling the parasites to break their way out into the aqueous environment they need to complete their life cycles. The nymphs can also serve as intermediate hosts for the horsehair worm Paragordius varius, which causes its definitive host, a grasshopper, to jump into water and drown. #### Effects on ecosystem functioning Mayflies are involved in both primary production and bioturbation. A study in laboratory-simulated streams revealed that the mayfly genus Centroptilum increased the export of periphyton, thus indirectly boosting primary production, an essential process for ecosystems. Mayflies can also reallocate and alter nutrient availability in aquatic habitats through the process of bioturbation. By burrowing in the bottom of lakes and redistributing nutrients, mayflies indirectly regulate phytoplankton and epibenthic primary production. Once burrowed into the bottom of a lake, mayfly nymphs begin to billow their respiratory gills. This motion creates a current that carries food particles through the burrow and allows the nymph to filter feed. Other mayfly nymphs possess elaborate filter-feeding mechanisms, like those of the genus Isonychia. The nymphs have forelegs bearing long bristle-like structures with two rows of hairs. Interlocking hairs form the filter by which the insect traps food particles. The action of filter feeding has a small impact on water purification but a larger impact on the convergence of small particulate matter into matter of a more complex form that goes on to benefit consumers later in the food chain. ### Distribution Mayflies are distributed all over the world in clean freshwater habitats, though absent from Antarctica. They tend to be absent from oceanic islands or represented by one or two species that have dispersed from the nearby mainland. Female mayflies may be dispersed by wind, and eggs may be transferred by adhesion to the legs of waterbirds. The greatest generic diversity is found in the Neotropical realm, while the Holarctic has a smaller number of genera but a high degree of speciation. Some thirteen families are restricted to a single bioregion. The main families have some general habitat preferences: the Baetidae favour warm water; the Heptageniidae live under stones and prefer fast-flowing water; and the relatively large Ephemeridae make burrows in sandy lake or river beds. ## Conservation The nymph is the dominant life history stage of the mayfly. Different insect species vary in their tolerance to water pollution, but in general, the larval stages of mayflies, stoneflies (Plecoptera) and caddis flies (Trichoptera) are susceptible to a number of pollutants including sewage, pesticides and industrial effluent. In general, mayflies are particularly sensitive to acidification, but tolerances vary, and certain species are exceptionally tolerant to heavy metal contamination and to low pH levels. Ephemerellidae are among the most tolerant groups and Siphlonuridae and Caenidae the least. The adverse effects of pollution on the insects may be either lethal or sub-lethal, in the latter case resulting in altered enzyme function, poor growth, changed behaviour or lack of reproductive success. 
Because mayflies are important parts of the food chain, pollution can cause knock-on effects for other organisms; a dearth of herbivorous nymphs can cause overgrowth of algae, and a scarcity of predacious nymphs can result in an over-abundance of their prey species. Fish that feed on mayfly nymphs that have bioaccumulated heavy metals are themselves at risk. Adult female mayflies find water by detecting the polarization of reflected light. They are easily fooled by other polished surfaces, which can act as traps for swarming mayflies. The threat to mayflies applies also to their eggs. "Modest levels" of pollution in rivers in England are sufficient to kill 80% of mayfly eggs, which are as vulnerable to pollutants as other life-cycle stages; numbers of the blue-winged olive mayfly (Baetis) have fallen dramatically, almost to none in some rivers. The major pollutants thought to be responsible are fine sediment and phosphate from agriculture and sewage. The status of many species of mayflies is unknown because they are known from only the original collection data. Four North American species are believed to be extinct. Among these, Pentagenia robusta was originally collected from the Ohio River near Cincinnati, but this species has not been seen since its original collection in the 1800s. Ephemera compar is known from a single specimen, collected from the "foothills of Colorado" in 1873, but despite intensive surveys of the Colorado mayflies reported in 1984, it has not been rediscovered. The International Union for Conservation of Nature (IUCN) red list of threatened species includes one mayfly: Tasmanophlebia lacuscoerulei, the large blue lake mayfly, which is a native of Australia and is listed as endangered because its alpine habitat is vulnerable to climate change. ## Taxonomy and phylogeny Ephemeroptera was defined by Alpheus Hyatt and Jennie Maria Arms Sheldon in 1890–1891. The taxonomy of the Ephemeroptera was reworked by George F. Edmunds and Jay R. Traver, starting in 1954. Traver contributed to the 1935 work The Biology of Mayflies, and has been called "the first Ephemeroptera specialist in North America". As of 2012, over 3,000 species of mayfly in 42 families and over 400 genera are known worldwide, including about 630 species in North America. Mayflies are an ancient group of winged (pterygote) insects. Putative fossil stem group representatives (e.g. Syntonopteroidea-like Lithoneura lameerei) are already known from the late Carboniferous. The name Ephemeroptera is from the Greek ἐφήμερος, ephemeros "short-lived" (literally "lasting a day", cf. English "ephemeral"), and πτερόν, pteron, "wing", referring to the brief lifespan of adults. The English common name refers to the insect's emergence in or around the month of May in the UK. The name shadfly is from the Atlantic fish the shad, which runs up American East Coast rivers at the same time as many mayflies emerge. From the Permian, numerous stem group representatives of mayflies are known, which are often lumped into a separate taxon Permoplectoptera (e.g. including Protereisma permianum in the Protereismatidae, and Misthodotidae). The larvae of Permoplectoptera still had 9 pairs of abdominal gills, and the adults still had long hindwings. The fossil family Cretereismatidae from the Lower Cretaceous Crato Formation of Brazil may also belong to the Permoplectoptera as its last offshoot. The Crato outcrops otherwise yielded fossil specimens of modern mayfly families or the extinct (but modern) family Hexagenitidae. 
However, from the same locality the strange larvae and adults of the extinct family Mickoleitiidae (order Coxoplectoptera) have been described, which represents the fossil sister group of modern mayflies, even though they had very peculiar adaptations such as raptorial forelegs. The oldest mayfly inclusion in amber is Cretoneta zherichini (Leptophlebiidae) from the Lower Cretaceous of Siberia. In the much younger Baltic amber numerous inclusions of several modern families of mayflies have been found (Ephemeridae, Potamanthidae, Leptophlebiidae, Ametropodidae, Siphlonuridae, Isonychiidae, Heptageniidae, and Ephemerellidae). The modern genus Neoephemera is represented in the fossil record by the Ypresian species N. antiqua from Washington state. Grimaldi and Engel, reviewing the phylogeny in 2005, commented that many cladistic studies had been made with no stability in Ephemeroptera suborders and infraorders; the traditional division into Schistonota and Pannota was wrong because Pannota is derived from the Schistonota. The phylogeny of the Ephemeroptera was first studied using molecular analysis by Ogden and Whiting in 2005. They recovered the Baetidae as sister to the other clades. Mayfly phylogeny was further studied using morphological and molecular analyses by Ogden and others in 2009. They found that the Asian genus Siphluriscus was sister to all other mayflies. Some existing lineages such as Ephemeroidea, and families such as Ameletopsidae, were found not to be monophyletic, through convergence among nymphal features. The following traditional classification, with two suborders Pannota and Schistonota, was introduced in 1979 by W. P. McCafferty and George F. Edmunds. The list is based on Peters and Campbell (1991), in Insects of Australia.

Suborder Pannota
- Superfamily Ephemerelloidea
  - Ephemerellidae
  - Leptohyphidae
  - Tricorythidae
- Superfamily Caenoidea
  - Neoephemeridae
  - Baetiscidae
  - Caenidae
  - Prosopistomatidae

Suborder Schistonota
- Superfamily Baetoidea
  - Siphlonuridae
  - Baetidae
  - Oniscigastridae
  - Ameletopsidae
  - Ametropodidae
- Superfamily Heptagenioidea
  - Coloburiscidae
  - Oligoneuriidae
  - Isonychiidae
  - Heptageniidae
- Superfamily Leptophlebioidea
  - Leptophlebiidae
- Superfamily Ephemeroidea
  - Behningiidae
  - Potamanthidae
  - Euthyplociidae
  - Polymitarcyidae
  - Ephemeridae
  - Palingeniidae

### Phylogeny

## In human culture ### In art The Dutch Golden Age author Augerius Clutius (Outgert Cluyt) illustrated some mayflies in his 1634 De Hemerobio ("On the Mayfly"), the earliest book written on the group. Maerten de Vos similarly illustrated a mayfly in his 1587 depiction of the fifth day of creation, amongst an assortment of fish and water birds. In 1495 Albrecht Dürer included a mayfly in his engraving The Holy Family with the Mayfly. The critics Larry Silver and Pamela H. Smith argue that the image provides "an explicit link between heaven and earth ... to suggest a cosmic resonance between sacred and profane, celestial and terrestrial, macrocosm and microcosm." ### In literature The Ancient Greek biologist and philosopher Aristotle wrote in his History of Animals that > Bloodless and many footed animals, whether furnished with wings or feet, move with more than four points of motion; as, for instance, the dayfly (ephemeron) moves with four feet and four wings: and, I may observe in passing, this creature is exceptional not only in regard to the duration of its existence, whence it receives its name, but also because though a quadruped it has wings also. 
The Ancient Roman encyclopaedist Pliny the Elder described the mayfly as the "hemerobius" in his Natural History: > The River Bug on the Black Sea at midsummer brings down some thin membranes that look like berries out of which burst a four-legged caterpillar in the manner of the creature mentioned above, but it does not live beyond one day, owing to which it is called the hemerobius. In his 1789 book The Natural History and Antiquities of Selborne, Gilbert White described in the entry for "June 10th, 1771" how > Myriads of May-flies appear for the first time on the Alresford stream. The air was crowded with them, and the surface of the water covered. Large trouts sucked them in as they lay struggling on the surface of the stream, unable to rise till their wings were dried ... Their motions are very peculiar, up and down for so many yards almost in a perpendicular line. The mayfly has come to symbolise the transitoriness and brevity of life. The English poet George Crabbe, known to have been interested in insects, compared the brief life of a newspaper with that of mayflies, both being known as "Ephemera", things that live for a day: > > In shoals the hours their constant numbers bring Like insects waking to th' advancing spring; Which take their rise from grubs obscene that lie In shallow pools, or thence ascend the sky: Such are these base ephemeras, so born To die before the next revolving morn. The theme of brief life is echoed in the artist Douglas Florian's 1998 poem, "The Mayfly". The American Poet Laureate Richard Wilbur's 2005 poem "Mayflies" includes the lines "I saw from unseen pools a mist of flies, In their quadrillions rise, And animate a ragged patch of glow, With sudden glittering". Another literary reference to mayflies is seen in The Epic of Gilgamesh, one of the earliest surviving great works of literature. The briefness of Gilgamesh's life is compared to that of the adult mayfly. In Szeged, Hungary, mayflies are celebrated in a monument near the Belvárosi bridge, the work of local sculptor Pal Farkas, depicting the courtship dance of mayflies. The American playwright David Ives wrote a short comedic play, Time Flies, in 2001, as to what two mayflies might discuss during their one day of existence. ### In fly fishing Mayflies are the primary source of models for artificial flies, hooks tied with coloured materials such as threads and feathers, used in fly fishing. These are based on different life-cycle stages of mayflies. For example, the flies known as "emergers" in North America are designed by fly fishermen to resemble subimago mayflies, and are intended to lure freshwater trout. In 1983, Patrick McCafferty recorded that artificial flies had been based on 36 genera of North American mayfly, from a total of 63 western species and 103 eastern/central species. A large number of these species have common names among fly fishermen, who need to develop a substantial knowledge of mayfly "habitat, distribution, seasonality, morphology and behavior" in order to match precisely the look and movements of the insects that the local trout are expecting. Izaak Walton describes the use of mayflies for catching trout in his 1653 book The Compleat Angler; for example, he names the "Green-drake" for use as a natural fly, and "duns" (mayfly subimagos) as artificial flies. 
These include for example the "Great Dun" and the "Great Blue Dun" in February; the "Whitish Dun" in March; the "Whirling Dun" and the "Yellow Dun" in April; the "Green-drake", the "Little Yellow May-Fly" and the "Grey-Drake" in May; and the "Black-Blue Dun" in July. Nymph or "wet fly" fishing was restored to popularity on the chalk streams of England by G. E. M. Skues with his 1910 book Minor Tactics of the Chalk Stream. In the book, Skues discusses the use of duns to catch trout. The March brown is "probably the most famous of all British mayflies", having been copied by anglers to catch trout for over 500 years. Some English public houses beside trout streams such as the River Test in Hampshire are named "The Mayfly". ### As a spectacle The hatch of the giant mayfly Palingenia longicauda on the Tisza and Maros Rivers in Hungary and Serbia, known as "Tisza blooming", is a tourist attraction. The 2014 hatch of the large black-brown mayfly Hexagenia bilineata on the Mississippi River in the US was imaged on weather radar; the swarm flew up to 760 m (2,500 feet) above the ground near La Crosse, Wisconsin, creating a radar signature that resembled a "significant rain storm", and the mass of dead insects covering roads, cars and buildings caused a "slimy mess". During the weekend of 13–14 June 2015, a large swarm of mayflies caused several vehicular accidents on the Columbia–Wrightsville Bridge, which carries Pennsylvania Route 462 across the Susquehanna River between Columbia and Wrightsville, Pennsylvania. The bridge had to be closed to traffic twice during that period due to impaired visibility and obstructions posed by piles of dead insects. ### As food Mayflies are consumed in several cultures and are estimated to have the highest raw protein content of any edible insect by dry weight. In Malawi, kungu, a paste of mayflies (Caenis kungu) and mosquitoes, is made into a cake for eating. Adult mayflies are collected and eaten in many parts of China and Japan. Near Lake Victoria, Povilla mayflies are collected, dried and preserved for use in food preparations. ### As a name for ships and aircraft "Mayfly" was the crew's nickname for His Majesty's Airship No. 1, an aerial scout airship built by Vickers but wrecked by strong winds in 1911 before her trial flights. Two vessels of the Royal Navy were named HMS Mayfly: a torpedo boat launched in January 1907, and a Fly-class river gunboat constructed in sections at Yarrow in 1915. The Seddon Mayfly, which was constructed in 1908, was an early aircraft that proved unsuccessful in flight. The first aircraft designed by a woman, Lillian Bland, was named the Bland Mayfly. ### Other human uses In pre-1950s France, "chute de manne" was obtained by pressing mayflies into cakes, which were used as bird food and fishbait. From an economic standpoint, mayflies also provide fisheries with an excellent diet for fish. Mayflies could find uses in the biomedical, pharmaceutical, and cosmetic industries. Their exoskeleton contains chitin, which has applications in these industries. Research on gene expression in the mayfly Cloeon dipterum has provided ideas on the evolution of the insect wing, giving support to the so-called gill theory, which suggests that the ancestral insect wing may have evolved from the larval gills of aquatic insects like mayflies. Mayfly larvae do not survive in polluted aquatic habitats and thus have been chosen as bioindicators (markers of water quality) in ecological assessments. 
In marketing, Nike produced a line of running shoes in 2003 titled "Mayfly". The shoes were designed with a wing venation pattern like the mayfly's and were also said to have a finite lifetime. The telecommunications company Vodafone featured mayflies in a 2006 branding campaign, telling consumers to "make the most of now".
586,822
Qatna
1,162,667,991
Archaeological site in Syria
[ "Amarna letters locations", "Ancient Levant", "Archaeological sites in Homs Governorate", "Bronze Age sites in Syria", "Former kingdoms", "Former populated places in Syria", "Qatna", "States and territories disestablished in the 14th century BC", "States and territories established in the 20th century BC", "Tells (archaeology)" ]
Qatna (modern: Arabic: تل المشرفة, Tell al-Mishrifeh) (also Tell Misrife or Tell Mishrifeh) was an ancient city located in Homs Governorate, Syria. Its remains constitute a tell situated about 18 km (11 mi) northeast of Homs near the village of al-Mishrifeh. The city was an important center through most of the second millennium BC and in the first half of the first millennium BC. It contained one of the largest royal palaces of Bronze Age Syria and an intact royal tomb that has provided a great amount of archaeological evidence on the funerary habits of that period. First inhabited for a short period in the second half of the fourth millennium BC, it was repopulated around 2800 BC and continued to grow. By 2000 BC, it became the capital of a regional kingdom that spread its authority over large swaths of the central and southern Levant. The kingdom enjoyed good relations with Mari, but was engaged in constant warfare against Yamhad. By the 15th century BC, Qatna lost its hegemony and came under the authority of Mitanni. It later changed hands between the former and Egypt, until it was conquered and sacked by the Hittites in the late 14th century BC. Following its destruction, the city was reduced in size before being abandoned by the 13th century BC. It was resettled in the 10th century BC, becoming a center of the kingdoms of Palistin then Hamath until it was destroyed by the Assyrians in 720 BC, which reduced it to a small village that eventually disappeared in the 6th century BC. In the 19th century AD, the site was populated by villagers who were evacuated into the newly built village of al-Mishrifeh in 1982. The site has been excavated since the 1920s. Qatna was inhabited by different peoples, most importantly the Amorites, who established the kingdom, followed by the Arameans; Hurrians became part of the society in the 15th century BC and influenced Qatna's written language. The city's art is distinctive and shows signs of contact with different surrounding regions. The artifacts of Qatna show high-quality workmanship. The city's religion was complex and based on many cults in which ancestor worship played an important role. Qatna's location in the middle of the Near East trade networks helped it achieve wealth and prosperity; it traded with regions as far away as the Baltic and Afghanistan. The area surrounding Qatna was fertile, with abundant water, which made the lands suitable for grazing and supported a large population that contributed to the prosperity of the city. ## Etymology Third millennium texts do not mention the name Qatna; the archive of Ebla mentions the toponym "Gudadanum" (or "Ga-da-nu"), which has been identified with Qatna by some scholars, such as Giovanni Pettinato and Michael Astour, but this is debated. Aside from an obscure passage in the 20th-century BC Egyptian Story of Sinuhe, where the name Qatna is not clearly mentioned, the earliest occurrence of the name comes from the Middle Bronze Age archive of Mari, where the city is mentioned as "Qatanum", an Akkadianized format (<sup>āl</sup>Qa-ta-nim<sup>ki</sup>). In Alalakh, the name "Qa-ta-na" was used, an Amorite format that was shortened into Qatna during the Late Bronze Age. The name is Semitic; it derives from the root q-ṭ-n, meaning "thin" or "narrow" in a number of Semitic languages such as Akkadian, Syriac, and Ethiopian. "Ga-da-nu" from the Eblaite archive may also derive from that root. 
The toponym "Qatna" is strictly related to waterways and lakes; this could be a reference to the artificial narrowing that created a lake from the springs located southwest of the city, since Qatna grew on the eastern shore of a now dried-up lake. ## Site The city is located in the countryside, 18 km (11 mi) north of Homs. It was founded on a limestone plateau, and its extensive remains suggest fertile surroundings with abundant water, which is not the case in modern times. Three northward flowing tributary wadis (Mydan, Zorat and Slik) of the Orontes River cross the region of Qatna, enclosing an area 26 km (16 mi) north–south and 19 km (12 mi) east–west. The city lay along the central wadi (Zorat), surrounded by at least twenty five satellite settlements, most of them along the Mydan (marking the eastern border of the region) and Slik (marking the western border of the region) wadis. The wadis are now dry most of the year, but during the rainy season their discharge is disproportional to the size of their valleys, suggesting that the region was much more humid and water was more abundant in the past. The early city, dating to the Early Bronze Age IV (2200–2100 BC), was built in a circular plan; this circular site became the upper city (acropolis) of Qatna's later phases and was surrounded by a lower rectangular city. ### Qatna's landmarks #### Palaces - Building 8. The structure is dated to the transition period between the third and second millennia BC, and was abandoned in the late Middle Bronze Age II (1800–1600 BC). Its walls, which are still preserved, are 7.5 metres (25 ft) tall and 4 metres (13 ft) wide. The function of the building is not known, but its monumental nature and location on the upper city's summit, plus the existence of a pair of royal statues in it, suggest that it might have been a royal palace, especially since it preceded the erection of the main Royal palace of Qatna. In the 1970s, a concrete water tower was built to supply the modern village of al-Mushrifah; the new structure destroyed the eastern and northern walls of the building. - The royal palace. Covering an area of 16,000 square metres (170,000 sq ft), it was the biggest palace in the Levant of its time. The palace's northeastern part consisted of two stories, as did the northwestern wing. In total, the first story contained at least eighty rooms. Compared to other palaces of the era in the region, such as the Royal Palace of Mari, Qatna's palace was gigantic, including massive halls such as hall C, formerly known as the temple of Belet-Ekallim (Ninegal), which was 1,300 square metres (14,000 sq ft) in size, and hall A, which was 820 square metres (8,800 sq ft) in size. The palace was constructed during the Middle-to-Late Bronze Age transition period, c. 1600 BC, in the northern part of the acropolis above an abandoned necropolis. - The southern palace. Located immediately south of the royal palace, it had at least twenty rooms and concrete floors. The structure is heavily damaged, making the dating of its construction difficult. - The eastern palace. Located to the east of the royal palace in the upper city, it is dated to the Middle Bronze Age II and consisted of at least one big courtyard and fifteen rooms. - The lower city palace. Located in the northern part of the lower city, it was built in the 16th century BC. It contains at least sixty rooms. #### Tombs - Tomb IV. 
This was discovered in the 1920s by Robert du Mesnil du Buisson; he dated it to 2500–2400 BC, while Claude Frédéric-Armand Schaeffer assigned it to the period between 2200 and 1900 BC. The tomb is a multi-chambered shaft burial, the only one of this kind in the city. - The Middle Bronze Age necropolis, located near the northern edge of the upper city and heavily damaged by the royal palace constructed above it. The necropolis contained three types of burials: simple graves bordered by bricks, cooking vessels, or shafts cut into the rocks. The most notable shafts are tombs I, II, III and V. - The Royal Hypogeum (tomb VI). This is located 12 metres (39 ft) beneath the royal palace, at the northern edge. The tomb consists of four chambers cut in the bedrock beneath the palace's foundations, and a corridor, 40 metres (130 ft) long, that connects it to hall A of the royal palace. Four doors divide the corridor, which then takes a turn to the east and stops abruptly; an antechamber 5 metres (16 ft) beneath the floor of the corridor follows and a wooden stair is used to descend to it, after which a door leads to the burial chambers. The hypogeum was in use for around 350 years, and bodies of both genders and different ages were interred in it; a minimum of 19–24 individuals were found in the tomb. - Tomb VII. This is located beneath the northwest wing of the royal palace. It consists of an antechamber and a kidney-shaped double chamber. The tomb contained at least 79 individuals, in striking contrast to the much bigger tomb VI, which contained far fewer remains. Peter Pfälzner suggested that tomb VII was a place for re-burial; the very long period of the Royal Hypogeum's usage meant that it needed to be cleared occasionally to make room for new interments, and the older remains were thus transferred to tomb VII. #### Other landmarks - The walls. A large rampart surrounded Qatna, reaching 18 metres (59 ft) in height and 60 metres (200 ft) to 90 metres (300 ft) in width at the base. The rampart contained many gates, and, according to a tablet from Qatna, the name of one of them was "(city) gate of the palace"; the royal palace lies east of the gate in the western rampart and might have been the palace named in the tablet. - Mishrifeh Lake. Qatna grew on the shore of a lake that dried completely toward the end of the Bronze Age, in c. 1200 BC. When the defenses were constructed, the northern and western parts of the rampart were built inside the lake, dividing it into an inner lake fed by a spring located at the northern foot of the upper city, while the larger part left outside the walls constituted a reservoir for the inhabitants. ## History ### Chalcolithic The site was first occupied during the Late Chalcolithic IV period (3300–3000 BC). This early settlement was concentrated on the central part of the upper town; its function is unknown and it ended in the late fourth millennium BC. ### Early Bronze After a hiatus of several centuries, the site was reoccupied around 2800 BC during the Early Bronze Age III. The last two centuries of the third millennium BC saw widespread disruption of urban settlements in Syria and the abandonment of many cities; however, Qatna seems to be an exception, as it continued to grow. During the Early Bronze Age IV, Qatna reached a size of 25 ha (62 acres); it included a dense residential quarter and facilities for the storage and processing of grains, especially a large multi-roomed granary similar to the one in Tell Beydar. 
The city may have been one of the urban centers of the Ib'al federation, perhaps the seat of a king or prince. The early city occupied the acropolis, and none of its remains were found in the lower city. Most of the small settlements surrounding Qatna, 1 ha (2.5 acres) to 2 ha (4.9 acres), appeared during this period; this might have been connected with the emergence of a central institution in the city. ### Kingdom of Qatna In the Middle Bronze Age, the Kingdom of Qatna was established around 2000 BC. At the beginning of the Middle Bronze Age I, the city expanded and covered an area of 110 ha (270 acres). This growth reduced the number of the small settlements as people were drawn into the expanded metropolis. It is probable that the earliest mention of "Qatna" by this name dates to the same period. According to Thomas Schneider, a city named Qedem, mentioned in a controversial passage in the Story of Sinuhe dating to the beginning of the Twelfth Dynasty of Egypt (early 20th century BC), is most probably to be identified with Qatna. Qedem in the Egyptian text is written "Qdm", and, in Egyptian, Qatna is written as "Qdn". If Schneider's interpretation is correct, then this is the first known written mention of the city. The text also mentions that the title of the ruler was Mekim (or Mekum), a royal title known from Ebla. The theory of Schneider is debated: in Sinuhe's story, the protagonist turned back to Qedem after reaching Byblos; Joachim Friedrich Quack pointed out that the Egyptian verb "ḥsi̯" used in the text was known to indicate that a certain expedition had reached its final destination and was now returning to Egypt, indicating that Qedem was south of Byblos, while Qatna is to the north of Byblos. #### Zenith The next mention of Qatna after the Story of Sinuhe comes from Mari in the 18th century BC, during the reign of Išḫi-Addu of Qatna. However, a tablet found in Tuttul, dating to the early reign of the Mariote king Yahdun-Lim in the late 19th century BC, mentions a king named Amut-piʾel, who is most probably the father of Išḫi-Addu; this would make him the first known king of Qatna. Also during the reign of Yahdun-Lim, the kingdom of Yamhad in Aleppo and its king Sumu-Epuh enter the historical record through the texts of Mari. Early in their history, Qatna and Yamhad had hostile relations; Amut-piʾel I, in alliance with Yahdun-Lim and Ḫammu-Nabiḫ (probably king of Tuttul), attacked the Yamhadite city of Tuba, which was a personal possession of Aleppo's royal family, and took considerable booty. Later, Yahdun-Lim embarked on an expedition to the Mediterranean Sea that was used for ideological purposes, as it was meant to echo Gilgamesh's deeds; the journey likely had undeclared political motives as well, when seen in the context of the alliance with Qatna. The Mariote–Qaṭanean alliance, which was probably cemented by dynastic marriage, must have provoked Yamhad, which supported rebellions in Mari to preoccupy Yahdun-Lim with his own problems. Despite the tension and battles, a full-scale war with Yamhad was avoided. Qatna was at its apex during the reign of Išḫi-Addu. Mari was conquered by Shamshi-Adad I of Assyria, who appointed his son Yasmah-Adad as its king. Išḫi-Addu was allied with Shamshi-Adad and is attested corresponding with Mari for a period of six years between c. 1783 and 1778 BC. At its height, the kingdom extended from the upper valley of the Orontes to Qadeš in the west, while Palmyra was Qatna's easternmost city. 
It was bordered by Yamhad in the north, while the south was dominated by Hazor, a Qaṭanean vassal. The many kingdoms of Amurru, which controlled the central Levantine coast between Byblos and Ugarit, bordered Qatna from the west and were counted among Išhi-Addu's vassals. Also under the rule of Qatna were various cities in the Beqaa Valley and the cities in the region of Apum, in the modern Damascus Oasis. The kingdom was sometimes threatened by nomads; a letter sent to Yasmah-Adad informs him that 2000 Suteans conducted a raid against Qatna. Relations with Yamhad worsened during Išḫi-Addu's reign and the conflict evolved into border warfare; Qatna occupied the city of Parga in the region of Hamath for a while before Sumu-Epuh retook it. In the south, Išḫi-Addu faced a general rebellion; the alliance with Assyria was cemented by the marriage of Išḫi-Addu's daughter to Yasmah-Adad in c. 1782 BC. The following year, after petitions by Qatna, Shamshi-Adad sent an army to help Išḫi-Addu deal with the rebellion. The Assyrian troops avoided engaging Yamhad and did not participate in its war with Qatna, while Išḫi-Addu took up residence in Qadeš to oversee the suppression of the rebellion, which apparently was supported by Yamhad. After four years in the service of Qatna, Shamshi-Adad ordered his troops to return; this might have been connected to a peace treaty between Assyria and Yarim-Lim I, son of Sumu-Epuh. Išḫi-Addu, who in the past had declared that "even if Shamshi-Adad would conclude peace with Sumu-epuh, I will never make peace with Sumu-epuh, as long as I live!", was delivered a heavy blow, but Mari's sources are silent on how the king dealt with the situation, and by the time they resumed mentioning Qatna in c. 1772 BC, Išḫi-Addu was dead and succeeded by his son Amut-piʾel II. #### Decline The political and military balance in the region changed dramatically during the reign of Amut-piʾel II; Shamshi-Adad I had died by about 1775 BC, and his empire disintegrated, while Yasmah-Adad was removed from his throne and replaced with Zimri-Lim. Yarim-Lim I gained the upper hand and turned his kingdom into the supreme power in the Levant; Qatna was forced to respect the borders and interests of Yamhad. In Mari, Zimri-Lim, who was Yarim-Lim's protégé, married Amut-piʾel II's sister and Yasmah-Adad's widow Dam-Ḫuraṣi, and this seemed to satisfy the king of Qatna, as his relations with Mari were never hostile. In 1772 BC, the Banu-Yamina tribes revolted against Zimri-Lim, who asked Qatna for help; Amut-piʾel II sent his troops to Dūr-Yahdun-Lim (probably modern Deir ez-Zor) to support Mari, but when he asked for Mariote military support at a later time, Zimri-Lim hesitated as Yarim-Lim I was expressly against such a dispatch. When Qatna tried to establish an alliance with Eshnunna, Mari, which was at war with Eshnunna, arrested the messengers on the pretext that Zimri-Lim feared for their safety; in reality, the king of Mari was probably acting on behalf of Yamhad to prevent Qatna from establishing such an alliance. The archive of Mari reports a plan between Zimri-Lim, the king of Carchemish and the king of Eshnunna (who made peace with Mari), to attack Qatna. Such an alliance could not have been realized without the participation of Yamhad, overlord of both Mari and Carchemish; in the end, the plan was not pursued and the tense relations between Qatna and Yamhad eased toward the last years of Yarim-Lim's reign. 
In a letter written to Zimri-Lim, Yarim-Lim I agreed to establish peace with Qatna if Amut-piʾel II were to come by himself to Aleppo, thus acknowledging the supremacy of Yamhad; there is no evidence that a meeting between the two kings ever took place. Just before his death in 1765 BC, Yarim-Lim called a meeting of his vassals, and Zimri-Lim traveled to Aleppo where he met messengers from Qatna and Hazor, indicating that Amut-piʾel II started recognizing the supremacy of Yarim-Lim, and that Hazor, Qatna's vassal, was now obeying Yamhad. Yarim-Lim's successor Hammurabi I arranged a peace with Qatna that probably did not require the Qaṭanean king to visit Aleppo personally, but indicated Qatna's acceptance of Yamhad's superiority. This apparent yielding seems a mere formality as Qatna continued its aspirations for power, as became clear in its behavior during the Elamite invasion of Mesopotamia in year ten of Zimri-Lim's reign. An Elamite messenger reached Emar and sent three of his servants to Qatna; Hammurabi I of Yamhad learned of this and sent troops to intercept them on their return. The servants were captured and questioned, revealing that Amut-piʾel II told them to tell their monarch that "The country is delivered to you, come up to me! If you come up, you will not be taken by surprise." The Qaṭanean king also sent two messengers to Elam, but they were probably captured in Babylon. The hegemony of Yamhad affected Qatna's economy; the trade route connecting Mesopotamia and Mari to Qatna through Palmyra lost its importance, while the trade routes from the Mediterranean to Mesopotamia came under the full control of Aleppo, contributing to Qatna's loss of wealth. Following the destruction of Mari by Hammurabi of Babylon around 1761 BC, information about Qatna becomes scarce; in the late 17th century BC, Yamhad invaded and defeated Qatna during the reign of Yarim-Lim III. The political and commercial importance of Qatna declined quickly during the Late Bronze Age (LB I), around 1600 BC, as a result of growing Egyptian and Mitannian influences. Numerous small states appeared in the region and detached from Qatna. #### Foreign domination It is not known when Qatna lost its independence. It became a Mitannian vassal in the 16th century BC, but the archive of Qatna proves that even in its final period during the 14th century BC, Qatna maintained a certain degree of autonomy. Early Egyptian military intrusions into the region occurred under Thutmose I (r. 1506–1493 BC). The name Qedem appears in an inscription found on a fragmented gateway from Karnak, dated to the reign of Thutmose and mentioning a military campaign in the northern Levant. The inscription suggests that the mentioned cities submitted to the king. The geographic sequence given in the inscription is Qedem ("Qdm"), Tunip ("Twnjp") and "Ḏj3 wny" (maybe Siyannu); Qatna (Qdn in Egyptian) would fit better in the geographic sequence, and Alexander Ahrens suggested that the inscription might have meant Qatna. Any oaths of loyalty to Egypt taken by Levantine rulers were forgotten after Thutmose I's death. The Egyptians returned under the leadership of Thutmose III (r. 1479–1425 BC), who reached Qatna during his eighth Asiatic campaign, c. 1446 BC. Thutmose III did not rule directly in Qatna but established vassalage ties and attended an archery contest with the Qaṭanean king. Towards the end of Thutmose III's reign, and under the influence of Mitanni, the Syrian states changed their loyalty, causing Thutmose's successor Amenhotep II (r. 
1427–1401/1397 BC) to march north in his seventh year on the throne, where he fought troops from Qatna near the city. The threat of the Hittites prompted Mitanni's king to sue for peace: Artatama I approached Amenhotep II for an alliance and long negotiations started. The talks lasted until after Amenhotep's death, when his successor Thutmose IV (r. 1401/1397–1391/1388 BC) finally sealed a treaty that divided the Levant between the two powers. Qatna and the states north of it, such as Nuhašše, fell into the sphere of Mitanni. Despite its reduced status, Qatna still controlled the Lebanon Mountains 80 km (50 mi) away in the 14th century BC. ##### Possible incorporation into Nuhašše During the reign of Adad-Nirari of Nuhašše in the 14th century BC, Qatna may have become part of his kingdom. In 1977, Astour considered Qatna a constituent part of the lands of Nuhašše, and identified a king of Qatna named Adad-Nirari with the Nuhaššite king. Astour was followed by Thomas Richter in 2002, who considered Qatna to be a secondary city in the domain of the Nuhaššite king. The tablets of Qatna mention a šakkanakku (military governor) named Lullu, and Richter considered him an official of Nuhašše. The hypothesis of Richter is debated; a number of scholars accept it, for example Pfälzner, who suggested that the Nuhaššite king may have resided in Qatna's royal palace. Richter dated the rule of the Nuhaššite king to the period preceding the Hittite king Šuppiluliuma I's first Syrian war, during which Adad-Nirari of Nuhašše opposed the Hittites, was defeated, and, according to Richter, had his kingdom split between different Hittite puppets including Idanda of Qatna. Gernot Wilhelm saw no grounds for Richter's assumption concerning the identification of the Nuhaššite monarch with the Qaṭanean king. This identification rests on the theory that Qatna belonged geographically to the region of Nuhašše, but no solid evidence supports this assumption, and the Shattiwaza treaty between the Hittites and Mitannians clearly mentioned Qatna as a different realm from Nuhašše during the first Syrian war when the Nuhaššite king ruled. If Qatna was part of the Nuhaššite kingdom, its submission to the Hittites would not have been mentioned separately in the treaty. Qatna was certainly ruled by Idanda during the first war, and the Hittite documents do not mention a change of rulers in Qatna made by Šuppiluliuma, leaving no reason to suspect that Idanda ascended the throne as a result of the war. Jacques Freu likewise rejected Richter's hypothesis. Citing different arguments, he concluded that Adad-Nirari of Nuhašše was a contemporary of Idanda, the successor of the Qaṭanean Adad-Nirari. ##### The campaigns of Šuppiluliuma I Early in his reign, the Hittite king Šuppiluliuma I (r. c. 1350–1319 BC) aimed to conquer Mitanni's lands west of the Euphrates. Šuppiluliuma waged several campaigns to achieve his goal: the first Syrian foray, the second Syrian foray, the first Syrian war and the second Syrian war. The events and chronology of the Hittites' subjugation of Qatna are debated. King Idanda was a Hittite vassal; a letter sent by the Hittite general Ḫanutti contains a demand that Idanda fortify the city. Freu believed that Idanda abandoned Mitanni and joined the Hittites as a result of Šuppiluliuma's first Syrian foray. The Mitannian king Tushratta retaliated by invading Qatna and burning the royal palace, an event dated to around 1340 BC. 
Wilhelm, on the other hand, believed that Idanda submitted to the Hittites as a result of the first Syrian war. ###### Collapse The events leading to the destruction of the royal palace did not cause the destruction of the whole city. The Shattiwaza treaty, which describes the events of the first Syrian war, mentions that Qatna was invaded and destroyed, and its people were deported during the war. However, Idanda's successor, Akizzi, was ruling in the second half of the Egyptian pharaoh Akhenaten's reign following the first Syrian war, or shortly before the second Syrian war. This discrepancy can be explained if the treaty did not mention the events in a chronological order; many scholars, such as Wilhelm, believe that the author of the document organized the text according to the principle of association, rather than following the sequence of events. Akizzi contacted Egypt and declared himself a servant to the pharaoh. An anti-Hittite coalition, probably organized by Akizzi, was established. Šuppiluliuma tried diplomatic means to solve the conflict but Akizzi rejected them. Hittite military intervention soon followed and Akizzi asked Egypt for troops, but received none. Šuppiluliuma himself came to Qatna, aided by Aziru of Amurru. The Hittite monarch took with him a statue of the sun deity, which had been given to Qatna by an ancestor of Akhenaten. This move symbolized the final capitulation of the kingdom. Akizzi survived the destruction of his city and continued his communication with the pharaoh for some time; in an Amarna letter (EA 55), the king of Qatna described to Akhenaten the actions of Šuppiluliuma and his plundering of Qatna. Hence, the final sack of Qatna occurred after the royal palace was destroyed in 1340 BC, and before the death of Akhenaten, to whom the letter was addressed, in c. 1334 BC. Trevor Bryce suggested that Akizzi might have accepted Hittite overlordship again. In any case, he was the last known king. The city lost its importance following its sacking and never regained its former status. ### Post-Hittite destruction The destruction of the royal palace constituted a break in Qatna's history; all other palaces were abandoned and the political system collapsed. A pottery workshop was built in the place of the southern palace, while the lower city palace was replaced by two adjacent courtyards surrounded by walls. Archaeological data suggest a much reduced settlement with no regional role. Following the 13th century BC, no archaeological evidence exists to prove the city was occupied; the toponym Qatna stopped appearing and the next occupation level dates to the late 10th century BC, suggesting it was uninhabited for three centuries. ### Syro-Hittite and following periods In the late 10th century and early 9th century BC, the site was reoccupied but its name during that time is unknown; three human head sculptures made of basalt were discovered in the site; they probably date to the mid-9th century BC. At this time, the region was probably under the control of Palistin, with Qatna under the rule of Hamath, which was probably part of Palistin. The basalt heads bear similarities to a statue discovered in Palistin's capital, but there is not enough information to allow a general conclusion over the borders of Palistin and its extent into Qatna. The settlement was a small one; it included large buildings that were used both as residences and manufacturing facilities. 
By the 8th century BC, the site saw a revival in settlement; the city expanded and many houses, public buildings, and storage areas were built. The newly expanded settlement was a contrast to the earlier 10th/9th century one; the existence of official buildings and the emergence of many satellite settlements surrounding Qatna suggest that the city was a local center in the kingdom of Hamath. The official buildings were violently destroyed, probably at the hands of the Assyrian king Sargon II (r. 722–705 BC), who annexed the region in 720 BC. The site continued to be inhabited during the Iron Age III, following the Assyrian destruction, but the settlement shrank considerably, being reduced to a village comprising the central part of the acropolis. It was abandoned in the mid-6th century BC. In the mid-19th century, a modern village (al-Mishrifeh) was built within the ancient site. Houses were built on top of the royal palace floors, damaging them to a certain degree, but also protecting the underlying ruins. In 1982, the Syrian Directorate-General of Antiquities and Museums resettled the inhabitants in a new village next to the ancient tell, thus making the site available for modern archaeological research. ## Society ### Population and language The kingdom of Qatna had a predominantly Semitic Amorite population; all the personal names from Qatna in the Mari archive were Amorite. The royal family was also Amorite and it remained so during the Mitannian era, which witnessed the expansion of Hurrians; by the fifteenth century BC, Qatna had a sizable Hurrian element. The Arameans were responsible for the re-occupation of the site in the first millennium BC. The Amorites in Qatna spoke their own language, but kings communicated with their counterparts using Akkadian, which was the language of writing in the city. Qatna's Akkadian became heavily influenced by Hurrian in the 15th and 14th centuries BC; Richter argued that a special Akkadian–Hurrian hybrid dialect developed in Qatna. Texts from Qatna exhibit many Hurrian elements, proving that Hurrian was prominent among scribes, but how widely it was spoken by the general public cannot be determined. ### Religion Details about the religious life in Qatna are not available due to the rarity of written evidence from the city; in general, many cults seem to have existed and mixed in Qatna, most prominently the royal ancestor cult, the cult of gods and the cult of the dead. #### The cult of gods Belet-Ekallim (Ninegal) was a prominent deity in Qatna; the inventories of gifts presented to the gods found in hall C of the palace show that she was a central element in the royal liturgy, where she was called the "lady of the palace" and "Belet Qatna", making her effectively the goddess of the city. However, no trace of a temple or shrine has been found in the building. The inventories also mention the "gods of the king"; it is debated whether this referred to deities or to royal ancestors. Jean Bottéro identified the "gods of the king" with the sun god Šamaš, whom Akizzi called the "god of my father" in his letter to Akhenaten. Gregorio del Olmo Lete considered Šamaš the god of Qatna's dynasty, but the "gods of the king" probably included other deities as well. Jean-Marie Durand considers Addu to be the god of the city based on a seal dating to Išḫi-Addu's reign describing Addu as such. 
Another indication of the deities worshiped in Qatna comes from the archive of Mari; the daughter of Išḫi-Addu was devoted to the goddess Ishtar, and Zimri-Lim once invited Amut-piʾel II to Mari to take part in rituals for that goddess, indicating that the cult of Ishtar was prominent in Qatna. #### The cult of the Betyles The texts of Mari show that the cult of stones, especially the "sikkanum" (i.e., Betyles, or sacred stones), was widespread in western Syria, and its practice in Qatna is plausible. Du Mesnil du Buisson named room F in the royal palace "Haut-Lieu" and considered it a shrine of Ašera. Research done after 1999 ruled out du Mesnil du Buisson's hypothesis and concluded that the room was a bathroom, but further research showed that the bathroom interpretation must also be wrong. Pfälzner, based on its architecture being suitable for containing sacred stones, suggested that room F was the palace shrine for the cult of Betyles. Pfälzner concludes that "an ultimate proof, however, for the function of Room F at Qaṭna cannot be deduced from this parallel. Nor is there a clue as to the dedication of the possible Betyle-sanctuary at Qaṭna". #### Royal ancestors cult Ancestors were worshiped in Qatna; the royal hypogeum provided a large amount of data concerning the cult of ancestor worship and the practices associated with it. Two kinds of burials are distinguished: a primary burial intended to transport the dead into the netherworld, and a secondary burial that was intended to transform the deceased into their ultimate form: an ancestor. The royal hypogeum provides hints at the different rituals taking place during a secondary burial; a noticeable characteristic is that the skeletons were not complete, and no skulls were found for the majority of secondary burial remains. There is no evidence that the skulls decayed in place, as that would have left behind teeth, of which very few were found; this indicates that the skulls were removed to be venerated in another location. Bones in the secondary burial were arranged without respect for anatomical order; it is plausible to assume that the distribution process was the result of symbolic rituals that indicated the changing of the deceased's role by incorporating him or her into the group of royal ancestors. Pottery vessels were deposited next to the secondary burial remains; they were fixed on top of food offerings meant as a food supply for the dead, giving evidence for the performance of Kispu (nourishing and caring for one's ancestor through a regular supply of food and drink). Hundreds of piled vessels provide evidence that the living participated and dined with their ancestors, venerating them. Pfälzner argues for a third burial process which he calls the tertiary burial; the eastern chamber of the hypogeum was used as an ossuary where human remains and animal bones left from the Kispu were mixed and piled. Pfälzner concludes that bones left in that chamber were deposited there because they had become useless in funerary rituals; thus, the chamber was their final resting place. Bones in the eastern chamber were stored with no respect for the unity of an individual, indicating that the persons buried were now part of the collective group of ancestors; this did not mean that the individuals were no longer cared for, as the many bowls in the chamber indicate the continuation of food offerings to those ancestors. According to Pfälzner, a final burial stage can be identified, which he calls the quaternary burial. 
Tomb VII, which most probably contained remains taken out of the royal hypogeum, seems to have served as storage for the remains of individuals whose Kispu cycle had come to an end; very few bowls were found in that tomb. The Kispu was important for demonstrating the legitimacy of the king, so it needed to be public and visible to a large crowd; Pfälzner suggests that hall A in the royal palace was the place for the public Kispu and that the antechamber of the royal hypogeum was dedicated to private Kispu that included only the king and the spirits of his ancestors. ### Culture Due to its location in the middle of the trade network of the ancient world, the cultural and social landscape of the city was complex, as the inhabitants had to deal with traders and envoys who brought with them different customs from distant regions. The inventories of gifts presented to deities from the royal palace indicate that Qatna used the sexagesimal numeral system. Textiles dyed with royal purple, a symbol of social status, were found in the royal hypogeum. Judging by the royal statues found in the royal hypogeum antechamber, a king of Qatna wore clothes different from those worn in Mesopotamia; his robes would have reached his ankles and the hem on his shawl would have been in the shape of a thick rope, while his beard was short and his headdress consisted of a broad band. For royal primary burials, several steps were followed: constructing the burial container, anointing the body with oil, heating the body, leading the burial procession, lining the sarcophagus floor with textiles, covering the body with another layer of textiles, and finally depositing a layer of plants and herbs. Elephants, which lived in western Syria, were esteemed in Qatna and connected to the royal family; they were apparently hunted by the royals and the king himself, as there is evidence that their bones were displayed in the palace. Elephants were thus part of the royal ideology, and hunting an elephant was a symbol of prestige that glorified the strength of the king. An international style in art did not exist in Qatna; instead, a regional hybrid style prevailed in which international motifs appear along with regional ones, yet all the pieces reveal enough features to trace them to Qatna. The volute-shaped plant is one of the most widespread international motifs; many pieces from the royal hypogeum were decorated with the motif, but Qatna had its own typical volute, where the crown is a single long lobe with dotted pendants branching out of the corners of the upper volute. The wall paintings in Qatna's royal palace attest to contact with the Aegean region; they depict typical Minoan motifs such as palm trees and dolphins. Qatna also had distinctive local craftsmanship; the wall paintings in the royal palace, though including Aegean motifs, depict elements that are not typical either in Syria or the Aegean region, such as turtles and crabs. This hybrid style of Qatna prompted Pfälzner to suggest a "craftsmanship interaction model", which is based on the assumption that Aegean artists were employed in local Syrian workshops. Local workshops modeled amber in the Syrian style; many pieces were found in the royal hypogeum, including 90 beads and a vessel in the shape of a lion's head. Ivory was connected to the royal family and the pieces discovered reflect a high level of craftsmanship that was influenced by Egyptian traditions. 
Jewelry was made to fit local tastes even when the origin of the concept was foreign; an example is the scarabs, traditional Egyptian objects that were modified in Qatna by engraving them with local motifs and encasing them in gold, a treatment atypical for Egyptian specimens. Aside from two golden beads that seem to have been imported from Egypt, no jewelry discovered was of foreign origin. Typical western Syrian architectural traditions are seen in the eastern palace, which has an asymmetrical plan and tripartite reception halls. The lower city palace also shows typical second-millennium Syrian features, being elongated and lacking the huge courtyards that were a traditional Mesopotamian feature; instead, the palace had several small courtyards spread within it. Qatna's royal palace was unique in its monumental architecture; it had a distinctive foundation and the throne room walls were 9 metres (30 ft) wide, which does not occur elsewhere in the architecture of the ancient Near East. The period following the destruction of the royal palace shows a clear break in culture, evidenced by the poor building materials and architectural techniques. ### Economy Finds in "Tomb IV" indicate that Qatna was engaged in long-distance trade from its early history. The city's location on the edges of the Syrian steppes turned it into a strategic stop for caravans traveling to the Mediterranean Sea from the east. The countryside surrounding the city provided the key for its success in the Early Bronze Age IV; those lands were capable of supporting both agriculture and pastoralism. Despite the modern scarcity of water, geoarchaeological research on the wadis of the region confirms the abundance of water during the Bronze Age. The land was rich in pastures; when drought struck Mari, Išḫi-Addu allowed its nomads to graze their flocks in Qatna. The written sources do not offer deep insight into the economy of the kingdom; it relied mainly on agriculture during the Middle Bronze Age but, by the Late Bronze Age, came to be based on trade with surrounding regions. Securing raw materials scarce near the city was an important concern for the rulers; basalt was an important building material and was probably acquired from the Salamiyah region or Al-Rastan. Calcite came from either the Syrian coast or Egypt, amber came from the Baltic region, and regions in modern Afghanistan provided carnelian and lapis lazuli. The main routes passing through Qatna were from Babylon to Byblos through Palmyra, from Ugarit to Emar, and from Anatolia to Egypt. Taxes on caravans crossing the trade routes allowed the city's royalty to grow rich; an insight into Qatna's wealth can be gained from the dowry of Išḫi-Addu's daughter, who was endowed with 10 talents of silver (288 kg) and 5 talents of textiles (worth 144 kg of silver). White horses were among Qatna's most famous exports, in addition to high-quality wines, wood from the nearby Lebanon Mountains, and goods, such as chariots, from a highly skilled craft industry. Many Egyptian imports were found in the city, including the "sphinx of Ita", which represents a daughter of the Egyptian pharaoh Amenemhat II, and a vessel with the name of Senusret I inscribed on it, plus around 50 stone vessels in the royal hypogeum. Another vessel bears the name of Queen Ahmose-Nefertari, wife of the 18th Dynasty pharaoh Ahmose I. Two units of weight and payment are prominent in Qatna: the mina and the shekel. 
The mina had different values from region to region but it seems that in Qatna the preferred value was 470 g, while the preferred value of the shekel is difficult to determine. ### Government The existence of agricultural facilities on the acropolis of the EB IV early city indicates that a central authority oversaw the production process; perhaps the city was the seat of one of the princes of Ib'al. Another piece of evidence is "Tomb IV", which contained the remains of 40 people, 300 pottery vessels, weapons and ornaments. The tomb probably belonged to the elite or the ruling family of the city. In the kingdom of Qatna, the crown prince had the city of Nazala as his domain. The palace was mainly a political and administrative institution devoid of religious functions, in contrast to the palace of Mari. In the realm of Hamath, Qatna was an administrative center probably in control of the kingdom's southern regions. During the Assyrian period, Qatna lost its administrative role and even its urban character until its abandonment. Known kings of Qatna are: ## Excavations Du Mesnil du Buisson led excavations starting in 1924, and annually from 1927 to 1929; the third-millennium BC remains provided scarce samples, and most of the data come from Tomb IV. In 1994, a Syrian mission led by Michel Al-Maqdissi conducted several surveys and surface excavations; then, in 1999, a joint Syrian–Italian–German mission was formed, headed by Al-Maqdissi, Daniele Morandi Bonacossi and Pfälzner. As the excavations expanded, the Directorate-General of Antiquities and Museums split the mission into Syrian (headed by Al-Maqdissi), Syrian–German (headed by Pfälzner) and Syrian–Italian (headed by Morandi Bonacossi) missions in 2004. Research was focused on the upper city while the lower city remained largely untouched; by 2006, only 5% of the site's total area had been excavated. The royal palace was split into two excavation areas: operation G covering the western part and operation H covering the eastern part. Operation J covers the summit of the acropolis, while the lower city palace is covered by operation K. One of the most important finds came in 2002, when the archive of King Idanda, containing 67 clay tablets, was discovered. As a result of the Syrian Civil War, excavations stopped in 2011. ## See also - Al-Rawda - Amqu - Niya - List of cities of the ancient Near East
200,500
The Adventures of Brisco County, Jr.
1,161,283,373
American television series
[ "1990s American science fiction television series", "1990s American time travel television series", "1990s Western (genre) television series", "1993 American television series debuts", "1994 American television series endings", "English-language television shows", "Fiction set in 1893", "Fox Broadcasting Company original programming", "Science fiction Westerns", "Steampunk television series", "Television series by Warner Bros. Television Studios", "Television series created by Carlton Cuse", "Television series set in the 1890s", "Television shows set in San Francisco" ]
The Adventures of Brisco County, Jr., often referred to as just Brisco or Brisco County, is an American weird western television series created by Jeffrey Boam and Carlton Cuse. It ran for 27 episodes on the Fox network starting in the 1993–94 season. Set in the American West of 1893, the series follows its title character, a Harvard-educated lawyer-turned-bounty hunter hired by a group of wealthy industrialists to track and capture outlaw John Bly and his gang. Bruce Campbell plays Brisco, who is joined by a colorful group of supporting characters, including Julius Carry as fellow bounty hunter Lord Bowler and Christian Clemenson as stick-in-the-mud lawyer Socrates Poole. While ostensibly a Western, the series routinely includes elements of the science fiction and steampunk genres. Humor is a large part of the show; the writers attempted to keep the jokes and situations "just under over-the-top". A large number of episodes involve the Orb, a powerful device from the future. John Astin plays Professor Wickwire, an inventor who assists Brisco with anachronistic technology including diving suits, motorcycles, rockets, and airships. The search for new technology and progressive ideas, what the writers of the show called "The Coming Thing", is a central theme throughout the series. Brisco was developed by Boam and Cuse at the request of Fox executive Bob Greenblatt. Impressed by the duo's work on the script for the 1989 film Indiana Jones and the Last Crusade, Greenblatt suggested they develop a series that bore the tone and style of vintage movie serials. The initial ideas and proposals from the show's writers were often more suited to film than to television and had to be scaled down. Brisco was one of the last television shows to be filmed on the Warner Bros. Western backlot. Randy Edelman composed the distinctive theme music, which has been reused by NBC during its coverage of the 1997 World Series and the Olympic Games. During its broadcast run, The Adventures of Brisco County, Jr. garnered a small but dedicated following and was well received by critics. The series earned high ratings at the beginning of its season, but later episodes failed to attract a substantial number of viewers. Fox canceled the show at the end of its first and only season. In 2006, Warner Home Video released a DVD set containing all 27 episodes. The series has been remembered fondly by critics, who praise its humor and unique blend of genres. ## Plot ### Background The Adventures of Brisco County, Jr., is set in a fictional American Old West of 1893. Robber barons control the financial and industrial interests of the West from the boardrooms of San Francisco's Westerfield Club. The famous U.S. Marshal Brisco County, Sr. (R. Lee Ermey) has apprehended a gang of outlaws and its leader, the notorious John Bly (Billy Drago). While transporting them to stand trial, County is murdered and the gang escapes. Meanwhile, in a nearby mine, a group of shackled Chinese workers unearths "The Orb", a large golden globe studded with rods. A worker draws one of the rods out of the Orb, then touches several of his co-workers with it. Each worker touched with the rod is imbued with superhuman strength, which the workers use to break the iron chains binding them, thus freeing themselves. The murder of Brisco County, Sr., and the discovery of the Orb set into motion the major plots of the series. ### Synopsis Members of the Westerfield Club hire Brisco County, Jr. (Bruce Campbell), the son of the slain U.S. 
Marshal, to track and re-capture Bly and his gang. The Westerfield Club's timid lawyer, Socrates Poole (Christian Clemenson) relays instructions and financial support to Brisco. Another bounty hunter, Lord Bowler (Julius Carry), who is known for his expert tracking skills, also hopes to capture Bly. Bitter over the elder County's fame, Bowler treats Brisco as a rival. The two men often find themselves reluctantly joining forces to achieve a common goal. Later in the series, Brisco and Bowler work together as partners and friends. In the pilot episode, Brisco tracks John Bly's second-in-command, Big Smith (M. C. Gainey). In a battle on a train car, Brisco knocks Smith off the train and into a river; he is assumed dead until he reappears later in the series. Brisco, Bowler and Socrates hunt the rest of Bly's gang in subsequent episodes. All ten of the gang members are captured or killed, and Brisco's pursuit of Bly, who is seeking the Orb for its supernatural power, frequently puts him into contact with the object. Each encounter with the Orb reveals a fantastic effect on people who use it. In the episode "The Orb Scholar", Bly shoots Brisco and leaves him to die. Professor Ogden Coles (Brandon Maggart), a scientist who studies the Orb, heals Brisco with the device. In the episode "Bye Bly", it is revealed that Bly is a fugitive from the distant future who has traveled to 1893 to steal the Orb. Bly plans to use the Orb to travel back to his time and rule the world. Instead, Brisco uses the Orb to travel through time to save Bowler's life. Brisco eventually kills Bly by stabbing him with a rod from the Orb, causing Bly to disintegrate into a pile of ashes. Series creator and executive producer Carlton Cuse said that the Orb represents faith and that depending on the intentions of those who use it, the object rewards or punishes them accordingly. The pilot episode introduces several characters who make recurring appearances throughout the series. Big Smith's moll Dixie Cousins (Kelly Rutherford) is a saloon singer and con artist who has a brief romantic encounter with Brisco. In later episodes, Dixie becomes Brisco's primary love interest. In his first mission, Brisco also meets Professor Albert Wickwire (John Astin), an eccentric scientist who returns to help many times during the series. Wickwire's ideas and inventions play into Brisco's interest in technology and the future, something Brisco calls "The Coming Thing". Pete Hutter (John Pyper-Ferguson) is a hapless mercenary working for Bly. He has a compulsive attachment to his "piece" (pistol), and given any opportunity will pontificate about topics such as art and philosophy. Pete appears throughout the series as a comic foil to trade barbs with the heroes. He appears to be killed three times during the series, but returns each time with a comic excuse for why he didn't die. The second half of the series includes many episodes with Whip Morgan (Jeff Phillips), a young cardsharp whose attempts to assist Brisco and Bowler often end up causing trouble. ### Signature show elements The show features classic Western motifs such as train robberies and gunfighter showdowns, in combination with atypical elements. Much of the series is devoted to the science fiction plot surrounding the Orb, and it is this mix of the Western genre with fantasy that has helped Brisco maintain its cult status. In almost every episode, the characters discover or are confronted by what is, for the time, fantastic technology. 
In the pilot episode, Brisco and Professor Wickwire modify a rocket to run on train tracks. In the episode "Brisco For the Defense", Brisco uses a slide projector to show a trial jury fingerprint evidence. Professor Wickwire returns many times in the series to assist with technology, including tinkering with motorcycles and rescuing the heroes with a helium-filled zeppelin. Campbell told Starlog magazine, "It's kind of Jules Verne meets The Wild Wild West." The presence of futuristic technology in a Victorian era Western places the series in the steampunk genre; it is one of the few such shows to have aired on prime-time television. At least one-third of the show's episodes contain steampunk or Weird West elements. Though "technology-out-of-time" frequently intrudes into the plots of Brisco, the fantastic machines or methods rarely appear again. Some of these out-of-time technologies were archaic renderings of those prevalent in the 20th century, and two film researchers, Cynthia Miller and A. Bowdoin Van Riper, suggest that followers of the show may be puzzled that such inventions, so useful in their own lives, are not exploited further. According to Cuse, the show was purposely set in 1893, exactly 100 years before the series premiered in 1993. Brisco is meant to be aware of the imminent changes in society and technology and actively looks for them. The writers of the show, and also the character of Brisco, refer to this concept as "The Coming Thing". Elaborating on this theme, Campbell said, "Basically this show is about the turn of the century, when the Old West met the Industrial Era. Cowboys still chew tobacco and ride the range and states are still territories, but over the horizon is the onset of electricity, the first autos and telephones. Brisco is in the middle of a transition from the past to the future." The collision of cowboy characters with puzzling technology and other anachronisms generates humor throughout the series. The writers made it a point to insert scenes mirroring the pop culture of the 20th century, from the apparent invention of the term "UFO" in the pilot episode to the appearance of a sheriff who looks and acts like Elvis Presley. Speaking about the humor of the show, Campbell said, "I would say 30 percent of each episode is being played for laughs. But it's not a winking at the camera, Airplane-type of humor. We're funny like Indiana Jones is funny; the laughs come primarily from the wide variety of ridiculous, colorful characters that come in and out of this series." ## Cast Bruce Campbell went through five auditions for the role of Brisco before he was hired. In his first audition with the casting director, Campbell spontaneously did a standing flip. The stunt impressed the casting director so much that during each subsequent audition, Campbell was asked to do the flip again. In his final audition, Campbell assured the network executives that if hired for the role, he would work hard to make the show a success. In an interview, Campbell said, "It's every actor's dream to play a cowboy, so when this opportunity came up, I mean, yeah, where do I sign?" He added that working on Brisco provided him with acting opportunities he would not have otherwise had. Cuse said getting Campbell "was just one of those collisions between an actor and a script that was just perfect ... I can't imagine Brisco having ever existed without him." 
Writing in Auxiliary Magazine, Luke Copping claimed that Brisco was Campbell's "last great" role before the actor fell into "a period of self-parody and overt camp that he did not redeem himself from until joining the cast of Burn Notice". Christian Clemenson went to Harvard with Cuse but still went through the normal audition channels to get the part of Poole. Clemenson was apprehensive about pursuing one of the lead roles in a television show because of the long time commitments involved. He later said, "The similarities between this show and The Wild Wild West, and my character to that show's Artemus Gordon, was an important hook for me. It was one of my favorite shows growing up, and as soon as I saw that Brisco County was based on the same kind of material and attitude as that show, I called my agent and said, 'I'll do anything I have to do to get this.'" Clemenson applied his experiences at Ivy League schools to play the uptight Poole. Praising Clemenson's work on Brisco, Cuse said, "You can't give him anything he's not capable of doing. He adds the voice of intelligence and caution to balance our cast". Julius Carry saw great potential in the character of Bowler. He had researched black cowboys for a project in college and used that knowledge in his portrayal of Bowler. Carry said that Bowler was similar to the real-life black deputy U.S. Marshal Bass Reeves, in that "Reeves always got his man and would often pull off incredible tricks to bring people in." Carry knew Clemenson from the time they worked together on the Western television pilot Independence. He had no knowledge of Campbell, but approved of the choice for the leading man after watching Army of Darkness. He later told Starlog, "I saw that he would be very good with the physical stuff and that he could deliver a one-liner. I knew the situation would be good." The original direction for Bowler was to have him constantly oppose Brisco, but as the series progressed the writers saw the good-natured chemistry between the actors and decided to make Brisco and Bowler a team. Bowler's race was never an issue in the show. According to Cary Darling, a television critic, this attitude is different from serious Westerns and "may hew more to the truth than one might think". He said historians have noted that black cowboys were common and that conflicts with white cowboys were rare. Kelly Rutherford's portrayal of Dixie Cousins, with her emphasis on innuendo and subtext, has been described as "less Miss Kitty (Gunsmoke) than Mae West". Rutherford said that playing Dixie allowed her to fulfill her "fantasy of being Madeline Kahn in Blazing Saddles". When John Astin was cast he was best known for his portrayal of Gomez Addams in The Addams Family. Cuse said that he and the writers enjoyed paying homage to the television star of their childhoods: "For us, it was like, 'Oh wow, we get to meet John Astin in the guise of employing him on this show!'" ## Production ### Conception and development In 1989, Indiana Jones and the Last Crusade was released in the cinemas. It was a commercial success, earning its producers US\$115 million from domestic screenings. The action-packed story, unfolding in a manner reminiscent of Saturday matinee movie serials, about the adventures of an archaeologist was written by Jeffrey Boam, with development and story help from Carlton Cuse; this film was their third collaboration, after Lethal Weapon 2 and 3. 
According to Cuse, Bob Greenblatt, an executive at Fox Broadcasting Company, engaged him and Boam to develop a television series "because of Indiana Jones and the Last Crusade". Greenblatt wanted a show that had a style similar to the Indiana Jones movies. Cuse started watching old serials and noticed that many fell into two genres: Westerns and science fiction. This gave Boam and Cuse the idea to combine the genres. They decided to emulate the serials' style; for example, each act within an episode begins with a title, usually a pun, and ends with a cliffhanger. Boam and Cuse did not intend for the series to be historically accurate. Their aim was to create an action-adventure with a modern feel. Cuse told USA Today, "We're not approaching this show as if we were doing a period piece. We see it as a contemporary program. Our characters just happen to be living in the West with 1990s sensibilities. The Indiana Jones movies were period pieces too, but you never thought of them that way." Anachronisms and pop culture references were intentionally inserted into the series. The show was intended to be family friendly, so violence was minimized in favor of having Brisco think his way out of dangerous situations. Boam said, "In the two-hour pilot Brisco doesn't even once have to shoot his gun. Our violence is cartoonish. There is no pain and suffering." Bruce Campbell was prominently featured in advertisements, billboards, and even a trailer shown in movie theaters. When the series was being promoted in the summer of 1993, Fox Entertainment chief Sandy Grushow said that if Campbell "isn't the next big television star, I'll eat my desk". ### Writing Cuse served as show runner and head writer. Boam, who served as executive producer, also contributed scripts for the show. The writing staff included John Wirth, Brad Kern, Tom Chehak, David Simkins, and John McNamara. They followed Cuse's informal instruction that the tone of the show remain "just under over-the-top": the series would be humorous but not too campy. Every member of the staff participated in breaking down and analyzing the stories they conceived. Wirth commented, "there was a very high percentage of ideas that worked in the room and got translated to paper that worked when you put them on film. That doesn't always happen." Cuse described long hours writing the show, including several overnight sessions. Each episode of Brisco was filmed in seven days, so the turn-around time for scripts was one week. McNamara said that he became a "student of TV history" while writing for Brisco, reviewing old episodes of Maverick for inspiration on using humor in the Western genre. He said the writing team felt the television audience was ready for a "trans-genre form", because much of the audience grew up with Lethal Weapon, Star Trek, and The Wild Wild West. Researchers Lynnette Porter and Barry Porter acknowledge the writers' familiarity with Mark Twain's novel Pudd'nhead Wilson. Porter and Porter describe the novel as an "ancestor text", because the characters of Brisco and Bly both refer to it, and say that this type of literary device is used again by Cuse in Lost. One of the challenges the writers faced was scaling down their ideas to make them feasible for production. Cuse said that he let such ideas flourish because of his relative inexperience with writing for television series. An example given by the writers was Boam's idea for a full-sized "pirate ship on wheels". 
The writers quickly realized they needed to scale the idea down to something the production designers could create. They settled on putting a full pirate crew on a stagecoach with cannons. Kern said it was better to "shoot past the mark, and come back to it, rather than start below it". He elaborated on this, saying, "if you envision the 40-foot galleon and go back from that, you'll always end up with more than if you start out with a pirate on a horse." As the series progressed during its broadcast season, the writers received frequent notes and directives from Fox network executives calling for increases and decreases in the science-fiction, comedy and traditional Western elements. Cuse said, "I think we did a particularly good job of maintaining continuity with all the schizophrenic notes we were getting from the network." However, midway through the first season, the writers made a thematic shift from science-fiction to more comedy and adventure. Cuse said, "We were biting off more than we could chew... we were trying to do a comedic action adventure Western, with tongue-in-cheek humor, genuine drama, plus science fiction. All these things added too many elements to serve simultaneously." By the final third of the series, the writers had wrapped up the science-fiction plot with the Orb and focused more on traditional Western motifs. ### Production design The Adventures of Brisco County, Jr., was filmed primarily on the Warner Bros. soundstages. Town and street scenes were staged on the Western backlot, known as Laramie Street. It was one of the last Western shows to use the backlot. Cuse said that logistics were a problem because so many of the Hollywood Western sets and towns had been torn down by the 1990s. Outdoor scenes were shot on the Warner Bros. ranch in Valencia, California; Bronson Canyon in Los Angeles; and the Valuze Ranch in Santa Clarita, California. Some of the locomotive scenes from Brisco were filmed on location at Railtown 1897 State Historic Park in Jamestown, California. A painting used in the show as a backdrop to create an illusion of greater depth is exhibited at the park.

> I was proud of the pilot, when Brisco and Bowler were tied up on the railroad track, and Comet had to come walking up and pull the rope loose and untie them and get them loose... Working a TV series for a trainer of horses is very tough because you don't have any time to prepare for the next show... It was a tough show, but I was very proud of the horses because they worked well, they never held the company up, and everything seemed to work fine.
>
> – Gordon Spencer, head wrangler on The Adventures of Brisco County, Jr.

Comet was portrayed by five horses, each with a different talent. The main horse was Copper, chosen by veteran wrangler Gordon Spencer because it was calm and gentle. Campbell nicknamed the horse "Leadbelly" due to its ability to remain calm during action or dialogue scenes. Another horse, Boss, was used for long-range shots, chase scenes, and elaborate stunts, such as leaps through windows. Ace was called in when the crew had to shoot scenes in which the horse reared. Near the end of the season, a horse named Comet was trained, its name chosen so that the horse would get used to hearing it on set. The "true show horse" was Strip, which was adept at doing tricks, such as lip movements, head nods, and hoof stamping. 
According to Spencer, all those stunts "as well as tying the knots and opening the door and going into rooms and all of that" were done by Strip. For these scenes, Spencer would stand off-camera and use a stick to signal Strip. Campbell had a special pocket sewn into his costume and filled it with grain to reward Strip after every take. Strip and Copper appeared in more scenes than any of the other horses. Strip, who had white markings on his nose and legs, provided the look chosen for Brisco's steed; Copper and the other horses were touched up with "clown white" greasepaint to match Strip's markings. Foley artist Casey Crabtree provided sounds for horse hoof movements, work that was praised by sound effects industry expert David Yewdall. He said of Crabtree's work on Brisco, "Her horses sounded so natural and real – their hooves, the sound of their hooves on the texture of the ground, the sound of saddle movement, bridle jingles – it was as good as anything I would want for a feature film, and this was episodic television." The make-up on many of Brisco's episodes was done by veteran artist Mel Berns Jr. Two props of the Orb were made. One of the prop Orbs was used for stunts and had retractable rods. A second version was manufactured from cast bronze, making it heavy: "You really didn't want to have to handle it," Campbell said. The rocket car seen in the pilot episode was built by special effects coordinator Kam Cooney and was a working vehicle with an internal combustion engine and throttle controls. Some items used in the show had been repurposed from older productions, and some would later be used in other shows. For example, the steam locomotive seen in the pilot episode was the same as the one used in Back to the Future Part III. Two of Carry's prop guns – rifles whose barrels were sawed off in the fashion of the Mare's Leg – were later reused in the science fiction television series Firefly. ### Music Stephen Graziano and Velton Ray Bunch composed original music for the series. Composed by Randy Edelman, the distinctive theme music gained recognition beyond the show's following; in the mid-1990s, NBC Sports had commissioned Edelman to compose theme music for its NFL coverage. At the time, NBC had often used excerpts from film scores as theme music for its sports broadcasts, and had used a portion of Edelman's Gettysburg score for the Breeders' Cup. A portfolio Edelman sent NBC included the Brisco theme, and by 1996 it was being used during coverage of the Olympics; the theme would be retired after the 2016 Games. NBC used it again as the theme for its coverage of the 1997 World Series. Edelman said, "It was original, and it seemed to have the right spirit. It's got a very flowing melody, it's triumphant, and it has a certain warmth. And it has at the end of it, what all television things like this have, a 'button', an ending flourish that works really well if they need to chop it down into a 15-second thing." Cary Darling said that the "booming" theme song was "part Magnificent Seven, part Aaron Copland and as grand and wide as 'Big Sky Country' ". ## Broadcast history The Adventures of Brisco County, Jr., premiered on the Fox network at 8:00 pm on Friday, August 27, 1993, with a two-hour pilot movie. To bolster viewer interest in the show, Fox rebroadcast the pilot two days later at 7:00 pm. Both airings of the pilot returned strong ratings. Brisco's ratings for the pilot and first episode were high, particularly with the demographic of adults aged 18–49. 
The series was aired in Canada, including on Global Toronto (channel 29). The pilot movie was followed by 26 episodes, each 45 minutes long and airing at 8:00 pm on Fridays. Fox Entertainment chief Sandy Grushow openly touted Brisco and its star Bruce Campbell. The network fully expected the show to be its breakout hit of the year, a distinction which eventually went to Brisco's follow-up, The X-Files. Hoping that more viewers would follow Brisco as it progressed, Fox approved producing an entire season of the show, despite low post-pilot ratings. Subsequent episodes failed to attract more viewers and the show was canceled at the end of its first and only season. After the series ended, Fox rebroadcast the show on Sunday nights at 8:00 pm during July and August 1994. The show was later broadcast for a short time in syndication, airing on the U.S. cable channel TNT. ### Episodes ### Cancellation As the season progressed, the ratings declined, greatly hurting the show's chances of being renewed. Writer John McNamara partially blamed Brisco's low ratings on its Friday 8:00 pm time slot. He said not many people watch television at that time, so "fighting for numbers" then was "like being stuck on Normandy beach". Grushow acknowledged the high quality of the show and the vocal support from its small fan base. "Obviously the viewers are very passionate about the show... and when you read some of the things they have to say, it gives you real pause", Grushow told USA Today in 1994. By May of that year, Grushow said renewing Brisco was a 50–50 call. At the end of its season, Brisco was one of the lowest-rated shows of the year, and Fox confirmed its cancellation in June. Brisco's writers were planning for another season before the show's cancellation. They had not penned the ending of the first season as a finale for the series and had broad ideas for the second season, which would have featured Brisco settling in as the sheriff of a small town. In his autobiography, Campbell mused, "To explain why a TV show is canceled is almost impossible. Ironically Brisco, with its off-kilter humor, wouldn't have been developed on any other network, yet the appeal of 'Westerns' was still rural – not the side Fox's urban bread was buttered on." Writer and supervising producer Brad Kern reflected on the show's cancellation, saying, "Ten years later, everybody you talk to... they all love the show. I think that was the biggest disappointment about the show not coming back. We knew we were doing something special." Told of the show's success in the TV Guide "Save Our Shows" poll, Sandy Grushow said, "Obviously I'm happy and not entirely surprised", but added, "You can't dismiss a season's worth of ratings." Kim Manners, director on nearly a third of the Brisco episodes, said working on the series gave him an opportunity to grow creatively. He told writer Joe Nazzaro, "It really woke me up as a director, almost spiritually", and that directing for Brisco was a large contributing factor to his success as a regular director on The X-Files. Manners said, "When they didn't give it a second year, I was devastated", adding that he wished Cuse had made a feature film based on Brisco. Considering the show's short life, Cuse later commented, "If the show could have survived into a second season, I think it could have ended up running for actually a long time. Some shows just sort of fall through the cracks in the right way and they kind of stay on the air long enough to aggregate an audience. 
I think if circumstances had been different, Brisco could have had a much longer life." Cuse also said the Friday night time slot hurt Brisco's chances of building an audience, saying, "We were on at 8 p.m. on Friday night, which is sort of a death slot – I mean people do still go bowling – few shows have succeeded in that slot." ## Home media In 2005, Kirthana Ramisetti of Entertainment Weekly posted that The Adventures of Brisco County, Jr., deserved to be released on DVD. Gord Lacey, the creator of the website TVShowsonDVD.com, told the New York Daily News that Brisco was among the five most requested shows on the site. Lacey spent several years lobbying industry contacts to get Brisco released on DVD. This led to correspondence with Cuse, who also wanted to get a DVD set produced. On July 18, 2006, Warner Home Video released The Adventures of Brisco County, Jr.: The Complete Series on DVD in Region 1, an 8-disc DVD set that contains all 27 episodes of the series. The release includes commentary tracks from Campbell and Cuse; an interactive menu of Brisco's signature references narrated by Campbell; The History of Brisco County, Jr., documentary; a feature called A Reading from the Book of Bruce; and another gallery hosted by Campbell focusing on the gadgets from the show. ## Reception ### Pilot episode In July 1993, Brisco's two-hour pilot was screened for television critics in Los Angeles. Initial critical reaction to the pilot was positive and focused on the humor and the science fiction plot points. USA Today's Matt Roush enjoyed the campy humor and the cast of the show, saying it worked on many levels and would "please all but the family curmudgeon". Calling Brisco "one of the best shows of the fall season", Jennifer Stevenson of the St. Petersburg Times praised the show's "intelligent, satirical asides". Kay Gardella wrote in the New York Daily News that the pilot set itself "apart from others of genre" with its humorous script and sight gags. The Los Angeles Times called Brisco "gratifying nonsense", and praised Campbell and the supporting cast for supplying humor without "going over the top". Some critics, such as Walter Goodman of The New York Times and David Hiltbrand of People, found the supporting characters "weakly cast" and not as strong as Campbell in the lead. Other reviewers praised the overall look of the show, such as Todd Everett of Variety, who approved of the "strong comic-book visual style" and the pilot's high production values. Writing in The Washington Post, Tom Shales said that the pilot's production was "more movielike than serieslike". The pilot's science-fiction plot elements were appreciated by New York magazine, which wrote favorably about the "millenarianism" of the show, including Brisco's use of a rocket to travel on railroad tracks. While Rod Dreher of the Washington Times liked the "nifty" Orb subplot, some critics responded negatively to the Orb. The Washington Post's Shales called the Orb "hokey supernatural bunk". Other reviewers complained generally about the broad mix of genres and number of subplots in the pilot. While TV Guide's Jeff Jarvis roundly praised the quality of the pilot and called Brisco his favorite Fox show of 1993, he criticized the pilot for being "padded with outlaws and mysterious orbs". Diane Werts of Newsday similarly said that Brisco "just about hits the bulls eye" with its "sharp wit" and "thrill a minute" action, although she noted that the pilot was over-packed with characters and subplots. 
Writing in The New York Times, Goodman said, "The writers try everything, including some business involving raiders of a lost orb, without much of a payoff." Entertainment Weekly's Ken Tucker enjoyed the "nervy attempt to do something different with the TV Western" in the pilot and said that "Brisco County is less a satire of the Western's cliches than a revitalization of them." Writing in the Toronto Star, Greg Quill said that the pilot introduced Brisco as "a western in the loosest use of the term". Quill noted that the pilot includes "every cliche in the western movie arsenal", but that "everything, from characters to plot turns, is skewed away from the norm", and that the pilot episode rose above the level of western spoof to become an "outrageously confident tribute to... the best of the genre". ### Broadcast run During the broadcast run of The Adventures of Brisco County, Jr., TV Guide featured a positive review of the show in its Couch Critic column and wrote, "It's as funny as it is exciting, which is not an easy combo to pull off... it's fresh and funny and different, and that's why we like it." The magazine twice listed Brisco as a family-friendly TV program: "Back when some of us grew up, Westerns were synonymous with great family entertainment, but – let's be honest – some of them were dull as dust. Not this one. Brisco is a Western with a sense of humor, filled with impish action for kids and adults." The Wall Street Journal reviewed a host of Westerns from 1992 and 1993 and said that Brisco was "the most sheer fun of the bunch", calling it "a period piece with slick production values and a mix of drama and humor, fast pace and high camp". In an article on the 1993 television season, the Toronto Star's Greg Quill wrote that Brisco was a program that represented "American TV craft at the top of its form". In contrast, Elvis Mitchell of Spin magazine gave Brisco a scathing review, calling the show's premise a "tedious... rickety gimmick". Mitchell acknowledged the show's "quick reflexes", but said the humor was "uncomfortable" with a "cynical quickness". He added, "Brisco County relieves us of the burden of laughing. It spends too much time looking at itself in the mirror, admiring its own adorable dimpled half-smile." Viewership figures for Brisco fell as its season progressed and in 1994, it was listed in TV Guide's annual "Save Our Shows" article. Readers were requested to write in and vote to save one of the four listed shows – one from each television network – that were in danger of being cancelled. The Adventures of Brisco County, Jr., won with 34.7 percent of the 72,000 votes cast. Cuse said the vote "reaffirms for me a feeling I've had – namely that the Nielsens aren't accurately reflecting people's interest in this show", adding that, given Fox's then relatively small share of the market, it was notable that the show got more votes than any of the programs from NBC, CBS, and ABC. Writing in USA Today, Matt Roush encouraged readers to watch the low-rated show, saying that families should watch it rather than "that interchangeable T.G.I.F. tripe". He said, "Brisco is mighty lavish but even more mightily loony, happily saddled with broad sight gags and tortured puns." Bruce Fretts of Entertainment Weekly speculated that mainstream success eluded the show because of its mixing of genres. He said, "Brisco refuses to behave like a normal Western, mixing in sci-fi, slapstick, and... kung fu." 
The Chicago Tribune's Scott Williams praised Brisco for its "strong supporting cast" and "superb physical comedy and crisp dialogue". He said the show should have been a hit, but that the Friday night time slot hampered its ratings. ### Level of violence Brisco was criticized early on for the violence it portrayed; a scene in the pilot in which four villains accidentally kill each other in a crossfire was meant to be comical but troubled critics instead. Cuse insisted that the show was still appropriate for children, saying, "I think we're very conscious of violence and I think we've made an effort to avoid violence in the pilot and in the future episodes". Halfway through the season, U.S. Senator Byron Dorgan singled out Brisco as the most violent show on television based on a study at Minnesota's Concordia University, in which students watched 132 hours of network and cable programming during the week of September 28 to October 4, 1993. The students tallied each act of violence, and found that Brisco had 117 violent acts per hour. The study deemed Brisco more violent than the film Beverly Hills Cop, which was also viewed for the study. Cuse called the criticism "patently ridiculous", noting that only one episode of the show was viewed, in which a boxing match takes place. Each punch and jab was counted as an act of violence. Cuse spoke out against legislation to curb television violence, saying that politicians were "chasing a false objective". He said it was the job of a show's producer to control the moral content of a television program and the parents' duty to monitor what their children watch. The Los Angeles Times printed a story about Senator Dorgan's efforts to elicit a response from the Federal Communications Commission (FCC) with the title "Fox Tops Tally of Violence on Major TV Networks Media: Study of a week of prime-time shows also lists 'Brisco County' as bloodiest series. Senator wants FCC to issue report card, name sponsors". Cuse responded by writing a letter to the editor. In the letter, entitled "'Brisco County' Is a Family-Oriented Series", Cuse objected to the newspaper story title labeling Brisco as the "bloodiest series". He said that Senator Dorgan's press release did not mention blood and that the show's violence should be viewed in context. Cuse added that the show had been listed as family friendly in other publications, and that he read every viewer letter sent regarding the show: "The overwhelming majority praise 'Brisco County' for being a show that the entire family can watch together. After 15 original airings, I have not received one single letter criticizing the show on the grounds of violence or violent content." When the US Senate discussed forcing broadcast and cable networks to regulate violent programming, Cuse said that self-regulation within the industry was a positive move. As he operated on his "own internal moral principles", the measures would not affect his week-to-week work. ### Post-cancellation Writing in People magazine in 1995, Craig Tomashoff said the cancellation of Brisco was "one of the tragedies going into [the 1994–1995] TV season". Tomashoff suggested that the show influenced UPN's Legend, another Western series with comedy and science fiction elements. Reflecting on the show in the Orange County Register in 1996, critic Cary Darling lamented Brisco's cancellation, saying that the show "stood way out from the rest of the broadcast pack". 
Darling reviewed the show, describing it as "a witty, multiracial Western that tempered its fisticuffs with fantasy, its innocence with irony, and its romantic vision of the Old West with an abiding New World faith in the future's infinite possibilities". Writing in Entertainment Weekly, Ken Tucker called the show a "one season wonder" that was "ahead of its time". When the series was released on DVD, critics remembered it fondly. Video Librarian called Brisco "criminally short-lived" and "wildly entertaining". Ken Tucker of Entertainment Weekly gave the series an "A−", calling the show "smart-alecky and witty, suspenseful and absurd". IGN DVD called the DVD set "impressive" and said that the series was "a satisfying show that hits its mark". Auxiliary Magazine called Brisco "one of the greatest sci-fi/Western epics in television history" and compared it favorably to the better-known sci-fi/Western shows Firefly and The Wild Wild West. In its 2006 gift guide, the Christian Science Monitor gave Brisco a positive review, saying, "Folks, there are so few comic sci-fi/Westerns, they should be celebrated, not canceled prematurely." In a 2018 interview with the Houston Chronicle, Bruce Campbell voiced an interest in reviving the series, saying, "I would actually be willing to do a Brisco revisited".
42,773,463
German–Yugoslav Partisan negotiations
1,144,605,377
March 1943 ceasefire and prisoner exchange talks
[ "1943 in international relations", "Germany in World War II", "Germany–Yugoslavia relations", "March 1943 events", "Yugoslav Partisans", "Yugoslavia in World War II" ]
The German–Yugoslav Partisan negotiations (Serbo-Croatian: Martovski pregovori, lit. 'March negotiations') were held between German commanders in the Independent State of Croatia and the Supreme Headquarters of the Yugoslav Partisans in March 1943 during World War II. The negotiations – focused on obtaining a ceasefire and establishing a prisoner exchange – were conducted during the Axis Case White offensive. They were used by the Partisans to delay the Axis forces while the Partisans crossed the Neretva River, and to allow the Partisans to focus on attacking their Chetnik rivals led by Draža Mihailović. The negotiations were accompanied by an informal ceasefire that lasted about six weeks before being called off on orders from Adolf Hitler. The short-term advantage gained by the Partisans through the negotiations was lost when the Axis Case Black offensive was launched in mid-May 1943. Prisoner exchanges, which had been occurring between the Germans and Partisans for some months prior, re-commenced in late 1943 and continued until the end of the war. Details of the negotiations were little known by historians until the 1970s, despite being mentioned by several authors from 1949 on. The key Partisan negotiator, Milovan Đilas, was first named in Walter Roberts' Tito, Mihailović, and the Allies, 1941–1945 in 1973. Roberts' book was met with protests from the Yugoslav government of Josip Broz Tito. The objections centred on claims that Roberts was effectively equating the German–Partisan negotiations with the collaboration agreements concluded by various Chetnik leaders with the Italians and Germans during the war. Roberts denied this, but added that the book did not accept the mythology of the Partisans as a "liberation movement" or the Chetniks as "traitorous collaborators". Subsequently, accounts of the negotiations were published by Yugoslav historians and the main Yugoslav protagonists. ## Background In August 1942, during the Partisan Long March west through the Independent State of Croatia (Croatian: Nezavisna Država Hrvatska, NDH), Josip Broz Tito's Yugoslav Partisans captured a group of eight Germans from the civil and military engineering group Organisation Todt near Livno. The leader of the captured group was a mining engineer, Hans Ott, who was also an officer of the Abwehr, the Wehrmacht's intelligence organisation. The captured group had been identifying new sources of metal and timber for the Germans, but Ott had also been tasked by the Abwehr with making contact with the Partisans. Following their capture, Ott told his captors that he had an important message to deliver to Partisan headquarters, and after he had been taken there he suggested to the Partisans that his group be exchanged for Partisans held by the Germans in jails in Zagreb. On that basis, Ott was sent to Zagreb on parole, where he met with the German Plenipotentiary General in Croatia, General der Infanterie (Lieutenant General) Edmund Glaise-Horstenau. He advised Glaise-Horstenau that Tito was willing to exchange the eight Germans for ten Partisans who were being held by the Germans, Italians and NDH authorities. Glaise-Horstenau contacted the commander of the Italian 2nd Army, Generale designato d'Armata (acting General) Mario Roatta, who had most of the identified Partisan prisoners in his custody. 
On 14 August, the German ambassador to the NDH, SA-Obergruppenführer (Lieutenant General) Siegfried Kasche, sent a telegram to the Reich Foreign Ministry advising of the proposed exchange and asked the Ministry to intercede with the Italians. In his book Tito, Mihailović, and the Allies, 1941–1945, published in 1973, the former US diplomat Walter Roberts argued that the Abwehr considered some sort of modus vivendi with the Partisans might be possible, and were thinking of more than prisoner exchanges when they gave Ott the task of making contact with the Partisans. The number of Germans in Partisan custody had been increasing, and this made some sort of prisoner swap agreement more likely. These agreements were initially led by Marijan Stilinović on behalf of the Partisan Supreme Headquarters. On 5 September, a prisoner swap was completed in an area between Duvno and Livno, where 38 Partisans and family members were exchanged for one senior German officer who had been captured during the Battle of Livno in December 1942. Continuing negotiations between the Germans and Partisan headquarters resulted in a further prisoner exchange on 17 November 1942. The second of these was negotiated by Stilinović and Vladimir Velebit, also a member of the Partisan Supreme Headquarters, and Ott was involved on the German side. On the day of the second prisoner exchange, the Partisans delivered a letter addressed to Glaise-Horstenau which apparently explained that the Partisans were "an independent armed force with military discipline and not an agglomeration of bands", and "proposed mutual application of the rules of international law, especially in regard to prisoners and wounded, a regular exchange of prisoners, and a sort of armistice between the two sides". Glaise-Horstenau, Kasche and others wanted to continue exchanging prisoners as a means of obtaining intelligence, and also wanted a modus vivendi with the Partisans to allow the Germans to exploit the mineral resources of the NDH without disruption. In particular, they wanted to minimise disruption in the NDH south of the Sava River and on the Zagreb–Belgrade railway line. Adolf Hitler and Reich Foreign Minister Joachim von Ribbentrop were opposed to a modus vivendi, as they were afraid it would give the Partisans the status of a regular belligerent. As a result of Hitler's opposition, this Partisan proposal was not answered. ## March negotiations From 20 January 1943, the Partisans had been hard-pressed by the Axis Case White offensive. Throughout that offensive, Partisan Supreme Headquarters engaged the Germans in negotiations to gain time to cross the Neretva River. In late February or early March 1943, the Partisans captured a German officer and about 25 soldiers, who joined about 100 Croatian Home Guards, and 15 Italian officers and 600 soldiers already being held as prisoners of war by the Partisans. Due to their desperate situation at this stage of Case White, and their need to delay the Axis in order to cross the Neretva river before the Germans struck, they decided to use the recently captured German officer to initiate negotiations. The German historians Ladislaus Hory and Martin Broszat concluded that at this critical period, Tito was also concerned that by the end of the war the attrition to his Partisan forces would be such that Mihailović's Chetniks would be more powerful. They suggest that Tito may have been willing to agree to a truce with the Germans in order to destroy the Chetniks. 
The negotiations commenced on 11 March 1943 in Gornji Vakuf. According to the historian Jozo Tomasevich, the three Partisans tasked with the negotiations show the importance that the Partisans placed on the outcome. They were: Koča Popović, Spanish Civil War veteran and commander of the 1st Proletarian Division; Milovan Đilas, a member of the Partisan Supreme Headquarters and member of the Politburo of the Communist Party of Yugoslavia (using the alias of Miloš Marković); and Velebit (using the alias of Dr. Vladimir Petrović). The German negotiators were led by the commander of the 717th Infantry Division Generalleutnant (Major General) Benignus Dippold, one of his staff officers and a Hitler Youth representative. In their written statement, the Partisans: - identified their prisoners and indicated who they wanted in exchange, emphasising that they wanted to complete the exchange as soon as possible; - said that if the Germans accepted the Partisan proposal, especially in regard to the wounded and captured, the Partisans would reciprocate; - stated that Partisan Supreme Headquarters believed that, given the circumstances, there was no reason for the Germans to attack the Partisans, and it would be in the interests of both if hostilities stopped and areas of responsibilities were agreed; - stated that they considered the Chetniks their main enemies; - proposed that an armistice should apply during the negotiations; and - required a signature from their higher headquarters on any final agreement. Popović returned to report to Tito, and the Wehrmacht Commander South-East Generaloberst (Senior General) Alexander Löhr approved an informal ceasefire while the talks continued. On 17 March, Kasche reported on the negotiations to the Reich Foreign Ministry, requesting approval to continue discussions, and asking for instructions. The following is an extract from Kasche's telegram: > Under circumstances possibility exists that Tito will demonstratively turn his back on Moscow and London who left him in the lurch. The wishes of the Partisans are: Fight against the Chetniks in the Sandžak, thereafter return to their villages and pacification in Croatian and Serbian areas; return of camp-followers to their villages after they are disarmed; no executions of leading Partisans on our part... It is my opinion that this possibility should be pursued since secession from the enemy of this fighting force highly regarded in world opinion would be very important. In fact, the Tito Partisans are, in their masses, not Communists and in general have not committed extraordinary excesses in their battles and in the treatment of prisoners and the population. I refer to previous written reports and also to my conversation with State Secretary von Weizsäcker. Request instructions. In talks with Casertano [Italian Minister in Zagreb] and Lorković [Croatian Foreign Minister] I found that the above development would be treated positively. According to Roberts, it is clear that the next phase of negotiations was intended to go beyond prisoner exchanges, as the prisoner of war negotiator Stilinović was not involved. Đilas and Velebit were passed through the German lines to Sarajevo and were then flown to Zagreb on 25 March in a military aircraft. These negotiations were with German representatives supervised by Ott, apparently on all the points discussed at Gornji Vakuf, and the Partisans made it clear to the Germans that their proposals did not amount to an offer of surrender. 
Velebit met personally with Glaise-Horstenau, as the Austrian had known Velebit's father, a Yugoslav general. After this first visit to Zagreb, Velebit visited Partisan commanders in Slavonia and eastern Bosnia passing on orders for the suspension of attacks on the Germans and their rail communications, and the release of prisoners. Kasche had not received a reply to his telegram of 17 March, so he sent a further telegram to von Ribbentrop on 26 March. In it he advised that two Partisan representatives had arrived in Zagreb for negotiations, and named them using their aliases. He pointed out that the Partisan interest in an armistice had increased, and emphasised that he considered this a significant development. By this time, Đilas and Velebit had returned to Zagreb, where they reiterated that the Partisans wanted recognition as a regular belligerent, and emphasised the futility of continued fighting. They effectively asked to be left alone to fight the Chetniks. According to Pavlowitch, it is not clear which side posed the question of what the Partisans would do if the British were to land in Yugoslavia without Partisan authorisation. Đilas and Velebit said they would fight them as well as the Germans. They stated that their propaganda had been slanted towards the Soviet Union because they did not want to communicate with London. Their determination to fight the British if they landed was because they believed that the British would try to thwart their objective of seizing power in Yugoslavia. The Partisans also believed that the British were clandestine supporters of Chetnik collaboration. Đilas and Velebit further stated that the Chetniks would not fight the British because such a landing was exactly what they were waiting for. Von Ribbentrop responded on 29 March, prohibiting all further contact with the Partisans and inquiring about what evidence Kasche had gathered to support his optimistic conclusions. When told of the talks with the Partisans, Hitler apparently responded, "One does not negotiate with rebels—rebels must be shot". On 31 March, Kasche responded with a further telegram, saying that there had been no direct contact with Tito, and contradicted his earlier telegram by stating that the contacts had been strictly about prisoner exchanges. Kasche stated that Tito had abided by his promises thus far, and: > I think the Partisan question is misjudged by us. Our fight therefore has been practically without success anywhere. It should be based more on political and less on military means. Complete victory over the Partisans is unattainable militarily or through police measures. Military measures can destroy clearly defined areas of revolt, security measures can discover communications and serve to finish off Partisans and their helpers. The extent of success depends on troops and time available. If both are scarce the possibility of political solutions should not be rejected out of hand. Kasche further stated that it would be useful from a military perspective if the Partisans were allowed to fight the Chetniks without German interference, and counselled against trying to fight the Partisans and the Chetniks at the same time. On 30 March, Đilas had returned to Partisan headquarters with 12 more Partisans that had been held in the Ustaše-run Jasenovac concentration camp. Velebit remained in Zagreb to complete a further task: he successfully arranged the release of a detained Slovenian communist, Herta Haas, who was Tito's wife and the mother of his two-year-old son, Aleksandar. 
## Reaction and aftermath Mihailović was the first to receive reports of contact between the Germans and Partisans, and passed them on to his British Special Operations Executive liaison officer, Colonel Bill Bailey. When Bailey's report arrived in London on 22 March, it was not taken seriously. Italian military intelligence also became aware of the talks. Tito himself mentioned the prisoner exchanges to the Comintern in Moscow, but when the Comintern realised that more was being discussed and demanded an explanation, he was taken aback. He responded that he was not getting any external support, and needed to look after the interests of captured Partisans and refugees. German–Partisan prisoner exchanges re-commenced in late 1943, but became the responsibility of the Partisan Chief Headquarters for Croatia rather than Partisan Supreme Headquarters. Initially these were organised by Stilinović, then by Dr. Josip Brnčić, before Boris Bakrač took over the role. Between March 1944 and May 1945, Bakrač attended about 40 meetings with German representatives, 25 of which were in Zagreb under agreements for safe conduct. On the German side, Ott continued to play a leading role. These negotiations resulted in the exchange of between 600 and 800 Partisans in total. ## Historiography The negotiations were first mentioned publicly in 1949 when Stephen Clissold published his Whirlwind: An Account of Marshal Tito's Rise to Power. This was closely followed by Wilhelm Höttl's Die Geheime Front, Organisation, Personen und Aktionen des deutschen Geheimdienstes (The Secret Front, the Organisation, People and Activities of the German Secret Service) in 1950. There was another mention in a book published in German in 1956, Generalmajor Rudolf Kiszling's Die Kroaten. Der Schicksalsweg eines Südslawenvolkes (The Croats: The Fateful Path of a South Slav People). Ilija Jukić obtained evidence from German Foreign Ministry sources, which he included in his 1965 book Pogledi na prošlost, sadašnjost i budućnost hrvatskog naroda (Views on the Past, Present and Future of the Croatian Nation), published in London. In 1967, the Yugoslav historian Mišo Leković was officially commissioned to produce a full report on the talks. In 1969, Ivan Avakumović published his Mihailović prema nemačkim dokumentima (Mihailović according to German documents), which used captured German military documents. In 1973, Roberts published Tito, Mihailović, and the Allies, 1941–1945, which included information about the German–Partisan negotiations of March 1943. The publication of the book disturbed the Yugoslav government, which lodged a complaint with the US Department of State. The thrust of the Yugoslav complaint was that the book equated the Partisans with the Chetniks. Roberts denied this, stating that his book did not equate the two or accept the mythology of the Partisans as a "liberation movement" or of the Chetniks as "traitorous collaborators". The book also identified Đilas as the main negotiator. In 1977, Đilas confirmed his involvement in his book Wartime, but stated that he would not have disclosed the details of the negotiations if they had not already been made known through Roberts' book. In 1978, Tito admitted that the negotiations occurred, but characterised their purpose as "solely to obtain German recognition of belligerent status for the Partisans". In 1985, after Tito's death, Leković was able to publish the results of the investigation he had begun in 1967 in Martovski pregovori 1943 (The March Negotiations 1943). 
In 1989, Popović gave his version of events in Aleksandar Nenadović's Razgovori s Kočom (Conversations with Koča), followed by Velebit in Mira Šuvar's Vladimir Velebit: svjedok historije (Vladimir Velebit: Witness to History) in 2001, and in his own Tajne i zamke Drugog svjetskog rata (Secrets and Traps of the Second World War) the following year.
4,399
Beaver
1,171,134,376
Genus of semiaquatic rodents that build dams and lodges
[ "Articles containing video clips", "Beavers", "Extant Miocene first appearances", "Holarctic fauna", "Semiaquatic mammals", "Taxa named by Carl Linnaeus" ]
Beavers (genus Castor) are large, semiaquatic rodents of the Northern Hemisphere. There are two existing species: the North American beaver (Castor canadensis) and the Eurasian beaver (C. fiber). Beavers are the second-largest living rodents, after capybaras, weighing up to 50 kg (110 lb). They have stout bodies with large heads, long chisel-like incisors, brown or gray fur, hand-like front feet, webbed back feet, and tails that are flat and scaly. The two species differ in skull and tail shape and fur color. Beavers can be found in a number of freshwater habitats, such as rivers, streams, lakes and ponds. They are herbivorous, consuming tree bark, aquatic plants, grasses and sedges. Beavers build dams and lodges using tree branches, vegetation, rocks and mud; they chew down trees for building material. Dams restrict water flow, and lodges serve as shelters. Their infrastructure creates wetlands used by many other species, and because of their effect on other organisms in the ecosystem, beavers are considered a keystone species. Adult males and females live in monogamous pairs with their offspring. After their first year, the young help their parents repair dams and lodges; older siblings may also help raise newly-born offspring. Beavers hold territories and mark them using scent mounds made of mud, debris, and castoreum—a liquid substance excreted through the beaver's urethra-based castor sacs. Beavers can also recognize their kin by their anal gland secretions and are more likely to tolerate them as neighbors. Historically, beavers have been hunted for their fur, meat, and castoreum. Castoreum has been used in medicine, perfume, and food flavoring; beaver pelts have been a major driver of the fur trade. Before protections began in the 19th and early 20th centuries, overhunting had nearly exterminated both species. Their populations have since rebounded, and they are listed as species of least concern by the IUCN Red List of mammals. In human culture, the beaver symbolizes industriousness, especially in connection with construction; it is the national animal of Canada. ## Etymology The English word beaver comes from the Old English word beofor or befor and is connected to the German word biber and the Dutch word bever. The ultimate origin of the word is an Indo-European root for "brown". The genus name Castor has its origin in the Greek kastor and translates as "beaver". The name beaver is the source of several place names in Europe, including Beverley, Bièvres, Biberbach, Biebrich, Bibra, Bibern, Bibrka, Bobr, Bjurbäcker, Bjurfors, Bober, Bóbrka and Bjurlund. ## Taxonomy There are two extant species: the North American beaver (Castor canadensis) and the Eurasian beaver (C. fiber). The Eurasian beaver is slightly longer and has a more lengthened skull, triangular nasal cavities (as opposed to the square ones of the North American species), a lighter fur color, and a narrower tail. Carl Linnaeus coined the genus Castor in 1758; he also coined the specific (species) epithet fiber. German zoologist Heinrich Kuhl coined C. canadensis in 1820. However, the two were not confirmed to be separate species until the 1970s, when chromosomal evidence became available. (The Eurasian beaver has 48 chromosomes, while the North American beaver has 40.) Prior to that, many considered them the same species. The difference in chromosome numbers prevents them from interbreeding. Twenty-five subspecies have been classified for C. canadensis, and nine have been classified for C. fiber. 
### Evolution Beavers belong to the rodent suborder Castorimorpha, along with Heteromyidae (kangaroo rats and kangaroo mice) and the gophers. Modern beavers are the only extant members of the family Castoridae. They originated in North America in the late Eocene and colonized Eurasia via the Bering Land Bridge in the early Oligocene, coinciding with the Grande Coupure, a time of significant changes in animal species around 33 million years ago (mya). The more basal castorids had several unique features: more complex occlusion between cheek teeth, parallel rows of upper teeth, premolars that were only slightly smaller than molars, the presence of a third set of premolars (P3), a hole in the stapes of the inner ear, a smooth palatine bone (with the palatine opening closer to the rear end of the bone), and a longer snout. More derived castorids have less complex occlusion, upper tooth rows that create a V-shape towards the back, larger second premolars compared to molars, absence of a third premolar set and stapes hole, a more grooved palatine (with the opening shifted towards the front), and a reduced incisive foramen. Members of the subfamily Palaeocastorinae appeared in late-Oligocene North America. This group consisted primarily of smaller animals with relatively large front legs, a flattened skull, and a reduced tail—all features of a fossorial (burrowing) lifestyle. In the early Miocene (about 24 mya), castorids evolved a semiaquatic lifestyle. Members of the subfamily Castoroidinae are considered to be a sister group to modern beavers, and included giants like Castoroides of North America and Trogontherium of Eurasia. Castoroides is estimated to have had a length of 1.9–2.2 m (6.2–7.2 ft) and a weight of 90–125 kg (198–276 lb). Fossils of one genus in Castoroidinae, Dipoides, have been found near piles of chewed wood, though Dipoides appears to have been an inferior woodcutter compared to Castor. Researchers suggest that modern beavers and Castoroidinae shared a bark-eating common ancestor. Dam and lodge-building likely developed from bark-eating, and allowed beavers to survive in the harsh winters of the subarctic. There is no conclusive evidence for this behavior occurring in non-Castor species. The genus Castor likely originated in Eurasia. The earliest fossil remains appear to be C. neglectus, found in Germany and dated 12–10 mya. Mitochondrial DNA studies place the common ancestor of the two living species at around 8 mya. The ancestors of the North American beaver would have crossed the Bering Land Bridge around 7.5 mya. Castor may have competed with members of Castoroidinae, which led to niche differentiation. The fossil species C. praefiber was likely an ancestor of the Eurasian beaver. C. californicus from the Early Pleistocene of North America was similar to but larger than the extant North American beaver. ## Characteristics Beavers are the second-largest living rodents, after capybaras. They have a head–body length of 80–120 cm (31–47 in), with a 25–50 cm (9.8–19.7 in) tail, a shoulder height of 30–60 cm (12–24 in), and generally weigh 11–30 kg (24–66 lb), but can be as heavy as 50 kg (110 lb). Males and females are almost identical externally. Their bodies are streamlined like marine mammals and their robust build allows them to pull heavy loads. A beaver coat has 12,000–23,000 hairs/cm<sup>2</sup> (77,000–148,000 hairs/in<sup>2</sup>) and functions to keep the animal warm, to help it float in water, and to protect it against predators. 
Guard hairs are 5–6 cm (2.0–2.4 in) long and typically reddish brown, but can range from yellowish brown to nearly black. The underfur is 2–3 cm (0.79–1.18 in) long and dark gray. Beavers molt every summer. Beavers have large skulls with powerful chewing muscles. They have four chisel-shaped incisors that continue to grow throughout their lives. The incisors are covered in a thick enamel that is colored orange or reddish-brown by iron compounds. The lower incisors have roots that are almost as long as the entire lower jaw. Beavers have one premolar and three molars on all four sides of the jaws, adding up to 20 teeth. The molars have meandering ridges for grinding woody material. The eyes, ears and nostrils are arranged so that they can remain above water while the rest of the body is submerged. The nostrils and ears have valves that close underwater, while nictitating membranes cover the eyes. To protect the larynx and trachea from water flow, the epiglottis is contained within the nasal cavity instead of the throat. In addition, the back of the tongue can rise and create a waterproof seal. A beaver's lips can close behind the incisors, preventing water from entering their mouths as they cut and bite onto things while submerged. The beaver's front feet are dexterous, allowing them to grasp and manipulate objects and food, as well as dig. The hind feet are larger and have webbing between the toes, and the second innermost toe has a "double nail" used for grooming. Beavers can swim at 8 km/h (5.0 mph); only their webbed hind feet are used to swim, while the front feet fold under the chest. On the surface, the hind limbs thrust one after the other; while underwater, they move at the same time. Beavers are awkward on land but can move quickly when they feel threatened. They can carry objects while walking on their hind legs. The beaver's distinctive tail has a conical, muscular, hairy base; the remaining two-thirds of the appendage is flat and scaly. The tail has multiple functions: it provides support for the animal when it is upright (such as when chewing down a tree), acts as a rudder when it is swimming, and stores fat for winter. It also has a countercurrent blood vessel system which allows the animal to lose heat in warm temperatures and retain heat in cold temperatures. The beaver's sex organs are inside the body, and the male's penis has a cartilaginous baculum. They have only one opening, a cloaca, which is used for reproduction, scent-marking, defecation, and urination. The cloaca evolved secondarily, as most mammals have lost this feature, and may reduce the area vulnerable to infection in dirty water. The beaver's intestine is six times longer than its body, and the caecum is double the volume of its stomach. Microorganisms in the caecum allow them to process around 30 percent of the cellulose they eat. A beaver defecates in the water, leaving behind balls of sawdust. Female beavers have four mammary glands; these produce milk with 19 percent fat, a higher fat content than other rodents. Beavers have two pairs of glands: castor sacs, which are part of the urethra, and anal glands. The castor sacs secrete castoreum, a liquid substance used mainly for marking territory. Anal glands produce an oily substance which the beaver uses as a waterproof ointment for its coat. The substance plays a role in individual and family recognition. Anal secretions are darker in females than males among Eurasian beavers, while the reverse is true for the North American species. 
Compared to many other rodents, a beaver's brain has a hypothalamus that is much smaller than the cerebrum; this indicates a relatively advanced brain with higher intelligence. The cerebellum is large, allowing the animal to move within a three-dimensional space (such as underwater) in a manner similar to tree-climbing squirrels. The neocortex is devoted mainly to touch and hearing. Touch is more advanced in the lips and hands than in the whiskers and tail. Vision in the beaver is relatively poor; the beaver eye cannot see as well underwater as an otter's. Beavers have a good sense of smell, which they use for detecting land predators and for inspecting scent marks, food, and other individuals. Beavers can hold their breath for as long as 15 minutes but typically remain underwater for no more than five or six minutes. Dives typically last less than 30 seconds and are usually no more than 1 m (3 ft 3 in) deep. When diving, their heart rate decreases to 60 beats per minute, half its normal pace, and blood flow is directed more towards the brain. A beaver's body also has a high tolerance for carbon dioxide. When surfacing, the animal can replace 75 percent of the air in its lungs in one breath, compared to 15 percent for a human. ## Distribution and status The IUCN Red List of mammals lists both beaver species as least concern. The North American beaver is widespread throughout most of the United States and Canada and can be found in northern Mexico. The species was introduced to Finland in 1937 (and then spread to northwestern Russia) and to Tierra del Fuego, Patagonia, in 1946. The introduced population of North American beavers in Finland has been moving closer to the habitat of the Eurasian beaver. Historically, the North American beaver was trapped and nearly extirpated because its fur was highly sought after. Protections have allowed the beaver population on the continent to rebound to an estimated 6–12 million by the late 20th century, still far lower than the estimated 60–400 million North American beavers present before the fur trade. The introduced population in Tierra del Fuego is estimated at 35,000–50,000 individuals. The Eurasian beaver's range historically included much of Eurasia, but was decimated by hunting by the early 20th century. In Europe, beavers were reduced to fragmented populations, with their combined numbers estimated at 1,200 individuals across the Rhône in France, the Elbe in Germany, southern Norway, the Neman river and Dnieper Basin in Belarus, and the Voronezh river in Russia. The beaver has since recolonized parts of its former range, aided by conservation policies and reintroductions. Beaver populations now range across western, central, and eastern Europe, western Russia, and the Scandinavian Peninsula. Beginning in 2009, beavers have been successfully reintroduced to parts of Great Britain. The total Eurasian beaver population in Europe has been estimated at over one million. Small native populations are also present in Mongolia and northwestern China; their numbers were estimated at 150 and 700, respectively. Under New Zealand's Hazardous Substances and New Organisms Act 1996, beavers are classed as a "prohibited new organism", preventing them from being introduced into the country. ## Ecology Beavers live in freshwater ecosystems such as rivers, streams, lakes and ponds. 
Water is the most important part of beaver habitat; they swim and dive in it, and it provides them with a refuge from land predators, restricts access to their homes, and allows them to move building materials more easily. Beavers prefer slower-moving streams, typically with a gradient (steepness) of one percent, though they have been recorded using streams with gradients as high as 15 percent. Beavers are found in wider streams more often than in narrower ones. They also prefer areas with no regular flooding and may abandon a location for years after a significant flood. Beavers typically select flat landscapes with diverse vegetation close to the water. North American beavers prefer trees within 60 m (200 ft) of the water, but will roam several hundred meters to find more. Beavers have also been recorded in mountainous areas. Dispersing beavers will use certain habitats temporarily before finding their ideal home. These include small streams, temporary swamps, ditches, and backyards. These sites lack important resources, so the animals do not stay there permanently. Beavers have increasingly settled at or near human-made environments, including agricultural areas, suburbs, golf courses, and shopping malls. Beavers are generalist herbivores. During the spring and summer, they mainly feed on herbaceous plant material such as leaves, roots, herbs, ferns, grasses, sedges, water lilies, water shields, rushes, and cattails. During the fall and winter, they eat more bark and cambium of woody plants; tree and shrub species consumed include aspen, birch, oak, dogwood, willow, and alder. There is some disagreement about why beavers select specific woody plants; some research has shown that beavers more frequently select species that are more easily digested, while other research suggests beavers principally forage based on stem size. Beavers may cache their food for the winter, piling wood in the deepest part of their pond where it cannot be reached by other browsers. This cache is known as a "raft"; when the top becomes frozen, it creates a "cap". The beaver accesses the raft by swimming under the ice. Many populations of Eurasian beaver do not make rafts, but forage on land during winter. Beavers usually live up to 10 years. Felids, canids, and bears may prey upon them. Beavers are protected from predators when in their lodges, and prefer to stay near water. Parasites of the beaver include the bacterium Francisella tularensis, which causes tularemia; the protozoan Giardia duodenalis, which causes giardiasis (beaver fever); and the beaver beetle and mites of the genus Schizocarpus. Beavers have also been recorded to be infected with the rabies virus. ### Infrastructure Beavers need trees and shrubs to use as building material for dams, which restrict flowing water to create a pond for them to live in, and for lodges, which act as shelters and refuges from predators and the elements. Without such material, beavers dig burrows into a bank to live in. Dam construction begins in late summer or early fall, and beavers repair their dams whenever needed. Beavers can cut down trees up to 15 cm (5.9 in) wide in less than 50 minutes. Thicker trees, at 25 cm (9.8 in) wide or more, may not fall for hours. When chewing down a tree, beavers switch between biting with the left and right side of the mouth. Tree branches are then cut and carried to their destination with the powerful jaw and neck muscles. Other building materials, like mud and rocks, are held by the forelimbs and tucked between the chin and chest. 
Beavers start building dams when they hear running water, and the sound of a leak in a dam triggers them to repair it. To build a dam, beavers stack up relatively long and thick logs between banks and in opposite directions. Heavy rocks keep them stable, and grass is packed between them. Beavers continue to pile on more material until the dam slopes in a direction facing upstream. Dams can range in height from 20 cm (7.9 in) to 3 m (9.8 ft) and can stretch from 0.3 m (1 ft 0 in) to several hundred meters long. Beaver dams are more effective in trapping and slowly leaking water than man-made concrete dams. Lake-dwelling beavers do not need to build dams. Beavers make two types of lodges: bank lodges and open-water lodges. Bank lodges are burrows dug along the shore and covered in sticks. The more complex freestanding, open-water lodges are built over a platform of piled-up sticks. The lodge is mostly sealed with mud, except for a hole at the top that acts as an air vent. Both types are accessed by underwater entrances. The above-water space inside the lodge is known as the "living chamber", and a "dining area" may exist close to the water entrance. Families routinely clean out old plant material and bring in new material. North American beavers build more open-water lodges than Eurasian beavers. Beaver lodges built by new settlers are typically small and sloppy. More experienced families can build structures with a height of 2 m (6 ft 7 in) and an above-water diameter of 6 m (20 ft). A lodge sturdy enough to withstand the coming winter can be finished in just two nights. Both lodge types can be present at a beaver site. During the summer, beavers tend to use bank lodges to keep cool. They use open-water lodges during the winter. The air vent provides ventilation, and newly-added carbon dioxide can be cleared in an hour. The lodge remains consistent in oxygen and carbon dioxide levels from season to season. Beavers in some areas will dig canals connected to their ponds. The canals fill with groundwater and give beavers easier access to and transport of resources, as well as a way to escape predators. These canals can stretch up to 1 m (3 ft 3 in) wide, 0.5 m (1 ft 8 in) deep, and over 0.5 km (0.31 mi) long. It has been hypothesized that beavers' canals are not only transportation routes but an extension of their "central place" around the lodge and/or food cache. As they drag wood across the land, beavers leave behind trails or "slides", which they reuse when moving new material. ### Environmental effects The beaver works as an ecosystem engineer and keystone species, as its activities can have a great impact on the landscape and biodiversity of an area. Aside from humans, few other extant animals appear to do more to shape their environment. When building dams, beavers alter the paths of streams and rivers, allowing for the creation of extensive wetland habitats. In one study, beavers were associated with large increases in open-water areas. When beavers returned to an area, 160 percent more open water was available during droughts than in previous years, when they were absent. Beaver dams also lead to higher water tables, both in mineral soil environments and in wetlands such as peatlands. In peatlands particularly, their dams stabilize the constantly changing water levels, leading to greater carbon storage. Beaver ponds, and the wetlands that succeed them, remove sediments and pollutants from waterways, and can stop the loss of important soils. 
These ponds can increase the productivity of freshwater ecosystems by accumulating nitrogen in sediments. Beaver activity can affect the temperature of the water; in northern latitudes, ice thaws earlier in the warmer beaver-dammed waters. Beavers may contribute to climate change. In Arctic areas, the floods they create can cause permafrost to thaw, releasing methane into the atmosphere. As wetlands are formed and riparian habitats are enlarged, aquatic plants colonize the newly-available watery habitat. One study in the Adirondacks found that beaver engineering led to an increase of more than 33 percent in herbaceous plant diversity along the water's edge. Another study in semiarid eastern Oregon found that the width of riparian vegetation on stream banks increased several-fold as beaver dams watered previously dry terraces adjacent to the stream. Riparian ecosystems in arid areas appear to sustain more plant life when beaver dams are present. Beaver ponds act as a refuge for riverbank plants during wildfires, and provide them with enough moisture to resist such fires. Introduced beavers in Tierra del Fuego have been responsible for destroying the indigenous forest. Unlike trees in North America, many trees in South America cannot grow back after being cut down. Beaver activity impacts communities of aquatic invertebrates. Damming typically leads to an increase in slow- or still-water species, like dragonflies, oligochaetes, snails, and mussels. This is to the detriment of rapid-water species like black flies, stoneflies, and net-spinning caddisflies. Beaver flooding creates more dead trees, providing more habitat for terrestrial invertebrates like Drosophila flies and bark beetles, which live and breed in dead wood. The presence of beavers can increase wild salmon and trout populations, and the average size of these fishes. These species use beaver habitats for spawning, overwintering, feeding, and as havens from changes in water flow. The positive effects of beaver dams on fish appear to outweigh the negative effects, such as the blocking of migration. Beaver ponds have been shown to be beneficial to frog populations by protecting areas for larvae to mature in warm water. The stable waters of beaver ponds also provide ideal habitat for freshwater turtles. Beavers help waterfowl by creating increased areas of water. The widening of the riparian zone associated with beaver dams has been shown to increase the abundance and diversity of birds favoring the water's edge, an impact that may be especially important in semi-arid climates. Fish-eating birds use beaver ponds for foraging, and in some areas, certain species appear more frequently at sites where beavers were active than at sites with no beaver activity. In a study of Wyoming streams and rivers, watercourses with beavers had 75 times as many ducks as those without. As trees are drowned by rising beaver impoundments, they become an ideal habitat for woodpeckers, which carve cavities that may later be used by other bird species. Beaver-caused ice thawing in northern latitudes allows Canada geese to nest earlier. Other semi-aquatic mammals, such as water voles, muskrats, minks, and otters, will shelter in beaver lodges. Beaver modifications to streams in Poland create habitats favorable to bat species that forage at the water surface and "prefer moderate vegetation clutter". Large herbivores, such as some deer species, benefit from beaver activity as they can access vegetation from fallen trees and ponds. 
## Behavior Beavers are mainly nocturnal and crepuscular, and spend the daytime in their shelters. In northern latitudes, beaver activity is decoupled from the 24-hour cycle during the winter, and may last as long as 29 hours. They do not hibernate during winter, and spend much of their time in their lodges. ### Family life The core of beaver social organization is the family, which is composed of an adult male and an adult female in a monogamous pair and their offspring. Beaver families can have as many as ten members; groups about this size require multiple lodges. Mutual grooming and play fighting maintain bonds between family members, and aggression between them is uncommon. Adult beavers mate with their partners, though partner replacement appears to be common. A beaver that loses its partner will wait for another one to come by. Estrus cycles begin in late December and peak in mid-January. Females may have two to four estrus cycles per season, each lasting 12–24 hours. The pair typically mate in the water and to a lesser extent in the lodge, for half a minute to three minutes. Up to four young, or kits, are born in spring and summer, after a three or four-month gestation. Newborn beavers are precocial with a full fur coat, and can open their eyes within days of birth. Their mother is the primary caretaker, while their father maintains the territory. Older siblings from a previous litter also play a role. After they are born, the kits spend their first one to two months in the lodge. Kits suckle for as long as three months, but can eat solid food within their second week and rely on their parents and older siblings to bring it to them. Eventually, beaver kits explore outside the lodge and forage on their own, but may follow an older relative and hold onto their backs. After their first year, young beavers help their families with construction. Beavers sexually mature around 1.5–3 years. They become independent at two years old, but remain with their parents for an extra year or more during times of food shortage, high population density, or drought. ### Territories and spacing Beavers typically disperse from their parental colonies during the spring or when the winter snow melts. They often travel less than 5 km (3.1 mi), but long-distance dispersals are not uncommon when previous colonizers have already exploited local resources. Beavers are able to travel greater distances when free-flowing water is available. Individuals may meet their mates during the dispersal stage, and the pair travel together. It may take them weeks or months to reach their final destination; longer distances may require several years. Beavers establish and defend territories along the banks of their ponds, which may be 1–7 km (0.62–4.35 mi) in length. Beavers mark their territories by constructing scent mounds made of mud and vegetation, scented with castoreum. Those with many territorial neighbors create more scent mounds. Scent marking increases in spring, during the dispersal of yearlings, to deter interlopers. Beavers are generally intolerant of intruders and fights may result in deep bites to the sides, rump, and tail. They exhibit a behavior known as the "dear enemy effect"; a territory-holder will investigate and become familiar with the scents of its neighbors and react more aggressively to the scents of strangers passing by. Beavers are also more tolerant of individuals that are their kin. 
They recognize them by using their keen sense of smell to detect differences in the composition of anal gland secretions. Anal gland secretion profiles are more similar among relatives than unrelated individuals. ### Communication Beavers within a family greet each other with whines. Kits will attract the attention of adults with mews, squeaks, and cries. Defensive beavers produce a hissing growl and gnash their teeth. Tail slaps, which involve an animal hitting the water surface with its tail, serve as alarm signals warning other beavers of a potential threat. An adult's tail slap is more successful in alerting others, who will escape into the lodge or deeper water. Juveniles have not yet learned the proper use of a tail slap, and hence are normally ignored. Eurasian beavers have been recorded using a territorial "stick display", which involves individuals holding up a stick and bouncing in shallow water. ## Interactions with humans Beavers sometimes come into conflict with humans over land use; individual beavers may be labeled as "nuisance beavers". Beavers can damage crops, timber stocks, roads, ditches, gardens, and pastures via gnawing, eating, digging, and flooding. They occasionally attack humans and domestic pets, particularly when infected with rabies, in defense of their territory, or when they feel threatened. Some of these attacks have been fatal, including at least one human death. Beavers can spread giardiasis ('beaver fever') by infecting surface waters, though outbreaks are more commonly caused by human activity. Flow devices, like beaver pipes, are used to manage beaver flooding, while fencing and hardware cloth protect trees and shrubs from beaver damage. If necessary, hand tools, heavy equipment, or explosives are used to remove dams. Hunting, trapping, and relocation may be permitted as forms of population control and for removal of individuals. The governments of Argentina and Chile have authorized the trapping of invasive beavers in hopes of eliminating them. The ecological importance of beavers has led to cities like Seattle designing their parks and green spaces to accommodate the animals. The Martinez beavers became famous in the mid-2000s for their role in improving the ecosystem of Alhambra Creek in Martinez, California. Zoos have displayed beavers since at least the 19th century, though not commonly. In captivity, beavers have been used for entertainment, fur harvesting, and for reintroduction into the wild. Captive beavers require access to water, substrate for digging, and artificial shelters. Archibald Stansfeld "Grey Owl" Belaney pioneered beaver conservation in the early 20th century. Belaney wrote several books, and was first to professionally film beavers in their environment. In 1931, he moved to a log cabin in Prince Albert National Park, where he was the "caretaker of park animals" and raised a beaver pair and their four offspring. ### Commercial use Beavers have been hunted, trapped, and exploited for their fur, meat, and castoreum. Since the animals typically stayed in one place, trappers could easily find them and could kill entire families in a lodge. Many pre-modern people mistakenly thought that castoreum was produced by the testicles or that the castor sacs of the beaver were its testicles, and females were hermaphrodites. Aesop's Fables describes beavers chewing off their testicles to preserve themselves from hunters, which is impossible because a beaver's testicles are internal. 
This myth persisted for centuries, and was corrected by French physician Guillaume Rondelet in the 1500s. Beavers have historically been hunted and captured using deadfalls, snares, nets, bows and arrows, spears, clubs, firearms, and leg-hold traps. Castoreum was used to lure the animals. Castoreum was used for a variety of medical purposes; Pliny the Elder promoted it as a treatment for stomach problems, flatulence, seizures, sciatica, vertigo, and epilepsy. He stated that it could stop hiccups when mixed with vinegar and toothaches when mixed with oil (administered into the ear opening on the same side as the tooth), and that it could be used as an antivenom. The substance has traditionally been prescribed to treat hysteria in women, which was believed to have been caused by a "toxic" womb. Castoreum's properties have been credited to the accumulation of salicylic acid from willow and aspen trees in the beaver's diet; the substance has a physiological effect comparable to that of aspirin. Today, the medical use of castoreum has declined and is limited mainly to homeopathy. The substance is also used as an ingredient in perfumes and tinctures, and as a flavoring in food and drinks. Various Native American groups have historically hunted beavers for food. Beaver meat was advantageous, being more calorie-rich and fattier than other red meats, and the animals remained plump in winter, when they were most hunted. The bones were used to make tools. In medieval Europe, the Catholic Church considered the beaver to be part mammal and part fish, and allowed followers to eat the scaly, fishlike tail on meatless Fridays during Lent. Beaver tails were thus highly prized in Europe; they were described by French naturalist Pierre Belon as tasting like a "nicely dressed eel". Beaver pelts were used to make hats; felters would remove the guard hairs. The number of pelts needed depended on the type of hat, with Cavalier and Puritan hats requiring more fur than top hats. In the late 16th century, Europeans began to deal in North American furs due to the lack of taxes or tariffs on the continent and the decline of fur-bearers at home. Beaver pelts caused or contributed to the Beaver Wars, King William's War, and the French and Indian War; the trade made John Jacob Astor and the owners of the North West Company very wealthy. For Europeans in North America, the fur trade was a driver of exploration and westward expansion on the continent and of contact with native peoples, who traded with them. The fur trade peaked between 1860 and 1870, when over 150,000 beaver pelts were purchased annually by the Hudson's Bay Company and fur companies in the United States. The contemporary global fur trade is not as profitable due to conservation, anti-fur, and animal rights campaigns. ### In culture The beaver has been used to represent productivity, trade, tradition, masculinity, and respectability. References to the beaver's skills are reflected in everyday language. The English verb "to beaver" means working with great effort and being "as busy as a beaver"; a "beaver intellect" refers to a way of thinking that is slow and honest. The word "beaver" can also be used as a sexual term for the human vulva. Native American myths emphasize the beaver's skill and industriousness. In the mythology of the Haida, beavers are descended from the Beaver-Woman, who built a dam on a stream next to their cabin while her husband was out hunting and gave birth to the first beavers. In a Cree story, the Great Beaver and its dam caused a world flood. 
Other tales involve beavers using their tree chewing skills against an enemy. Beavers have been featured as companions in some stories, including a Lakota tale where a young woman flees from her evil husband with the aid of her pet beaver. Europeans have traditionally thought of beavers as fantastical animals due to their amphibious nature. They depicted them with exaggerated tusk-like teeth, dog- or pig-like bodies, fish tails, and visible testicles. French cartographer Nicolas de Fer illustrated beavers building a dam at Niagara Falls, fantastically depicting them like human builders. Beavers have also appeared in literature such as Dante Alighieri's Divine Comedy and the writings of Athanasius Kircher, who wrote that on Noah's Ark the beavers were housed near a water-filled tub that was also used by mermaids and otters. The beaver has long been associated with Canada, appearing on the first pictorial postage stamp issued in the Canadian colonies in 1851 as the so-called "Three-Penny Beaver". It was declared the national animal in 1975. The five-cent coin, the coat of arms of the Hudson's Bay Company, and the logos for Parks Canada and Roots Canada use its image. Frank and Gordon are two fictional beavers that appeared in Bell Canada's advertisements between 2005 and 2008. However, the beaver's status as a rodent has made it controversial, and it was not chosen to be on the Arms of Canada in 1921. The beaver has commonly been used to represent Canada in political cartoons, typically to signify it as a friendly but relatively weak nation. In the United States, the beaver is the state animal of New York and Oregon. It is also featured on the coat of arms of the London School of Economics. ## See also - Beaver drop
2,644,584
Battle of Hayes Pond
1,169,602,777
1958 armed confrontation near Maxton, North Carolina, US
[ "1958 in North Carolina", "1958 riots", "January 1958 events in the United States", "Ku Klux Klan crimes", "Ku Klux Klan in North Carolina", "Lumbee", "Native American history of North Carolina", "Race riots in the United States", "Riots and civil disorder in North Carolina", "Robeson County, North Carolina" ]
The Battle of Hayes Pond, also known as the Battle of Maxton Field or the Maxton Riot, was an armed confrontation between members of a Ku Klux Klan (KKK) organization and Lumbee Indians at a Klan rally near Maxton, North Carolina, on the night of January 18, 1958. The clash resulted in the disruption of the rally and a significant amount of media coverage praising the Lumbees and condemning the Klansmen. In 1956, James W. "Catfish" Cole, a KKK member from South Carolina, established the North Carolina Knights, a Klan organization aimed at defending racial segregation. In early 1958 Cole focused his efforts on upholding segregation in Robeson County, North Carolina, which had a triracial population of Native Americans, whites, and blacks. Many of the Native Americans were members of the recently recognized Lumbee Tribe, a group that had its origins in various indigenous peoples but had grown into a single community in the county. Cole oversaw two cross burnings meant to frighten the Lumbees away from racial mixing, and scheduled a Klan rally which he hoped would have a large turnout. Cole and his Klansmen widely advertised their event, driving throughout the county in a truck outfitted with a loudspeaker to broadcast their plans. The announcements infuriated the Lumbee community and some decided to try to disrupt the meeting. Fearing violence, local law enforcement officials pleaded with Cole to suspend his plans, but he refused. On January 18, 1958, Cole and about 50 Klansmen, most of whom were followers of his from South Carolina, gathered in a leased cornfield near Hayes Pond, a place adjacent to the town of Maxton. Several hundred Lumbees, many of them armed, arrived, encircled the group, and jeered at them. After an altercation in which the single light in the field was destroyed, the Lumbees began firing their weapons and most of the Klansmen fled. Cole hid in a swamp while the Lumbees seized Klan regalia and carried them to Pembroke to celebrate. Police restored order on the field and arrested one Klansman. Afterwards, Cole and the arrested Klansman were indicted and convicted of inciting a riot. The event was widely covered in the local and national press, which blamed the Klan for the disorder and praised the Lumbees for their actions. Cole never organized another public rally in Robeson County after the incident. In 2011 the Lumbee Tribal Council declared January 18 a "Tribal Day of Historical Recognition". ## Background ### Robeson County and the Lumbee Tribe The Lumbee people in southeastern North Carolina originated from various Native American groups which were greatly impacted by conflicts and infectious diseases dating back to the period of European colonization. Those who survived these disruptions grouped together as a homogeneous community. Culturally, this group was not particularly distinct from neighboring European Americans; they were mostly agrarian, and shared similar styles of dress, homes, and music. They also spoke English and were mostly Protestants. Their identity was rooted in kinship and shared location. Through intermarriage, they acquired some white and black ancestry. In 1830, the United States government began a policy of Indian removal, forcibly relocating traditional-living, "tribal" Native American populations in the American South further west. Native Americans in Robeson County, North Carolina, owing to their assimilation into Euro-American culture, were not subject to removal. However, from this point on they were increasingly subject to racial discrimination.
In 1835 the Constitution of North Carolina classified the eastern Carolina Native Americans as "free persons of color". Under this system they were denied the right to vote, bear arms, or attend white schools. During the American Civil War, the Confederate States Army conscripted them for labor, though some resisted, leading to the Lowry War. In 1885, following Native Americans' refusal to attend black schools, the state of North Carolina recognized this group as Croatans and established a separate school system for them. This tripartite segregation was unique in the American South, though whites generally regarded both the Native Americans and blacks as "colored". Some other county facilities were separated for "Whites", "Negroes", and "Indians". In 1913 the North Carolina General Assembly reclassified the Indians as Cherokees. Hundreds of Native Americans from Robeson County fought for the United States during World War II in white units (blacks were segregated into different outfits). Many returned with a willingness to pursue social change. Some in the community, especially the war veterans, disliked Robeson County's segregation. Other leaders lobbied for the adoption of a unique name to identify their group. In the early 1950s, some of them, led by minister D. F. Lowry, formed an organization, the Lumbee Brotherhood, to unite the community. The chosen name, "Lumbee", was derived from the Lumber River, which ran through Robeson County. Lowry and his supporters argued that this was a suitable label, since the community had its origins in various indigenous groups but all resided near the river. In 1952 the name Lumbee was approved by the Native Americans in a referendum, and the following year the General Assembly formally recognized the label. In 1956 the United States Congress formally extended partial recognition to the Lumbee Tribe, affirming their existence as an indigenous community but denying them access to the federal funds and services available to other Native American groups. By 1958 Robeson County had a triracial population consisting of approximately 40,000 whites, 30,000 Native Americans (including Lumbees and Tuscaroras), and 20,000 blacks. ### Ku Klux Klan activity In 1954 the United States Supreme Court issued its decision in Brown v. Board of Education, ruling that racial segregation in public schools was unconstitutional. The ruling sparked a significant amount of pro-segregation activity among whites in the South, who formed various groups to oppose integration. It also led to a resurgence in Ku Klux Klan (KKK) activity. The Klan was a white supremacist and nativist movement which sought to defend racial segregation. It had different formal organizational incarnations, but all groups generally espoused white supremacy and a commitment to Protestant Christianity. The KKK was historically violent, and by the 1950s Klan violence was looked down upon by North Carolina officials. There had been a Klan presence in Robeson County in the early part of the decade before it was forced out under pressure from District Solicitor Malcolm Buie Seawell and the federal government. In 1956 James W. "Catfish" Cole, a former member of the U.S. Klans, organized a new Klan chapter called the North Carolina Knights. With Cole leading them as their "Grand Wizard", they held their first rally in the small Robeson County community of Shannon, where Cole defended segregation. He was able to use segregationist rhetoric to grow his following over the next year.
He also began promoting the Klan in the town of Monroe in Union County, where black civil rights activists were seeking to end segregation in public facilities. In October 1957 Cole's group attacked a National Association for the Advancement of Colored People member's house in the town, but were repelled by gunfire from armed black activists led by Robert F. Williams. In early 1958 Cole refocused his efforts on upholding segregation in Robeson County. He hoped to use this campaign to shore up support for his organization. On January 13, 1958, Cole and several Klansmen invited local journalist Bruce Roberts to cover their itinerary for the evening in Robeson. In St. Pauls, they burned a cross near the home of a Native American woman who was dating a white man. They also burned a cross in Lumberton, near the home of an Indian family that had recently moved into a white neighborhood. Cole informed Roberts that he was planning a large Klan rally the following Saturday night somewhere in or near the town of Pembroke, the center of Robeson's Lumbee community, where he would condemn the "mongrelization" of the races. Roberts reported on the events and the planned rally in the January 14 edition of the Scottish Chief, the newspaper of the small town of Maxton. Nearby publications quickly repeated the story. Cole hoped the rally would attract hundreds or thousands of Klansmen. Rumors circulated that Robeson gun stores were selling large quantities of ammunition on Tuesday, raising fears of a violent confrontation. One Klansman went into the offices of the Scottish Chief and the Lumberton Post to ask them to advertise the rally. They also posted fliers to display their intentions. To further publicize the event, Cole and other Klansmen drove throughout the county in a truck outfitted with a loudspeaker, broadcasting their plans. The loudspeaker announcements infuriated the Lumbee community. Fearing violence, Robeson County Sheriff Malcolm McLeod went to Cole's home in South Carolina and pleaded with him to suspend the rally, but Cole refused, telling him, "It sounds like you don't know how to handle your people. We're going to come show you." Unable to find someone willing to lease him land in Pembroke, Cole rented a small cornfield from a white farmer who lived near Hayes Pond. Hayes Pond was a former mill pond located along Big Shoe Heel Creek, south of Maxton, approximately 10 miles (16 km) from Pembroke. Maxton Chief of Police Bob Fisher, who was opposed to the Klan's presence, sent letters to state and federal authorities to ask for their assistance, while the town board of commissioners passed a resolution condemning the Klan and denouncing the rally. At a barbershop in Pembroke, a group of Lumbee men met and suggested confronting the Klansmen in Maxton so that they would not disturb their town. Other Lumbees discussed the situation in the local Veterans of Foreign Wars Hall. Accounts of how organized the Lumbees were in their response vary. In the 1960s anthropologist Karen Blu interviewed several Lumbee participants, and none mentioned the names of any leaders in this effort. She wrote that "one man" who was cited as a leader by the press was frequently criticized by her respondents for apparently professing that role. According to local activist Willa Robinson, black people who worked in the same businesses with Klansmen and were familiar with the KKK gave the Lumbees intelligence about the meeting. 
National news organizations such as the Associated Press, United Press International, and the International News Syndicate crafted reports printed in North Carolina and across the country which spoke of potential violence at the rally. ## Battle Cole scheduled the rally to begin at 8:30 p.m. on January 18, telling his followers to expect a crowd of at least 500 supporters. At about 7 p.m. around 10 Klansmen drove up and parked in the middle of the field. They exited their vehicles carrying guns; one was wearing Klan robes. They were confident, and one of them told a reporter from The News and Observer, "You'd better be careful. We'd hate to shoot the wrong man." Numerous local and state newspaper journalists were present, as were photographers and some radio and television broadcast reporters, including personnel from WTSB-Lumberton. The Klansmen set up a light pole and a public address system both wired to a portable generator, a banner emblazoned with the letters "KKK", and a cross which they planned to burn. Sheriff McLeod arrived with 16 deputies to maintain order. He told them that if Lumbees attacked the Klan they should "take [their] time" in breaking up a clash. A further dozen North Carolina State Highway Patrol officers under Captain Raymond Williams, some armed with submachine guns, waited about a mile down the road out of sight, ready to mobilize in case of violence. Over the course of the next hour, more Klansmen drove into the field to join those already present. Some of them brought their wives and children, though they remained in their cars to keep warm. Most of them were from South Carolina, and few, if any, were from Robeson County. At the same time, cars carrying three to six Lumbees each began parking along the side of the road. They remained in their vehicles to stay warm. By 8 p.m. the Klansmen, numbering about 50, realized they were outnumbered and grew anxious. Cole rehearsed his speech—which condemned racial integration—while the public address system played Christian hymns. At about 8:15 p.m., the Lumbees exited their vehicles and began streaming towards the field. Historian Christopher Oakley estimated that 300–400 Lumbees were present, most of them men. Historian Malinda Maynor Lowery listed the presence of 500 Lumbee men—many of them World War II veterans—and 50 women. Some accounts recall 1,000 Native Americans present. Many of the men were armed with rifles, shotguns, pistols, and knives. As the Lumbees drew closer they began to jeer the Klan, shouting "We want Cole!" and "God damn the KKK!" The Klansmen responded by calling the Lumbees "half-niggers". McLeod pulled Cole aside and said, "Well, you know how it is. I can't control the crowd with the few men I've got. I'm not telling you to not hold a meeting, but you see how it is." According to The News and Observer reporter Charles Craven, Cole told the sheriff, "I want to get my wife and babies out...Somebody's going for them...My little babies." Cole refused to suspend the meeting, and by 8:25 most of the Klansmen and Lumbee had circled around the light pole. Sources disagree on how the physical confrontation started. According to Oakley, shortly before 8:30 two young Lumbee men ran forward, smashed the light pole, and shut off the public address system. This plunged the field into darkness and led to a momentary silence. According to Sanford Locklear, he and his brother-in-law, Neil Lowry, approached Cole and asked him why he was there. 
Cole said, "We come to talk to these people," to which Locklear responded, "Well, you're ain't gone [sic] talk to these people tonight." Cole reaffirmed his intention to speak, which Locklear again rejected. Locklear then pushed Cole with his rifle and said, "And don't you move. If you do, well, I'll kill you." Lowry then shot out the light on the pole, and Locklear kicked the public address system. Initial newspaper reports of the affair stated that one Lumbee smashed the light with the butt of his shotgun, and this version corresponded to media photographs. Newsweek was the first publication to report that the light had been shot out. The Lumbee then began firing their guns—mostly into the air—and shouting. Some fired at the tires of the Klansmen's cars. News photographers then began taking photos of the ensuing commotion. Cole quickly retreated into the nearby swamp, leaving his wife, Carolyn, and his three children behind. According to Oakley, most of the Klansmen did the same, while Lowery wrote that many of them got in their cars and drove away erratically in an attempt to escape, some crashing into ditches. Carolyn got her car stuck in a ditch; Lumbee oral tradition maintains that they had to help her push her car out, while Craven recalled seeing her run away with her three children as several Lumbee men jokingly "pretended" to free her car from the rut. Robeson County sheriff's deputies then fired two tear gas grenades in an attempt to disperse the crowd. Several minutes later Williams led the highway patrol officers onto the field and restored order. McLeod announced over loudspeaker that there was still time to go home and watch Gunsmoke on television. He also found Klansmen hiding in the brush and directed them out of the area. By 9 p.m., the wind had dissipated the tear gas and the crowd had been cleared. The police confiscated two trunkloads of firearms from the Lumbees and Klansmen. James Garland Martin, a Klansman who served as Cole's sergeant-at-arms, was found by deputies lying in a ditch and subsequently arrested for public drunkenness and carrying a concealed weapon. Cole remained in hiding for two days. Four Klansmen received minor gunshot wounds during the affair. Three reporters were also injured, as was a Lakota U.S. Army soldier who had traveled from Fort Bragg to witness the events. After the shooting stopped, several Lumbee spoke with the press and posed for photographs. Some took the Klan's public address system and their cross. Simeon Oxendine, a World War II veteran and the son of Pembroke's mayor, and Charlie Warriax, stole the KKK banner. Later that night Lumbees celebrated in Pembroke, driving in a motorcade and marching through the streets before gathering in front of the police station in Pembroke to hang and burn an effigy of Cole. Oxendine and Warriax drove to the city of Charlotte with the KKK banner and entered the offices of The Charlotte Observer shortly after midnight. They gave an interview with the reporter on duty and posed with the banner in the photography studio. One picture from the shoot of Oxendine and Warriax wrapped in the banner was sent to other newspapers over the Associated Press wire and published a week later on a full page spread in Life. ## Aftermath ### Reactions The local black community was pleased with the results of the clash, while the white community was relieved. North Carolina Governor Luther H. Hodges responded to the incident by calling Sheriff McLeod and Pembroke Mayor J. C. 
Oxendine to assure them of his help if the situation required it. He then released a statement to the public, condemning the Ku Klux Klan as a violent group and stating that responsibility for the disorder "rests squarely" on Klan leaders. Other white observers—both locally and nationally—had mixed feelings about responsibility, expressing sympathy for the Lumbees' actions but suggesting that Cole's First Amendment rights may have been violated. Mayor Oxendine received telegrams, letters, and phone calls of approval from Native and non-indigenous Americans from around the United States. Alabama Governor Jim Folsom issued a statement reading, "The white man has mistreated the Indian for 400 years. This is one time I'm glad to see and hope the Indians continue to beat the paleface." The day after the failed rally, large North Carolinian newspapers such as The News and Observer and The Charlotte Observer ran stories on the clash. Most were favorable to the Lumbees and portrayed the Klansmen as antagonists. Initial reports in state and national newspapers were melodramatic and portrayed the Lumbees using stereotypes associated with Western Plains Indians. The Santa Fe New Mexican misidentified the Lumbees as part of the Eastern Band of Cherokee Indians. The local media in Robeson County did not publish on Sundays; thus it was not until later in the week that the Scottish Chief and The Robesonian released their reporting. Their stories covered the affair and preceding events in detail and avoided the use of caricatures, treating the Lumbees as they did other community residents. On January 23 the Scottish Chief issued an editorial titled "Setting the Record Straight," which criticized the national sensationalism, saying, "all too frequently news mediums are searching for a colorful angle to a story and in doing so stretch or add to the facts." Local editorials sided with the Lumbees, framing the clash as a conflict between locals and outsiders, though the editorial board of The Robesonian downplayed local discontent with segregation, proclaiming there was "no racial rift" between Native Americans and whites in Robeson County. Editorial pieces from around the United States ridiculed the Klan for its behavior. The Anti-Defamation League reported that the affair "sent a ripple of laughter clear across the country." Reflecting on the national praise for the Lumbees' actions at Hayes Pond in contrast to the muted response to the armed black resistance to the KKK in Monroe in 1957, Robert Williams wrote, "The national press played up the Indian-Klan fight because they didn't consider this a great threat—the Indians are a tiny minority and people could laugh at the incident as a sentimental joke—but no one wanted Negroes to get the impression that this was an accepted way to deal with the Klan." ### Legal proceedings On January 20 Sheriff McLeod declared that he would seek the arrest of Cole for the disorder. The following day a Robeson County grand jury indicted Cole, Martin, and others unknown to the state for inciting a riot. Cole, who was by then in South Carolina, posted bail, declared his intent to fight extradition back to North Carolina, and announced he would host a new rally, saying, "It will be the greatest rally the Klan has had. I expect there will be more than 5,000 Klansmen there and probably more. Klansmen all over the South are pretty upset." This never occurred. Cole was eventually extradited with the permission of the South Carolina governor and held on bond.
On January 23, Martin was tried for the drunkenness and weapons charges before the Recorder's Court in Maxton by Judge Pro Tem Lacy Maynor, the second Native American in Robeson County ever to be elected to a judgeship. Martin denounced the Klan for abandoning him in the field and vowed that he would leave the organization. Maynor thought that the circumstances were "tragic" and gave Martin a suspended 60-day sentence and a \$60 fine. In delivering the sentence, the judge told him, "You have helped to bring about nationwide advertisement to a people who do not want that kind of advertisement—who only want to create a community that would be an asset to our nation. If your organization had something worthwhile to offer, we would be happy to have you. But the history of your organization proves that it has nothing to offer". Cole and Martin both faced the riot charges in the Robeson County Superior Court in Lumberton. Cole argued in his defense that he had legally rented the field and had a right to hold a rally, and that the Lumbees had provoked the situation while McLeod had provided inadequate security. The prosecutors argued that the Klan had aggravated public sentiment by burning crosses in the county and employing inflammatory speech, had billed the rally as a public event (and thus not a private meeting), and, according to statements made by Martin, had encouraged Klansmen to bring weapons with them. About 350 Lumbees sat in the gallery during the trial. The prosecutor asked the jury, "Gentlemen, you had better stop this. If you don't, there will be more bloodshed." He then pointed to the audience and said, "If you think you can take [any] Kluxer [...] and drive that crowd around, you've got another think a-coming". In March, the jury found Martin and Cole guilty. The judge gave Cole the harsher sentence of 18–24 months' incarceration, while Martin received a lesser punishment. Cole appealed his case and was freed on bond pending its reconsideration. ### Impact on the Ku Klux Klan Cole never organized another public rally in Robeson County after the incident. In the wake of Cole's and Martin's arrests, as well as some disagreements about organization finances, some members of the North Carolina Knights split off and created their own Klan chapters. Cole attempted to rebrand his organization as a militant "fighting outfit" and used this image to recruit new members across the state with some success. Throughout North Carolina, Klan leaders told their members to expect armed resistance to their work and prepare themselves accordingly. In early 1959 Cole was arrested in South Carolina for posing as a private investigator and shortly thereafter lost his appeal in North Carolina for the riot charge and was imprisoned. His incarceration curtailed Klan recruiting, and though the North Carolina Knights elected a new grand wizard to replace him, coordinated policing by the State Bureau of Investigation and other agencies—encouraged by Hodges—led to a decline in Klan membership. North Carolina KKK organizations later resurged in the mid-1960s. In 1966, Klansmen declared their intent to hold another rally at the same field near Maxton, provoking the ire of Lumbees. State authorities received reports of Lumbees stockpiling weapons, and a superior court issued an injunction prohibiting the meeting. Klan challenges to the order were dismissed.
North Carolina Knights Grand Dragon Bob Jones told the press, "We want to ally with the Indian and see he gets some civil rights from the government. The Indians have never had an ally and if we're going to give civil rights to the Niggers, we're going to give them to the Indians." Simeon Oxendine was dismissive of these remarks, saying, "I don't think Jones is in a position to give anything to anyone. I think the constitution gives us our rights," and the Native Americans in the county were unreceptive to the invitation. ## Legacy and commemoration The clash has been generally remembered under two monikers: the "Battle of Hayes Pond" or the "Battle of Maxton Field". The media dubbed it the "Maxton Riot". The incident brought national attention to the Lumbee people, with Lumbee historian Adolph Dial later saying, "Until the Klan thing, people didn't even know there were Lumbees." In the aftermath of the battle, most Lumbees recalled it as a purely local affair and an act of self-defense for their community against hostile outsiders; they did not see it as a symbolic protest, an attempt to gain national attention, or a component of the larger American civil rights movement. Local whites also tended to view the Klan rally as the work of outsiders from South Carolina. In 1958, California-based folk singer Malvina Reynolds wrote a song about the incident entitled "The Battle of Maxton Field", which satirized the Klan and was later covered by folk musician Pete Seeger with commercial success. Since 1958, several Lumbee authors have written accounts of the battle, and the Lumbee Tribe included a recounting in its 1987 petition for full federal recognition to the Bureau of Indian Affairs. Newspapers in North Carolina have periodically cited the clash in their discussions of the Klan and white supremacy. In 2003 the Lumbee Tribe presented 100 "Lumbee Warriors"—persons verified to have been involved in the Hayes Pond battle—with a medal of honor. In 2011 the Lumbee Tribal Council passed an ordinance declaring January 18 a "Tribal Day of Historical Recognition". On June 26, 2018, North Carolina erected a highway historical marker at the junction of NC Highway 130 and Maxton Pond Road near Maxton to commemorate the event. In October 2021, politician Charles Graham, a Lumbee from Robeson County, released a video advertisement for his 2022 campaign in North Carolina's 9th congressional district which recounted the battle. The video went viral on the internet, garnering over 4 million views within 24 hours and 8 million within three days on the social media platforms Twitter, Facebook, and TikTok, becoming the most-viewed congressional advertisement ever. Graham said he used the event in his campaign to showcase "history where people of all walks of life came together to stand against absolute evil." ## Explanatory notes
37,332
Josiah Willard Gibbs
1,172,477,548
American scientist (1839–1903)
[ "1839 births", "1903 deaths", "19th-century American mathematicians", "20th-century American mathematicians", "American physical chemists", "Burials at Grove Street Cemetery", "Computational statisticians", "Connecticut Republicans", "Fluid dynamicists", "Foreign Members of the Royal Society", "Hall of Fame for Great Americans inductees", "Heidelberg University alumni", "Hopkins School alumni", "Mathematical analysts", "Philosophers of science", "Recipients of the Copley Medal", "Scientists from New Haven, Connecticut", "Statistical physicists", "Theoretical physicists", "Thermodynamicists", "Yale College alumni", "Yale School of Engineering & Applied Science alumni", "Yale University faculty" ]
Josiah Willard Gibbs (/ɡɪbz/; February 11, 1839 – April 28, 1903) was an American scientist who made significant theoretical contributions to physics, chemistry, and mathematics. His work on the applications of thermodynamics was instrumental in transforming physical chemistry into a rigorous deductive science. Together with James Clerk Maxwell and Ludwig Boltzmann, he created statistical mechanics (a term that he coined), explaining the laws of thermodynamics as consequences of the statistical properties of ensembles of the possible states of a physical system composed of many particles. Gibbs also worked on the application of Maxwell's equations to problems in physical optics. As a mathematician, he invented modern vector calculus (independently of the British scientist Oliver Heaviside, who carried out similar work during the same period). In 1863, Yale awarded Gibbs the first American doctorate in engineering. After a three-year sojourn in Europe, Gibbs spent the rest of his career at Yale, where he was a professor of mathematical physics from 1871 until his death in 1903. Working in relative isolation, he became the earliest theoretical scientist in the United States to earn an international reputation and was praised by Albert Einstein as "the greatest mind in American history." In 1901, Gibbs received what was then considered the highest honor awarded by the international scientific community, the Copley Medal of the Royal Society of London, "for his contributions to mathematical physics." Commentators and biographers have remarked on the contrast between Gibbs's quiet, solitary life in turn-of-the-century New England and the great international impact of his ideas. Though his work was almost entirely theoretical, the practical value of Gibbs's contributions became evident with the development of industrial chemistry during the first half of the 20th century. According to Robert A. Millikan, in pure science, Gibbs "did for statistical mechanics and thermodynamics what Laplace did for celestial mechanics and Maxwell did for electrodynamics, namely, made his field a well-nigh finished theoretical structure." ## Biography ### Family background Gibbs was born in New Haven, Connecticut. He belonged to an old Yankee family that had produced distinguished American clergymen and academics since the 17th century. He was the fourth of five children and the only son of Josiah Willard Gibbs Sr., and his wife Mary Anna, née Van Cleve. On his father's side, he was descended from Samuel Willard, who served as acting President of Harvard College from 1701 to 1707. On his mother's side, one of his ancestors was the Rev. Jonathan Dickinson, the first president of the College of New Jersey (later Princeton University). Gibbs's given name, which he shared with his father and several other members of his extended family, derived from his ancestor Josiah Willard, who had been Secretary of the Province of Massachusetts Bay in the 18th century. His paternal grandmother, Mercy (Prescott) Gibbs, was the sister of Rebecca Minot Prescott Sherman, the wife of American founding father Roger Sherman; and he was the second cousin of Roger Sherman Baldwin (see the Amistad case below). The elder Gibbs was generally known to his family and colleagues as "Josiah", while the son was called "Willard". Josiah Gibbs was a linguist and theologian who served as professor of sacred literature at Yale Divinity School from 1824 until his death in 1861.
He is chiefly remembered today as the abolitionist who found an interpreter for the African passengers of the ship Amistad, allowing them to testify during the trial that followed their rebellion against being sold as slaves. ### Education Willard Gibbs was educated at the Hopkins School and entered Yale College in 1854 at the age of 15. At Yale, Gibbs received prizes for excellence in mathematics and Latin, and he graduated in 1858, near the top of his class. He remained at Yale as a graduate student at the Sheffield Scientific School. At age 19, soon after his graduation from college, Gibbs was inducted into the Connecticut Academy of Arts and Sciences, a scholarly institution composed primarily of members of the Yale faculty. Relatively few documents from the period survive and it is difficult to reconstruct the details of Gibbs's early career with precision. In the opinion of biographers, Gibbs's principal mentor and champion, both at Yale and in the Connecticut Academy, was probably the astronomer and mathematician Hubert Anson Newton, a leading authority on meteors, who remained Gibbs's lifelong friend and confidant. After the death of his father in 1861, Gibbs inherited enough money to make him financially independent. Recurrent pulmonary trouble ailed the young Gibbs and his physicians were concerned that he might be susceptible to tuberculosis, which had killed his mother. He also suffered from astigmatism, whose treatment was then still largely unfamiliar to oculists, so that Gibbs had to diagnose himself and grind his own lenses. Though in later years he used glasses only for reading or other close work, Gibbs's delicate health and imperfect eyesight probably explain why he did not volunteer to fight in the Civil War of 1861–65. He was not conscripted and he remained at Yale for the duration of the war. In 1863, Gibbs received the first Doctorate of Philosophy (PhD) in engineering granted in the US, for a thesis entitled "On the Form of the Teeth of Wheels in Spur Gearing", in which he used geometrical techniques to investigate the optimum design for gears. In 1861, Yale had become the first US university to offer a PhD degree and Gibbs's was only the fifth PhD granted in the US in any subject. ### Career, 1863–73 After graduation, Gibbs was appointed as tutor at the college for a term of three years. During the first two years, he taught Latin and during the third year, he taught "natural philosophy" (i.e., physics). In 1866, he patented a design for a railway brake and read a paper before the Connecticut Academy, entitled "The Proper Magnitude of the Units of Length", in which he proposed a scheme for rationalizing the system of units of measurement used in mechanics. After his term as tutor ended, Gibbs traveled to Europe with his sisters. They spent the winter of 1866–67 in Paris, where Gibbs attended lectures at the Sorbonne and the Collège de France, given by such distinguished mathematical scientists as Joseph Liouville and Michel Chasles. Having undertaken a punishing regimen of study, Gibbs caught a serious cold and a doctor, fearing tuberculosis, advised him to rest on the Riviera, where he and his sisters spent several months and where he made a full recovery. Moving to Berlin, Gibbs attended the lectures taught by mathematicians Karl Weierstrass and Leopold Kronecker, as well as by chemist Heinrich Gustav Magnus. In August 1867, Gibbs's sister Julia was married in Berlin to Addison Van Name, who had been Gibbs's classmate at Yale. 
The newly married couple returned to New Haven, leaving Gibbs and his sister Anna in Germany. In Heidelberg, Gibbs was exposed to the work of physicists Gustav Kirchhoff and Hermann von Helmholtz, and chemist Robert Bunsen. At the time, German academics were the leading authorities in the natural sciences, especially chemistry and thermodynamics. Gibbs returned to Yale in June 1869 and briefly taught French to engineering students. It was probably also around this time that he worked on a new design for a steam-engine governor, his last significant investigation in mechanical engineering. In 1871, he was appointed Professor of Mathematical Physics at Yale, the first such professorship in the United States. Gibbs, who had independent means and had yet to publish anything, was assigned to teach graduate students exclusively and was hired without salary. ### Career, 1873–80 Gibbs published his first work in 1873. His papers on the geometric representation of thermodynamic quantities appeared in the Transactions of the Connecticut Academy. These papers introduced the use of different types of phase diagrams, which were his favorite aids to the imagination when doing research, rather than the mechanical models, such as the ones that Maxwell used in constructing his electromagnetic theory, which might not completely represent their corresponding phenomena. Although the journal had few readers capable of understanding Gibbs's work, he shared reprints with correspondents in Europe and received an enthusiastic response from James Clerk Maxwell at Cambridge. Maxwell even made, with his own hands, a clay model illustrating Gibbs's construct. He then produced two plaster casts of his model and mailed one to Gibbs. That cast is on display at the Yale physics department. Maxwell included a chapter on Gibbs's work in the next edition of his Theory of Heat, published in 1875. He explained the usefulness of Gibbs's graphical methods in a lecture to the Chemical Society of London and even referred to them in the article on "Diagrams" that he wrote for the Encyclopædia Britannica. Prospects of collaboration between him and Gibbs were cut short by Maxwell's early death in 1879, aged 48. The joke later circulated in New Haven that "only one man lived who could understand Gibbs's papers. That was Maxwell, and now he is dead." Gibbs then extended his thermodynamic analysis to multi-phase chemical systems (i.e., to systems composed of more than one form of matter) and considered a variety of concrete applications. He described that research in a monograph titled "On the Equilibrium of Heterogeneous Substances", published by the Connecticut Academy in two parts that appeared respectively in 1875 and 1878. That work, which covers about three hundred pages and contains exactly seven hundred numbered mathematical equations, begins with a quotation from Rudolf Clausius that expresses what would later be called the first and second laws of thermodynamics: "The energy of the world is constant. The entropy of the world tends towards a maximum." Gibbs's monograph rigorously and ingeniously applied his thermodynamic techniques to the interpretation of physico-chemical phenomena, explaining and relating what had previously been a mass of isolated facts and observations. The work has been described as "the Principia of thermodynamics" and as a work of "practically unlimited scope". It solidly laid the foundation for physical chemistry.
Wilhelm Ostwald, who translated Gibbs's monograph into German, referred to Gibbs as the "founder of chemical energetics". According to modern commentators, > It is universally recognised that its publication was an event of the first importance in the history of chemistry ... Nevertheless it was a number of years before its value was generally known, this delay was due largely to the fact that its mathematical form and rigorous deductive processes make it difficult reading for anyone, and especially so for students of experimental chemistry whom it most concerns. Gibbs continued to work without pay until 1880, when the new Johns Hopkins University in Baltimore, Maryland, offered him a position paying \$3,000 per year. In response, Yale offered him an annual salary of \$2,000, which he was content to accept. ### Career, 1880–1903 From 1880 to 1884, Gibbs worked on developing the exterior algebra of Hermann Grassmann into a vector calculus well-suited to the needs of physicists. With this object in mind, Gibbs distinguished between the dot and cross products of two vectors and introduced the concept of dyadics. Similar work was carried out independently, and at around the same time, by the British mathematical physicist and engineer Oliver Heaviside. Gibbs sought to convince other physicists of the convenience of the vectorial approach over the quaternionic calculus of William Rowan Hamilton, which was then widely used by British scientists. This led him, in the early 1890s, to a controversy with Peter Guthrie Tait and others in the pages of Nature. Gibbs's lecture notes on vector calculus were privately printed in 1881 and 1884 for the use of his students, and were later adapted by Edwin Bidwell Wilson into a textbook, Vector Analysis, published in 1901. That book helped to popularize the "del" notation that is widely used today in electrodynamics and fluid mechanics. In other mathematical work, he re-discovered the "Gibbs phenomenon" in the theory of Fourier series (which, unbeknownst to him and to later scholars, had been described fifty years before by an obscure English mathematician, Henry Wilbraham). From 1882 to 1889, Gibbs wrote five papers on physical optics, in which he investigated birefringence and other optical phenomena and defended Maxwell's electromagnetic theory of light against the mechanical theories of Lord Kelvin and others. In his work on optics, just as much as in his work on thermodynamics, Gibbs deliberately avoided speculating about the microscopic structure of matter and purposefully confined his research problems to those that could be solved from broad general principles and experimentally confirmed facts. The methods that he used were highly original, and the results he obtained showed decisively the correctness of Maxwell's electromagnetic theory. Gibbs coined the term statistical mechanics and introduced key concepts in the corresponding mathematical description of physical systems, including the notions of chemical potential (1876) and statistical ensemble (1902). Gibbs's derivation of the laws of thermodynamics from the statistical properties of systems consisting of many particles was presented in his highly influential textbook Elementary Principles in Statistical Mechanics, published in 1902, a year before his death. Gibbs's retiring personality and intense focus on his work limited his accessibility to students. His principal protégé was Edwin Bidwell Wilson, who nonetheless explained that "except in the classroom I saw very little of Gibbs.
He had a way, toward the end of the afternoon, of taking a stroll about the streets between his study in the old Sloane Laboratory and his home—a little exercise between work and dinner—and one might occasionally come across him at that time." Gibbs did supervise the doctoral thesis on mathematical economics written by Irving Fisher in 1891. After Gibbs's death, Fisher financed the publication of his Collected Works. Another distinguished student was Lee De Forest, later a pioneer of radio technology. Gibbs died in New Haven on April 28, 1903, at the age of 64, the victim of an acute intestinal obstruction. A funeral was conducted two days later at his home at 121 High Street, and his body was buried in the nearby Grove Street Cemetery. In May, Yale organized a memorial meeting at the Sloane Laboratory. The eminent British physicist J. J. Thomson was in attendance and delivered a brief address. ### Personal life and character Gibbs never married, living all his life in his childhood home with his sister Julia and her husband Addison Van Name, who was the Yale librarian. Except for his customary summer vacations in the Adirondacks (at Keene Valley, New York) and later at the White Mountains (in Intervale, New Hampshire), his sojourn in Europe in 1866–69 was almost the only time that Gibbs spent outside New Haven. He joined Yale's College Church (a Congregational church) at the end of his freshman year and remained a regular attendant for the rest of his life. Gibbs generally voted for the Republican candidate in presidential elections but, like other "Mugwumps", his concern over the growing corruption associated with machine politics led him to support Grover Cleveland, a conservative Democrat, in the election of 1884. Little else is known of his religious or political views, which he mostly kept to himself. Gibbs did not produce a substantial personal correspondence and many of his letters were later lost or destroyed. Beyond the technical writings concerning his research, he published only two other pieces: a brief obituary for Rudolf Clausius, one of the founders of the mathematical theory of thermodynamics, and a longer biographical memoir of his mentor at Yale, H. A. Newton. In Edwin Bidwell Wilson's view, > Gibbs was not an advertiser for personal renown nor a propagandist for science; he was a scholar, scion of an old scholarly family, living before the days when research had become résearch ... Gibbs was not a freak, he had no striking ways, he was a kindly dignified gentleman. According to Lynde Wheeler, who had been Gibbs's student at Yale, in his later years Gibbs > was always neatly dressed, usually wore a felt hat on the street, and never exhibited any of the physical mannerisms or eccentricities sometimes thought to be inseparable from genius ... His manner was cordial without being effusive and conveyed clearly the innate simplicity and sincerity of his nature. He was a careful investor and financial manager, and at his death in 1903 his estate was valued at \$100,000. For many years, he served as trustee, secretary, and treasurer of his alma mater, the Hopkins School. US President Chester A. Arthur appointed him as one of the commissioners to the National Conference of Electricians, which convened in Philadelphia in September 1884, and Gibbs presided over one of its sessions. A keen and skilled horseman, Gibbs was seen habitually in New Haven driving his sister's carriage.
In an obituary published in the American Journal of Science, Gibbs's former student Henry A. Bumstead referred to Gibbs's personal character: > Unassuming in manner, genial and kindly in his intercourse with his fellow-men, never showing impatience or irritation, devoid of personal ambition of the baser sort or of the slightest desire to exalt himself, he went far toward realizing the ideal of the unselfish, Christian gentleman. In the minds of those who knew him, the greatness of his intellectual achievements will never overshadow the beauty and dignity of his life. ## Major scientific contributions ### Chemical and electrochemical thermodynamics Gibbs's papers from the 1870s introduced the idea of expressing the internal energy U of a system in terms of the entropy S, in addition to the usual state-variables of volume V, pressure p, and temperature T. He also introduced the concept of the chemical potential $\mu$ of a given chemical species, defined to be the rate of the increase in U associated with the increase in the number N of molecules of that species (at constant entropy and volume). Thus, it was Gibbs who first combined the first and second laws of thermodynamics by expressing the infinitesimal change in the internal energy, dU, of a closed system in the form: $\mathrm{d}U = T\mathrm{d}S - p \,\mathrm{d}V + \sum_i \mu_i \,\mathrm{d} N_i\,$ where T is the absolute temperature, p is the pressure, dS is an infinitesimal change in entropy, and dV is an infinitesimal change in volume. The last term is the sum, over all the chemical species in a chemical reaction, of the chemical potential, μ<sub>i</sub>, of the i<sup>th</sup> species, multiplied by the infinitesimal change in the number of moles, dN<sub>i</sub>, of that species. By taking the Legendre transform of this expression, he defined the concepts of enthalpy, H, and Gibbs free energy, G. $G_{(p,T)} = H - TS$ This compares to the expression for Helmholtz free energy, A. $A_{(v,T)} = U-TS\,$ When the change in Gibbs free energy for a chemical reaction is negative, the reaction will proceed spontaneously. When a chemical system is at equilibrium, the change in Gibbs free energy is zero. An equilibrium constant is simply related to the free energy change when the reactants and products are in their standard states. $\Delta G^\ominus=-RT \ln K^\ominus$ The chemical potential is usually defined as the partial molar Gibbs free energy. $\mu_i=\left(\frac{\partial G}{\partial N_i}\right)_{T,P,N_{j\neq i}}$ Gibbs also obtained what later came to be known as the "Gibbs–Duhem equation". In an electrochemical reaction characterized by an electromotive force E and an amount of transferred charge Q, Gibbs's starting equation becomes $\mathrm{d}U = T\mathrm{d}S - p \,\mathrm{d}V + \mathcal{E}\mathrm{d}Q$. The publication of the paper "On the Equilibrium of Heterogeneous Substances" (1874–78) is now regarded as a landmark in the development of chemistry. In it, Gibbs developed a rigorous mathematical theory for various transport phenomena, including adsorption, electrochemistry, and the Marangoni effect in fluid mixtures. He also formulated the phase rule $F\;=\;C\;-\;P\;+\;2$ for the number F of variables that may be independently controlled in an equilibrium mixture of C components existing in P phases. The phase rule is very useful in diverse areas, such as metallurgy, mineralogy, and petrology. It can also be applied to various research problems in physical chemistry.
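The relation between the standard free-energy change and the equilibrium constant, together with the phase rule, can be illustrated with a brief numerical sketch. The Python example below is only an illustration: the reaction value of −20 kJ/mol, the temperature, and the function names are hypothetical choices made for this example, not anything taken from Gibbs's papers or from a chemistry library.

```python
# Minimal numerical sketch of two relations from this section.
# The -20 kJ/mol free-energy change below is a hypothetical example value.
import math

R = 8.314  # gas constant, J/(mol*K)

def equilibrium_constant(delta_g_standard, temperature):
    """K from the relation delta_G_standard = -R * T * ln(K)."""
    return math.exp(-delta_g_standard / (R * temperature))

def gibbs_phase_rule(components, phases):
    """Degrees of freedom F = C - P + 2 for an equilibrium mixture."""
    return components - phases + 2

# A hypothetical reaction with delta_G_standard = -20 kJ/mol at 298.15 K:
# the negative value means the reaction proceeds spontaneously and K > 1.
print(f"K = {equilibrium_constant(-20_000.0, 298.15):.3g}")  # about 3.2e3

# Water at its triple point: one component in three phases gives F = 0,
# so temperature and pressure are both fixed.
print("F at the triple point of water:", gibbs_phase_rule(1, 3))
```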
### Statistical mechanics Together with James Clerk Maxwell and Ludwig Boltzmann, Gibbs founded "statistical mechanics", a term that he coined to identify the branch of theoretical physics that accounts for the observed thermodynamic properties of systems in terms of the statistics of ensembles of all possible physical states of a system composed of many particles. He introduced the concept of "phase of a mechanical system". He used the concept to define the microcanonical, canonical, and grand canonical ensembles; all related to the Gibbs measure, thus obtaining a more general formulation of the statistical properties of many-particle systems than Maxwell and Boltzmann had achieved before him. Gibbs generalized Boltzmann's statistical interpretation of entropy $S$ by defining the entropy of an arbitrary ensemble as $S = -k_\text{B}\,\sum_i p_i \ln \,p_i$, where $k_\text{B}$ is the Boltzmann constant, while the sum is over all possible microstates $i$, with $p_i$ the corresponding probability of the microstate (see Gibbs entropy formula). This same formula would later play a central role in Claude Shannon's information theory and is therefore often seen as the basis of the modern information-theoretical interpretation of thermodynamics. According to Henri Poincaré, writing in 1904, even though Maxwell and Boltzmann had previously explained the irreversibility of macroscopic physical processes in probabilistic terms, "the one who has seen it most clearly, in a book too little read because it is a little difficult to read, is Gibbs, in his Elementary Principles of Statistical Mechanics." Gibbs's analysis of irreversibility, and his formulation of Boltzmann's H-theorem and of the ergodic hypothesis, were major influences on the mathematical physics of the 20th century. Gibbs was well aware that the application of the equipartition theorem to large systems of classical particles failed to explain the measurements of the specific heats of both solids and gases, and he argued that this was evidence of the danger of basing thermodynamics on "hypotheses about the constitution of matter". Gibbs's own framework for statistical mechanics, based on ensembles of macroscopically indistinguishable microstates, could be carried over almost intact after the discovery that the microscopic laws of nature obey quantum rules, rather than the classical laws known to Gibbs and to his contemporaries. His resolution of the so-called "Gibbs paradox", about the entropy of the mixing of gases, is now often cited as a prefiguration of the indistinguishability of particles required by quantum physics. ### Vector analysis British scientists, including Maxwell, had relied on Hamilton's quaternions in order to express the dynamics of physical quantities, like the electric and magnetic fields, having both a magnitude and a direction in three-dimensional space. Following W. K. Clifford in his Elements of Dynamic (1888), Gibbs noted that the product of quaternions could be separated into two parts: a one-dimensional (scalar) quantity and a three-dimensional vector, so that the use of quaternions involved mathematical complications and redundancies that could be avoided in the interest of simplicity and to facilitate teaching. In his Yale classroom notes he defined distinct dot and cross products for pairs of vectors and introduced the now common notation for them. Through the 1901 textbook Vector Analysis prepared by E. B. 
Wilson from Gibbs's notes, he was largely responsible for the development of the vector calculus techniques still used today in electrodynamics and fluid mechanics. While he was working on vector analysis in the late 1870s, Gibbs discovered that his approach was similar to the one that Grassmann had taken in his "multiple algebra". Gibbs then sought to publicize Grassmann's work, stressing that it was both more general and historically prior to Hamilton's quaternionic algebra. To establish the priority of Grassmann's ideas, Gibbs convinced Grassmann's heirs to seek the publication in Germany of the essay "Theorie der Ebbe und Flut" on tides that Grassmann had submitted in 1840 to the faculty at the University of Berlin, in which he had first introduced the notion of what would later be called a vector space (linear space). As Gibbs had advocated in the 1880s and 1890s, quaternions were eventually all but abandoned by physicists in favor of the vectorial approach developed by him and, independently, by Oliver Heaviside. Gibbs applied his vector methods to the determination of planetary and comet orbits. He also developed the concept of mutually reciprocal triads of vectors that later proved to be of importance in crystallography. ### Physical optics Though Gibbs's research on physical optics is less well known today than his other work, it made a significant contribution to classical electromagnetism by applying Maxwell's equations to the theory of optical processes such as birefringence, dispersion, and optical activity. In that work, Gibbs showed that those processes could be accounted for by Maxwell's equations without any special assumptions about the microscopic structure of matter or about the nature of the medium in which electromagnetic waves were supposed to propagate (the so-called luminiferous ether). Gibbs also stressed that the absence of a longitudinal electromagnetic wave, which is needed to account for the observed properties of light, is automatically guaranteed by Maxwell's equations (by virtue of what is now called their "gauge invariance"), whereas in mechanical theories of light, such as Lord Kelvin's, it must be imposed as an ad hoc condition on the properties of the aether. In his last paper on physical optics, Gibbs concluded that > it may be said for the electrical theory [of light] that it is not obliged to invent hypotheses, but only to apply the laws furnished by the science of electricity, and that it is difficult to account for the coincidences between the electrical and optical properties of media unless we regard the motions of light as electrical. Shortly afterwards, the electromagnetic nature of light was demonstrated by the experiments of Heinrich Hertz in Germany. ## Scientific recognition Gibbs worked at a time when there was little tradition of rigorous theoretical science in the United States. His research was not easily understandable to his students or his colleagues, and he made no effort to popularize his ideas or to simplify their exposition to make them more accessible. His seminal work on thermodynamics was published mostly in the Transactions of the Connecticut Academy, a journal edited by his librarian brother-in-law, which was little read in the US and even less so in Europe. When Gibbs submitted his long paper on the equilibrium of heterogeneous substances to the academy, both Elias Loomis and H. A.
Newton protested that they did not understand Gibbs's work at all, but they helped to raise the money needed to pay for the typesetting of the many mathematical symbols in the paper. Several Yale faculty members, as well as business and professional men in New Haven, contributed funds for that purpose. Even though it had been immediately embraced by Maxwell, Gibbs's graphical formulation of the laws of thermodynamics only came into widespread use in the mid 20th century, with the work of László Tisza and Herbert Callen. According to James Gerald Crowther, > in his later years [Gibbs] was a tall, dignified gentleman, with a healthy stride and ruddy complexion, performing his share of household chores, approachable and kind (if unintelligible) to students. Gibbs was highly esteemed by his friends, but American science was too preoccupied with practical questions to make much use of his profound theoretical work during his lifetime. He lived out his quiet life at Yale, deeply admired by a few able students but making no immediate impress on American science commensurate with his genius. On the other hand, Gibbs did receive the major honors then possible for an academic scientist in the US. He was elected to the National Academy of Sciences in 1879 and received the 1880 Rumford Prize from the American Academy of Arts and Sciences for his work on chemical thermodynamics. He was also awarded honorary doctorates by Princeton University and Williams College. In Europe, Gibbs was inducted as honorary member of the London Mathematical Society in 1892 and elected Foreign Member of the Royal Society in 1897. He was elected as corresponding member of the Prussian and French Academies of Science and received honorary doctorates from the universities of Dublin, Erlangen, and Christiania (now Oslo). The Royal Society further honored Gibbs in 1901 with the Copley Medal, then regarded as the highest international award in the natural sciences, noting that he had been "the first to apply the second law of thermodynamics to the exhaustive discussion of the relation between chemical, electrical and thermal energy and capacity for external work." Gibbs, who remained in New Haven, was represented at the award ceremony by Commander Richardson Clover, the US naval attaché in London. In his autobiography, mathematician Gian-Carlo Rota tells of casually browsing the mathematical stacks of Sterling Library and stumbling on a handwritten mailing list, attached to some of Gibbs's course notes, which listed over two hundred notable scientists of his day, including Poincaré, Boltzmann, David Hilbert, and Ernst Mach. From this, Rota concluded that Gibbs's work was better known among the scientific elite of his day than the published material suggests. Lynde Wheeler reproduces that mailing list in an appendix to his biography of Gibbs. That Gibbs succeeded in interesting his European correspondents in his work is demonstrated by the fact that his monograph "On the Equilibrium of Heterogeneous Substances" was translated into German (then the leading language for chemistry) by Wilhelm Ostwald in 1892 and into French by Henri Louis Le Châtelier in 1899. ## Influence Gibbs's most immediate and obvious influence was on physical chemistry and statistical mechanics, two disciplines which he greatly helped to found. During Gibbs's lifetime, his phase rule was experimentally validated by Dutch chemist H. W. Bakhuis Roozeboom, who showed how to apply it in a variety of situations, thereby assuring it of widespread use. 
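The two quantitative results touched on here lend themselves to a brief numerical illustration: the Gibbs entropy formula quoted in the statistical-mechanics section above, and the phase rule in its standard form F = C − P + 2 (degrees of freedom F, components C, coexisting phases P). The following sketch is illustrative only; the function names and example values are assumptions made for this illustration, not anything taken from Gibbs's own writings.

```python
import math

# Illustrative sketch; function names and example values are assumed, not sourced.

def gibbs_entropy(probabilities, k_b=1.380649e-23):
    """Gibbs entropy S = -k_B * sum_i p_i * ln(p_i) of a discrete ensemble (J/K)."""
    return -k_b * sum(p * math.log(p) for p in probabilities if p > 0)

def phase_rule_degrees_of_freedom(components, phases):
    """Gibbs phase rule in its standard form: F = C - P + 2."""
    return components - phases + 2

# A uniform ensemble over W equally likely microstates recovers S = k_B * ln(W).
W = 4
print(gibbs_entropy([1.0 / W] * W))            # equals k_B * ln(4)

# One component (water) with solid, liquid and vapour coexisting: F = 0,
# so the triple point occurs at a single fixed temperature and pressure.
print(phase_rule_degrees_of_freedom(1, 3))     # 0
```

Setting all the probabilities equal reduces the first function to the Boltzmann form S = k_B ln W, which is the sense in which Gibbs's expression generalizes the earlier statistical interpretation of entropy.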
In industrial chemistry, Gibbs's thermodynamics found many applications during the early 20th century, from electrochemistry to the development of the Haber process for the synthesis of ammonia. When Dutch physicist J. D. van der Waals received the 1910 Nobel Prize "for his work on the equation of state for gases and liquids" he acknowledged the great influence of Gibbs's work on that subject. Max Planck received the 1918 Nobel Prize for his work on quantum mechanics, particularly his 1900 paper on Planck's law for quantized black-body radiation. That work was based largely on the thermodynamics of Kirchhoff, Boltzmann, and Gibbs. Planck declared that Gibbs's name "not only in America but in the whole world will ever be reckoned among the most renowned theoretical physicists of all times." The first half of the 20th century saw the publication of two influential textbooks that soon came to be regarded as founding documents of chemical thermodynamics, both of which used and extended Gibbs's work in that field: these were Thermodynamics and the Free Energy of Chemical Processes (1923), by Gilbert N. Lewis and Merle Randall, and Modern Thermodynamics by the Methods of Willard Gibbs (1933), by Edward A. Guggenheim. Gibbs's work on statistical ensembles, as presented in his 1902 textbook, has had a great impact in both theoretical physics and in pure mathematics. According to mathematical physicist Arthur Wightman, > It is one of the striking features of the work of Gibbs, noticed by every student of thermodynamics and statistical mechanics, that his formulations of physical concepts were so felicitously chosen that they have survived 100 years of turbulent development in theoretical physics and mathematics. Initially unaware of Gibbs's contributions in that field, Albert Einstein wrote three papers on statistical mechanics, published between 1902 and 1904. After reading Gibbs's textbook (which was translated into German by Ernst Zermelo in 1905), Einstein declared that Gibbs's treatment was superior to his own and explained that he would not have written those papers if he had known Gibbs's work. Gibbs's early papers on the use of graphical methods in thermodynamics reflect a powerfully original understanding of what mathematicians would later call "convex analysis", including ideas that, according to Barry Simon, "lay dormant for about seventy-five years". Important mathematical concepts based on Gibbs's work on thermodynamics and statistical mechanics include the Gibbs lemma in game theory, the Gibbs inequality in information theory, as well as Gibbs sampling in computational statistics. The development of vector calculus was Gibbs's other great contribution to mathematics. The publication in 1901 of E. B. Wilson's textbook Vector Analysis, based on Gibbs's lectures at Yale, did much to propagate the use of vectorial methods and notation in both mathematics and theoretical physics, definitively displacing the quaternions that had until then been dominant in the scientific literature. At Yale, Gibbs was also mentor to Lee De Forest, who went on to invent the triode amplifier and has been called the "father of radio". De Forest credited Gibbs's influence for the realization "that the leaders in electrical development would be those who pursued the higher theory of waves and oscillations and the transmission by these means of intelligence and power." Another student of Gibbs who played a significant role in the development of radio technology was Lynde Wheeler. 
Gibbs also had an indirect influence on mathematical economics. He supervised the thesis of Irving Fisher, who received the first PhD in economics from Yale in 1891. In that work, published in 1892 as Mathematical Investigations in the Theory of Value and Prices, Fisher drew a direct analogy between Gibbsian equilibrium in physical and chemical systems, and the general equilibrium of markets, and he used Gibbs's vectorial notation. Gibbs's protégé Edwin Bidwell Wilson became, in turn, a mentor to leading American economist and Nobel Laureate Paul Samuelson. In 1947, Samuelson published Foundations of Economic Analysis, based on his doctoral dissertation, in which he used as epigraph a remark attributed to Gibbs: "Mathematics is a language." Samuelson later explained that in his understanding of prices his "debts were not primarily to Pareto or Slutsky, but to the great thermodynamicist, Willard Gibbs of Yale." Mathematician Norbert Wiener cited Gibbs's use of probability in the formulation of statistical mechanics as "the first great revolution of twentieth century physics" and as a major influence on his conception of cybernetics. Wiener explained in the preface to his book The Human Use of Human Beings that it was "devoted to the impact of the Gibbsian point of view on modern life, both through the substantive changes it has made to working science, and through the changes it has made indirectly in our attitude to life in general." ## Commemoration When the German physical chemist Walther Nernst visited Yale in 1906 to give the Silliman lecture, he was surprised to find no tangible memorial for Gibbs. Nernst donated his \$500 lecture fee to the university to help pay for a suitable monument. This was finally unveiled in 1912, in the form of a bronze bas-relief by sculptor Lee Lawrie, installed in the Sloane Physics Laboratory. In 1910, the American Chemical Society established the Willard Gibbs Award for eminent work in pure or applied chemistry. In 1923, the American Mathematical Society endowed the Josiah Willard Gibbs Lectureship, "to show the public some idea of the aspects of mathematics and its applications". In 1945, Yale University created the J. Willard Gibbs Professorship in Theoretical Chemistry, held until 1973 by Lars Onsager. Onsager, who much like Gibbs, focused on applying new mathematical ideas to problems in physical chemistry, won the 1968 Nobel Prize in chemistry. In addition to establishing the Josiah Willard Gibbs Laboratories and the J. Willard Gibbs Assistant Professorship in Mathematics, Yale has also hosted two symposia dedicated to Gibbs's life and work, one in 1989 and another on the centenary of his death, in 2003. Rutgers University endowed a J. Willard Gibbs Professorship of Thermomechanics, held as of 2014 by Bernard Coleman. Gibbs was elected in 1950 to the Hall of Fame for Great Americans. The oceanographic research ship USNS Josiah Willard Gibbs (T-AGOR-1) was in service with the United States Navy from 1958 to 1971. Gibbs crater, near the eastern limb of the Moon, was named in the scientist's honor in 1964. Edward Guggenheim introduced the symbol G for the Gibbs free energy in 1933, and this was used also by Dirk ter Haar in 1966. This notation is now universal and is recommended by the IUPAC. In 1960, William Giauque and others suggested the name "gibbs" (abbreviated gbs.) for the unit of entropy calorie per kelvin, but this usage did not become common, and the corresponding SI unit joule per kelvin carries no special name. 
In 1954, a year before his death, Albert Einstein was asked by an interviewer who were the greatest thinkers that he had known. Einstein replied: "Lorentz", adding "I never met Willard Gibbs; perhaps, had I done so, I might have placed him beside Lorentz." Author Bill Bryson in his bestselling popular science book A Short History of Nearly Everything ranks Gibbs as "perhaps the most brilliant person that most people have never heard of". In 1958, USS San Carlos was renamed USNS Josiah Willard Gibbs and re-designated as an oceanographic research ship. ### In literature In 1909, the American historian and novelist Henry Adams finished an essay entitled "The Rule of Phase Applied to History", in which he sought to apply Gibbs's phase rule and other thermodynamic concepts to a general theory of human history. William James, Henry Bumstead, and others criticized both Adams's tenuous grasp of the scientific concepts that he invoked, as well as the arbitrariness of his application of those concepts as metaphors for the evolution of human thought and society. The essay remained unpublished until it appeared posthumously in 1919, in The Degradation of the Democratic Dogma, edited by Henry Adams's younger brother Brooks. In the 1930s, feminist poet Muriel Rukeyser became fascinated by Willard Gibbs and wrote a long poem about his life and work ("Gibbs", included in the collection A Turning Wind, published in 1939), as well as a book-length biography (Willard Gibbs, 1942). According to Rukeyser: > Willard Gibbs is the type of the imagination at work in the world. His story is that of an opening up which has had its effect on our lives and our thinking; and, it seems to me, it is the emblem of the naked imagination—which is called abstract and impractical, but whose discoveries can be used by anyone who is interested, in whatever "field"—an imagination which for me, more than that of any other figure in American thought, any poet, or political, or religious figure, stands for imagination at its essential points. In 1946, Fortune magazine illustrated a cover story on "Fundamental Science" with a representation of the thermodynamic surface that Maxwell had built based on Gibbs's proposal. Rukeyser called this surface a "statue of water" and the magazine saw in it "the abstract creation of a great American scientist that lends itself to the symbolism of contemporary art forms." The artwork by Arthur Lidov also included Gibbs's mathematical expression of the phase rule for heterogeneous mixtures, as well as a radar screen, an oscilloscope waveform, Newton's apple, and a small rendition of a three-dimensional phase diagram. Gibbs's nephew, Ralph Gibbs Van Name, a professor of physical chemistry at Yale, was unhappy with Rukeyser's biography, in part because of her lack of scientific training. Van Name had withheld the family papers from her and, after her book was published in 1942 to positive literary but mixed scientific reviews, he tried to encourage Gibbs's former students to produce a more technically oriented biography. Rukeyser's approach to Gibbs was also sharply criticized by Gibbs's former student and protégé Edwin Wilson. With Van Name's and Wilson's encouragement, physicist Lynde Wheeler published a new biography of Gibbs in 1951. Both Gibbs and Rukeyser's biography of him figure prominently in the poetry collection True North (1997) by Stephanie Strickland. In fiction, Gibbs appears as the mentor to character Kit Traverse in Thomas Pynchon's novel Against the Day (2006). 
That novel also prominently discusses the birefringence of Iceland spar, an optical phenomenon that Gibbs investigated. ### Gibbs stamp (2005) In 2005, the United States Postal Service issued the American Scientists commemorative postage stamp series designed by artist Victor Stabin, depicting Gibbs, John von Neumann, Barbara McClintock, and Richard Feynman. The first day of issue ceremony for the series was held on May 4 at Yale University's Luce Hall and was attended by John Marburger, scientific advisor to the president of the United States, Rick Levin, president of Yale, and family members of the scientists honored, including physician John W. Gibbs, a distant cousin of Willard Gibbs. Kenneth R. Jolls, a professor of chemical engineering at Iowa State University and an expert on graphical methods in thermodynamics, consulted on the design of the stamp honoring Gibbs. The stamp identifies Gibbs as a "thermodynamicist" and features a diagram from the 4th edition of Maxwell's Theory of Heat, published in 1875, which illustrates Gibbs's thermodynamic surface for water. Microprinting on the collar of Gibbs's portrait depicts his original mathematical equation for the change in the energy of a substance in terms of its entropy and the other state variables. ## Outline of principal work - Physical chemistry: free energy, phase diagram, phase rule, transport phenomena - Statistical mechanics: statistical ensemble, phase space, chemical potential, Gibbs entropy, Gibbs paradox - Mathematics: Vector Analysis, convex analysis, Gibbs phenomenon - Electromagnetism: Maxwell's equations, birefringence ## See also - Concentration of measure in physics - Thermodynamics of crystal growth - Governor (device) - List of notable textbooks in statistical mechanics - List of theoretical physicists - List of things named after Josiah W. Gibbs - Timeline of United States discoveries - Timeline of thermodynamics
1,326,400
LSWR N15 class
1,169,508,351
Class of 74 two-cylinder 4-6-0 locomotives
[ "2′C h2 locomotives", "4-6-0 locomotives", "Arthurian legend", "London and South Western Railway locomotives", "NBL locomotives", "Passenger locomotives", "Railway locomotives introduced in 1919", "Southern Railway (UK) locomotives", "Standard gauge steam locomotives of Great Britain" ]
The LSWR N15 class was a British 2-cylinder 4-6-0 express passenger steam locomotive designed by Robert Urie. The class has a complex build history spanning three sub-classes and eight years of construction from 1918 to 1927. The first batch of the class was constructed for the London and South Western Railway (LSWR), where they hauled heavy express passenger trains to the south coast ports and further west to Exeter. They were the second-largest 4-6-0 passenger locomotives on the Southern Railway, after the Lord Nelsons. They could reach speeds of up to 90 mph (145 km/h). Following the grouping of railway companies in 1923, the LSWR became part of the Southern Railway (SR) and its publicity department gave the N15 locomotives names associated with Arthurian legend; the class hence became known as King Arthurs. The chief mechanical engineer (CME) of the newly formed company, Richard Maunsell, modified the Urie locomotives in the light of operational experience and increased the class strength to 74 locomotives. Maunsell and his Chief Draughtsman James Clayton incorporated several improvements, notably to the steam circuit and valve gear. The new locomotives were built over several batches at Eastleigh Works and Glasgow, leading to the nicknames of "Eastleigh Arthurs", "Scotch Arthurs" and "Scotchmen" in service. The class was subjected to smoke deflection experiments in 1926, becoming the first British class of steam locomotive to be fitted with smoke deflectors. Maunsell's successor, Oliver Bulleid, attempted to improve performance by altering exhaust arrangements. The locomotives continued operating with British Railways (BR) until the end of 1962. One example, SR N15 class 777 Sir Lamiel, is preserved as part of the National Collection and can be seen on mainline railtours. ## Background Robert Urie completed his H15 class mixed-traffic 4-6-0 design in 1913 and the prototype was built in August 1914. It showed a marked improvement in performance over Dugald Drummond's LSWR T14 class 4-6-0 when tested on local and express passenger trains. The introduction of ten H15 engines into service coincided with the outbreak of the First World War, which prevented construction of further class members. Despite the interruption caused by the conflict, Urie anticipated that peacetime increases in passenger traffic would necessitate longer trains from London to the south-west of England. Passenger loadings on the heavy boat trains to the LSWR's ports of Portsmouth, Weymouth and Southampton had been increasing prior to the war, and were beginning to overcome the capabilities of the LSWR's passenger locomotive fleet. His response was to produce a modern, standard express passenger design similar to the H15. ## Design and construction Trials undertaken in 1914 with the H15 class prototype had demonstrated to Urie that the basic design showed considerable speed potential on the Western section of the LSWR from Basingstoke westwards, and could form the basis of a powerful new class of 4-6-0 express passenger locomotive with larger 6 ft 7 in (2.01 m) driving wheels. The LSWR required such a locomotive, which would need to cope with increasing train loads on this long and arduous route to the West Country. The result was the N15 class design, completed by Urie in 1917. It incorporated features from the H15 class, including eight-wheel double bogie tenders with outside plate frames over the wheels and exposed Walschaerts valve gear.
High running plates along the boiler were retained for ease of oiling and maintenance. Despite the similarities, the N15 class represented a refinement of the H15 template. The cylinders were increased in size to 22 in (560 mm) diameter by 28 in (710 mm) stroke, the largest used on a British steam locomotive at that time. The substantial boiler design was also different from the parallel version used on the H15, and became the first tapered type to be constructed at Eastleigh Works. Contrary to boiler construction practice elsewhere, where tapering began near the firebox, it was restricted to the front end of the N15's barrel to reduce the diameter of the smokebox, and consequently the weight carried by the front bogie. The design also featured Urie's narrow-diameter "stovepipe" chimney, a large dome cover on top of the boiler, and his "Eastleigh" superheater. ### "Urie N15s" The N15 design was approved by the LSWR management committee, though the order for construction was postponed until wartime control of raw materials was relaxed. Government approval was obtained in mid-1918, and Eastleigh Works began to produce the LSWR's first new locomotive class since 1914. The first locomotives, later known by crewmen as the "Urie N15s", were built in two ten-engine batches by the LSWR's Eastleigh Works between 1918–19 and 1922–23. Of the first batch, the prototype, No. 736, entered service on 31 August 1918, with four more appearing between September 1918 and April 1919. They shared a similar profile to Urie's H15 class with the use of flat-sided Drummond-style cabs with gently curving roofs. The double bogie tenders were outwardly similar in appearance to those used on the H15s, although strengthened during construction with extra internal bracing to hold 5,000 imperial gallons (22,700 L) of water. A shortage of copper delayed completion of Nos. 741–745, and the last of the batch emerged from Eastleigh in November 1919. After the running-in of Nos. 736–745 and an intensification of the LSWR timetable to the West Country, a second batch of ten was ordered in October 1921. They entered service over the period June 1922 – March 1923, and were numbered in the series 746–755. At Grouping in January 1923, the LSWR became part of the new Southern Railway, whose chief mechanical engineer (CME) was Richard Maunsell. Maunsell planned to introduce his own designs of express passenger locomotive, one of which was to become the future Lord Nelson class. Despite this, there was a short-term need to maintain existing services that required modification and expansion of Urie's N15 design. ### Maunsell's "Eastleigh Arthurs": Drummond rebuilds Maunsell's projected design of express passenger locomotive was not ready for introduction during the summer timetable of 1925, so a third batch of ten N15s was ordered for construction at Eastleigh. This batch was part of an outstanding LSWR order to rebuild 15 of Drummond's unsuccessful 4-cylinder F13, G14 and P14 class 4-6-0s into 2-cylinder H15 class locomotives. Only the five F13s were converted to H15s; the remaining ten G14 and P14s (Nos. 448–457, renumbered E448–E457) were rebuilt as N15s, implementing modifications to Urie's original design. The modifications are attributed to Maunsell's Chief Draughtsman James Clayton, who had transferred to Ashford railway works in 1914 from Derby Works.
They were the result of cooperation between the South Eastern and Chatham Railway (SECR) and the Great Western Railway (GWR) when Maunsell was seconded to the Railway Executive Committee during the First World War. The aim was to create a series of standard freight and passenger locomotives for use throughout Britain, and the collaboration meant that Clayton was privy to the latest GWR developments in steam design. These included streamlined steam passages, long-travel valves, and the maximisation of power through reduced cylinder sizes and higher boiler pressure. Maunsell initiated trials with Urie N15 No. 742 in 1924, which proved that better performance could be obtained by altering the steam circuit, valve travel and draughting arrangements. As a result, Clayton reduced the N15 cylinder diameter to 20.5 inches (520 mm) and replaced the safety valves with Ross pop valves set to 200 psi (1.38 MPa) boiler pressure. The Urie boiler was retained, though the Eastleigh superheater was replaced by a Maunsell type with 10 per cent greater superheating surface area. This was supplemented by a larger steam chest and an increased-diameter chimney casting specially designed for the rebuilds. It incorporated a rim and capuchon to control exhaust flow into the atmosphere. Valve events (the timing of valve movements with the piston) were also revised to promote efficient steam usage and the wheels were re-balanced to reduce hammerblow. When rebuilding was complete, only the numbers, smokebox doors with centre tightening handles and the flat-sided cabs remained of the G14 and P14 classes. The rebuilds retained their distinctive Drummond "watercart" tenders, which were modified with the removal of the complex injector feedwater heating equipment. The "watercart" tenders were of 4,300 imp gal (19,500 L) water and 5 long tons (5.1 t) coal capacity. The ten rebuilds became the first members of the King Arthur class upon entering service. ### "Scotch Arthurs" As the Drummond G14 and P14 4-6-0s were rebuilt to the N15 specification at Eastleigh, a lack of production capacity due to repair and overhaul meant that Maunsell ordered a further batch of 20 locomotives from the North British Locomotive Company in 1924. The company had under-quoted to gain the contract, which meant that production of the batch was rushed. The necessity to maintain an intensive timetable on the Southern Railway's Western section prompted an increase of the order to 30 locomotives (Nos. E763–E792). Their construction in Glasgow would gain them the "Scotch Arthurs" nickname in service. They were all delivered to the Southern Railway by October 1925, and featured the front-end refinements used on the Drummond rebuilds. The North British batch was built to the Southern's new composite loading gauge and differed from previous batches in having an Ashford-style cab based upon that used on the N class. Unlike the Drummond cab retained by Nos. 448–457 and E741–E755, the Ashford cab was of an all-steel construction and had a roof that was flush with the cab sides, allowing it to be used on gauge-restricted routes in the east of the network. It was inspired by the standard cab developed in 1904 by R. M. Deeley for the Midland Railway, and was one of a number of Midland features introduced by Clayton to the SECR and subsequently the Southern Railway. The smokebox door was revised to the Ashford pattern, which omitted the use of central tightening handles in favour of clamps around the circumference.
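The cylinder and boiler-pressure figures quoted above can be put into perspective with the conventional nominal tractive-effort formula used for two-cylinder locomotives, TE = 0.85 × p × d² × s / D (boiler pressure p in psi, cylinder diameter d and stroke s in inches, driving-wheel diameter D in inches). The sketch below is a rough illustration under that standard convention rather than a figure or formula taken from this article; it simply combines the 20.5 in × 28 in cylinders and 200 psi boiler pressure of the rebuilds with the 6 ft 7 in driving wheels mentioned earlier.

```python
# Illustrative sketch: the 85 per cent mean-effective-pressure convention and the
# function name are assumptions; the dimensions are those quoted in the text above.

def nominal_tractive_effort(cyl_diameter_in, stroke_in, boiler_psi, wheel_diameter_in):
    """Nominal tractive effort (lbf) of a simple two-cylinder steam locomotive."""
    mean_effective_pressure = 0.85 * boiler_psi   # conventional assumption
    return mean_effective_pressure * cyl_diameter_in ** 2 * stroke_in / wheel_diameter_in

driving_wheel = 6 * 12 + 7   # 6 ft 7 in expressed in inches
print(round(nominal_tractive_effort(20.5, 28, 200, driving_wheel)))   # about 25,300 lbf
```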
The batch was fitted with the Urie-designed, North British-built 5,000 imp gal (22,700 L) capacity double-bogie tenders. ### Maunsell's "Eastleigh Arthurs": second batch With the "Scotch Arthurs" in service, the Southern Railway had an ample fleet of express passenger locomotives for its Western section routes. As part of a process of fleet standardisation, the Operating Department expressed a desire to replace obsolescent locomotives on the Eastern and Central sections with the King Arthur class. In May 1925, a batch of 25 locomotives (Nos. E793–E817) based upon the Scotch Arthurs was ordered for construction at Eastleigh with smaller firebox grates and improved water heating surfaces. After the first 14 (Nos. E793–E806) were built, it was decided to discontinue construction in favour of Maunsell's new 4-cylinder Lord Nelson class design in June 1926. The Operating Department intended to equip Nos. E793–E807 with six-wheel, 4,000 imp gal (18,200 L) capacity tenders for use on the former SECR lines of the Eastern section. These were to replace Scotch Arthurs Nos. E763–E772 on boat train duties. This was because the 5,000 imp gal (22,700 L) tenders attached to Nos. E763–E772 were better suited to the longer routes of the Western section. The final ten engines (Nos. E808–E817) were for the former LBSCR routes of the Central section, where short turntables restricted tender size to the 3,500 imp gal (15,900 L) Ashford variety used on the N class. After the order was changed to the Lord Nelson class design, 14 N class tenders were fitted to Nos. E793–E806 for use on the Central section. The high draw-gear (the link between locomotive and tender) of the N class tenders necessitated modification to the frames beneath the cab. ### Naming the locomotives When the former Drummond G14 and P14 4-6-0s were rebuilt to Maunsell's N15 specification in February 1925, the Southern Railway decided to give names to all express passenger locomotives. Because of the railway's association with the West of England, the Public Relations Officer, John Elliot, suggested that members of the N15 class should be named after characters and places associated with the legend of King Arthur. When Maunsell was told of the decision to name the locomotives, he replied: "Tell Sir Herbert [Walker] I have no objection, but I warn you, it won't make any difference to the working of the engines". Walker was the General Manager of the Southern Railway, who had told Elliot that Maunsell's permission was required. The first G14 to be rebuilt, No. E453, was given the first name and christened King Arthur. The Urie locomotives (hitherto referred to as N15s rather than King Arthurs) were also given names connected with Arthurian legend and were referred to as "Urie Arthurs"; the Maunsell batches of N15s were nicknamed the "Eastleigh" and "Scotch Arthurs". ## Operational details The N15 class was intended to haul heavy expresses over the long LSWR mainlines between Waterloo, Weymouth, Exeter and Plymouth. Locomotives were changed at Salisbury before the upgrading of the South Western Mainline in 1922, after which fast running through to Exeter was possible. The Southern Railway's motive power re-organisation following the Grouping of 1923 saw the class allocated to sheds across the network and used on Bournemouth to Oxford cross-country trains.
Operations were expanded to the more restricted Central and Eastern section mainlines in 1925, and suitably modified class members hauled commuter and heavy boat trains from London Victoria to Dover Marine and expresses to Brighton. In 1931, No. E780 Sir Persant hauled the inaugural Bournemouth Belle Pullman train from Waterloo to Bournemouth West. In peacetime, the class was occasionally used on fast freights from Southampton Docks, although it was common to see them at the head of freight and troop trains during the Second World War. Ten "Urie Arthurs" were transferred to the London and North Eastern Railway (LNER) in October 1942, and were based at Heaton shed for use on freight and occasional passenger trains in the north east and southern Scotland. They returned to the Southern Railway in July 1943 after the introduction of United States Army Transportation Corps S160 class 2-8-0s into service. From 1945 the King Arthur class regularly deputised for Bulleid's new Pacifics, which were experiencing poor serviceability due to mechanical failures. The entire class came into British Railways ownership in 1948: they could be found in most areas of the Southern Region on medium-length expresses and stopping trains on the ex-LSWR mainline. ### Smoke deflector experiments In 1926, the N15 class became the first in Britain to be equipped with smoke deflectors, with several designs tested. Experiments were undertaken throughout 1926 and included the fitting of a curved plate above the smokebox of No. E753 Melisande to channel air from below the chimney to lift the exhaust above the locomotive when on the move. Nos. E450 Sir Kay and E783 Sir Gillemere had air scoops attached to the chimney, whilst E772 Sir Percivale was fitted with large, square German-type smoke deflectors. Finally, No. E453 King Arthur was fitted with small, rectangular smoke deflectors attached to the handrails on the smokebox sides. The experiments produced mixed results, and Maunsell requested the assistance of the University of London in staging wind tunnel tests. These resulted in a standard plate design (illustrated in the infobox), which was gradually fitted to the class from late 1927 onwards. ### Performance of the Urie batch and modifications Under LSWR ownership, the N15s were initially well received by crews, though the batch soon gained a reputation for poor steaming on long runs. Through running of the class into Exeter was stopped in favour of engine changes at Salisbury, and Urie attributed the problem to poor driving technique. A series of trial runs changed this assumption, and demonstrated that steam pressure gradually decreased on the flat. The trials also revealed that the robust construction of the motion produced the heaviest hammerblow of any British locomotive class, and had caused cracked frames on the test locomotive. Another criticism from locomotive crews concerned the exposed cab in bad weather, which necessitated the installation of a tarpaulin sheet over the rear of the cab and the front of the tender, restricting rearward vision. The 1921 Coal Miners' strike meant that two class members (Nos. 737 and 739) were converted to oil-burning. One of the modified locomotives subsequently caught fire at Salisbury shed, and both had reverted to coal firing by the end of the year. When the LSWR was amalgamated into the Southern Railway in 1923, Urie had done little to remedy the shortcomings of the N15s, and it fell to his successor to improve the class.
When Maunsell inherited the design as CME of the Southern Railway, he began trials using the weakest N15 (No. 742) in 1924. The results indicated that better performance could be obtained by altering the steam circuit, valve travel and draughting arrangements, although the first two recommendations were deemed too costly for immediate implementation by the Locomotive Committee. Eight extra King Arthur-type boilers were ordered from North British and fitted to N15s Nos. 737–742 by December 1925 in an effort to improve steaming. The remaining Urie boilers were fitted with standard Ross pop safety valves to ease maintenance. Maunsell also addressed draughting problems caused by the narrow Urie "stovepipe" chimney. The exhaust arrangements were modified on No. 737 using the King Arthur chimney design and reduced-diameter blastpipes. This proved successful, and all "Urie N15s" were modified over the period 1925–1929. The oil-burning equipment was refitted to Nos. 737 and 739 during the 1926 General Strike and removed in December of that year. Beginning in 1928, all but No. 755 had their cylinder diameter reduced from 22 inches (560 mm) to 21 inches (530 mm) when renewals were due, improving speed on flat sections of railway, but affecting their performance on the gradients west of Salisbury. No. 755 The Red Knight was modified in 1940 by Maunsell's successor, Oliver Bulleid with his own design of 21-inch (530 mm) cylinders and streamlined steam passages. This was married to a Lemaître multiple-jet blastpipe and wide-diameter chimney, allowing the locomotive to produce performances akin to the more powerful Lord Nelson class. Four other N15s were so modified with four more on order, though the latter were cancelled due to wartime shortages of metal. The soft exhaust of the Lemaître multiple-jet blastpipe precipitated an adjustment to the smoke deflectors on three converted locomotives, with the tops angled to the vertical in an attempt to improve air-flow along the boiler cladding. This failed to achieve the desired effect, and the final two modified locomotives retained the Maunsell-style deflectors. The final modifications to the "Urie N15s" involved the conversion of five locomotives (Nos. 740, 745, 748, 749 and 752) to oil-firing in 1946–1947. This was in response to a government scheme to address a post-war coal shortage. The oil tanks were fabricated from welded steel and fitted within the tender coal space. After initial problems with No. 740 Merlin were rectified, the oil-fired locomotives proved good performers on Bournemouth services. A further addition to the oil-fired locomotives was electric headcode and cab lighting, which was retained when the engines reverted to coal-firing in 1948. ### Performance of the Maunsell batches and modifications The improved front-end layout applied to the first batch of "Eastleigh Arthurs" (Nos. E448–E457) ensured continuous fast running on flat sections of track around London, although their propensity for speed was sometimes compromised over the hilly terrain west of Salisbury. The inside bearings of the Drummond "watercart" tenders proved problematic, as they were too small for the load carried and suffered from water ingress. The retention of the tall Drummond cab prevented use away from the Western section of the Southern Railway. Despite these problems, their operational reliability prompted the management to arrange the visit of No. E449 Sir Torre to the Darlington Railway Centenary celebrations in July 1925. No. 
E449 also recorded speeds of up to 90 mph (140 km/h) on the South West Mainline near Axminster in 1929. This proved that with the right components, Urie's original design could perform well. Despite the successful use of modified N15 components to rebuild Nos. E448–E457, the mechanically similar "Scotch Arthurs" proved disappointing when put into service from May 1925. The performance of those allocated to the Eastern section was indifferent, and failed to improve upon the double-headed ex-SECR 4-4-0s they were to replace. Reports of poor steaming and hot driving and tender wheel axleboxes were common from crewmen and shed fitters. After investigation, the problems were attributed to poor workmanship during construction as the North British Locomotive Company underquoted production costs to gain the contract. Defects were found in boiler construction across the batch, and necessitated six replacement boilers, re-riveting, re-fitting of tubes and replacement of firebox stays. The hot driving wheel axleboxes were caused by the main frames being out of alignment. A 1926 report suggested that all affected locomotives should be taken to Eastleigh for repair. Once repaired, the "Scotch Arthurs" proved as capable as the rest of the class in service. "Scotch Arthurs" Nos. E763–E772 received new tenders between 1928 and 1930 in a series of tender exchanges with the Lord Nelson and LSWR S15 classes. This ensured that they could exchange their Urie 5,000 imp gal (22,700 L) bogie tenders with the 4,000 imp gal (18,200 L) Ashford design for use on the shorter Eastern section routes. Whilst useful for the roster clerks at Battersea shed, any transfer to the Western section was hampered because of their shorter range. By 1937, all had reverted to the Urie 5,000 imp gal (22,700 L) bogie tenders, though Nos. E768–E772 were attached to new Maunsell flush-sided tenders with brake vacuum reservoirs fitted behind the coal space. These were again swapped with Maunsell LSWR-style bogie tenders fitted to the Lord Nelson class. The second batch of "Eastleigh Arthurs" displaced the ex-K class tanks and ex-LBSCR H2 "Atlantic" 4-4-2 locomotives on the Eastbourne and Bognor Regis routes respectively. They were well liked by crews and used on this part of the network until the arrival of electrification. No. E782 Sir Brian was used on the former Great Northern main line for performance trials against the SECR K and K1 class tanks following the Sevenoaks railway accident in 1927. The tests were supervised by the London and North Eastern Railway's CME, Sir Nigel Gresley, who commented that the class was unstable at high speeds. The instability was caused by motion hammerblow and exacerbated by irregularities in track-work. This caused excessive stress to the axleboxes and poor riding characteristics on the footplate. Despite this, the class benefited from an excellent maintenance regime. Maunsell's successor Oliver Bulleid believed that there was little need to improve draughting on this series. However, reports of poor steaming with No. 792 Sir Hervis de Revel gave him an opportunity to trial a Lemaître multiple-jet blastpipe and wide-diameter chimney on a Maunsell N15 in 1940. This did not enhance performance to the extent of No. 755 The Red Knight. Under British Railways ownership, the locomotive was re-fitted with the Maunsell chimney in March 1952 with no further problems reported. In another wartime experiment, Bulleid fitted No. 783 Sir Gillemere with three thin "stovepipe" chimneys in November 1940. 
These were set in a triangular formation to reduce visibility of exhaust from the air in response to attacks made by low-flying aircraft on Southern Railway trains. The "stovepipes" were reduced to two, producing a fierce exhaust blast that dislodged soot inside tunnels and under bridges. The experiment was discontinued in February 1941 and the locomotive re-fitted with a Maunsell King Arthur chimney. The last experiment was with spark-arresting equipment in response to lineside fires caused by poor quality coal. Nos. 784 Sir Nerovens and 788 Sir Urre of the Mount were fitted with new wide-diameter chimneys in late 1947. Test-trains showed mixed results and the trials were stopped in 1951 after improvements in coal quality and the fitting of internal smokebox spark-arrestors. ### Withdrawal The detail variations across the class meant the "Urie N15s" were placed into store over the winters of 1949 and 1952. The Maunsell King Arthur examples were easier to maintain, and the large number of modern Bulleid Pacific and British Railways Standard classes were able to undertake similar duties. The "Urie N15s" were brought into service during the summer months, although their deteriorating condition was demonstrated when No. 30754 The Green Knight was withdrawn with cracked frames in 1953. The slow running-down of the "Urie N15s" continued between 1955–1957, and several were stored prior to withdrawal. The last three were withdrawn from Basingstoke shed, with No. 30738 "King Pellinore" the final example to cease operation in March 1958. All were broken up for scrap, though their names were given to 20 BR Standard Class 5 locomotives allocated to the Southern Region between 1959–1962. The Maunsell King Arthur class also faced a decrease in suitable work on the Central and Eastern sections following the introduction of BR Standard class 5 and BR Standard Class 4 4-6-0s in 1955. The gradual withdrawal of the "Urie N15s", H15s and SR N15x classes presented an opportunity to replace the ageing Drummond "watercart" tenders fitted to Nos. 448–457 with Urie 5,000 imp gal (22,700 L) bogie tenders. This coincided with a 1958 programme to similarly change the 3,500 imp gal (15,900 L) Ashford tenders fitted to eight of the second batch "Eastleigh Arthurs". The class remained intact until the completion of the Eastern section electrification when 17 were made redundant in 1959. More withdrawals took place in 1960 when an increase in Bulleid Pacifics allocated to the Western section reduced available work. The ranks thinned to 12 in 1961, and further withdrawals reduced the class to one, No. 30770 Sir Prianius. The class outlasted the newer – but less numerous – Lord Nelson class by one month when No. 30770 was withdrawn from Basingstoke Shed in November 1962. ## Accidents and incidents - In 1940, No. 751 Etarre, No. 755 The Red Knight, No. 775 Sir Agravaine, and No. 776 Sir Galagars along with T14 No. 458 and N15X No. 2328 Hackworth suffered bomb damage during the air raid on Nine Elms shed. No. 458 was scrapped and the other engines were eventually repaired. - On 16 August 1944, 806 Sir Galleron was damaged by a V-1 flying bomb whilst pulling a passenger train in Upchurch; eight people were killed. The locomotive was eventually repaired and put back into service. - On 26 November 1947, locomotive No. 753 King Arthur was hauling a passenger train that was in a rear-end collision with another, the other being hauled by SR Lord Nelson Class 4-6-0 No. 
860 Lord Hawke, at Farnborough, Hampshire, due to a signalman's error. Two people were killed. - On 22 January 1955, locomotive No. 30783 Sir Gillemere collided with H15 No. 30485 at Bournemouth Central station after its driver misread signals. The locomotive was subsequently repaired; the H15 was condemned. - On 18 September 1962, locomotive No. 30770 Sir Prianius was hauling a newspaper train that caught fire between Knowle Junction and Botley. Four of the five carriages were destroyed. ## Livery and numbering ### LSWR and Southern Railway Under LSWR ownership, the "Urie N15s" were painted in Urie's LSWR sage green livery for passenger locomotives. This was distinct from Drummond's sage green because it was more olive in colour, and yellowed with cleaning and weathering. Black and white lining decorated the boiler bands and borders of the sage green panels. The lettering was in gilt: the initials "LSWR" located on the side of the tender, the locomotive number on the cabside. The first Southern livery continued that of the LSWR, though the lining separating the black border on tender and cab side panels was changed to yellow, and primrose yellow transfers showing "SOUTHERN" and the locomotive number were placed on the tender. An "E" prefix was located above the tender number (e.g. E749), denoting that the class was registered for maintenance at Eastleigh works. The gilt numerals on the cabside and tender rear were replaced by a cast oval plate with "Southern Railway" around the edge and the number located in the centre. Yellow numerals were painted onto the front buffer beam to ease identification. In February 1925, Maunsell developed a deeper green with black and white lining. This was applied to his new King Arthur class locomotives and the "Urie N15s" were similarly painted when overhauls were due. Wheels were olive-green with black tyres. From 1929 the "E" prefix was removed, and the cast numerals on the tender rear were replaced with yellow transfers (e.g. 749). In May 1938, after Bulleid's appointment as CME, No. 749 Iseult was trialled in bright unlined light green with yellow-painted block numerals replacing the cast numberplates. The tender was given two designs of lettering, with "SOUTHERN" on one side and the initials "SR" on the other. The Board of Directors disapproved and Bulleid repainted the locomotive in darker malachite green with black and white lining (this would later be applied to his Pacifics). The legend "SOUTHERN" in block-lettering remained on the tender, though the number was relocated to the cabside on one side and the smoke deflector on the other. Both were painted in a light "sunshine yellow". No. 749 was returned to Maunsell's green livery. Several variations of the Maunsell green, Urie sage green and Bulleid malachite green liveries were tried with black, white/black, and yellow lining, some sporting a green panel on the smoke deflectors. However, from 1942 to 1946, during the Second World War and its aftermath, members of the class under overhaul were turned out in unlined-black livery as a wartime economy measure, with green-shaded sunshine yellow lettering. The final Southern livery used from 1946 reverted to malachite green, with yellow/black lining, and sunshine yellow lettering. Some of the class (Nos. 782 and 800, Sir Brian and Sir Persant) did not receive this livery.
### British Railways British Railways gave the class the power classification of 5P after nationalisation in 1948. For the first 18 months the locomotives sported a transitional livery: Southern Railway malachite green with "BRITISH RAILWAYS" on the tender in sunshine yellow lettering. As each member of the class became due for a heavy general overhaul, they were repainted in the new standard British Railways express passenger livery of Brunswick green with orange and black lining from April 1949. Initially, the British Railways "Cycling Lion" crest was located on the tender, replaced from 1957 by the later "Ferret and Dartboard" crest. Numbering was initially a continuation of the Southern Railway system, though an 'S' prefix was added to denote a pre-nationalisation locomotive, so that No. 448 would become No. s448. As each locomotive became due for overhaul and received its new livery, the numbering was changed to the British Railways standard numbering system, in the series 30448–30457 for the first ten and 30736–30806 for the rest. ## Operational assessment and preservation After the poor steaming of the Urie batch was addressed, the class proved popular amongst crews, mechanically reliable and capable of high speeds. However, their heavy hammerblow at speed meant that they were prone to rough riding and instability. The two Maunsell batches with their streamlined steam passages and better draughting arrangements were superior in performance, and were a popular choice when Bulleid's locomotives were unavailable. Their use of standard parts considerably eased maintenance, and the fitting of different tender and cab sizes meant few operational restrictions for the class on mainline routes. The class gave many years of service, and was noted for its ability to "do the job". The electrification of the Eastern and Central sections and the increasing number of Bulleid Pacifics in service meant that the class lacked a suitable role under British Railways ownership. In spite of the reduction in work, high mileages were obtained, with No. 30745 Tintagel achieving 1,464,032 miles (2,356,131 km) in service. The decision to preserve a member of the class was made in November 1960. It was first intended to preserve the King Arthur class doyen No. 30453 King Arthur, and it was stored for a time after withdrawal in 1961 pending restoration to museum condition. However, it was decided to restore the preserved locomotive to as-built condition, and the lack of a suitable Drummond "watercart" tender precluded this option. No. 30453 was subsequently scrapped and it was decided to preserve one of the North British-built batch, No. 30777 Sir Lamiel, withdrawn in October 1961, instead. Sir Lamiel was named after a character in Thomas Malory's Le Morte d'Arthur, Sir Lamiel of Cardiff. This locomotive was restored to Maunsell livery as No. E777, and became part of the National Collection. It was restored to the later British Railways livery in 2003. As of 2022, No. 30777 was under overhaul for a return to service. ## Models Hornby Railways manufacture a model of the N15 in OO gauge. ## See also - List of King Arthur class locomotives
12,855,019
Meteorological history of Hurricane Dean
1,165,836,059
null
[ "Hurricane Dean", "Meteorological histories of individual tropical cyclones", "Tropical cyclones in 2007" ]
The meteorological history of Hurricane Dean began in the second week of August 2007 when a vigorous tropical wave moved off the west coast of Africa into the North Atlantic ocean. Although the wave initially experienced strong easterly wind shear, it quickly moved into an environment better suited for tropical development and gained organization. On the morning of August 13, the National Hurricane Center recognized the system's organization and designated it Tropical Depression Four while it was still more than 1,500 mi (2,400 km) east of the Lesser Antilles. A deep layered ridge to its north steered the system west as it moved rapidly towards the Caribbean and into warmer waters. On August 14 the depression gained strength and was upgraded to Tropical Storm Dean. By August 16, the storm had intensified further and attained hurricane status. Hurricane Dean continued to intensify as it tracked westward through the Lesser Antilles. Once in the Caribbean Sea, the storm rapidly intensified to a Category 5 hurricane on the Saffir-Simpson Hurricane Scale. Weakening slightly, it brushed the southern coast of Jamaica on August 19 as a Category 4 hurricane and continued towards the Yucatán Peninsula through even warmer waters. The favorable conditions of the western Caribbean Sea allowed the storm to intensify and it regained Category 5 status the next day before making landfall in southern Quintana Roo. Hurricane Dean was one of two storms in the 2007 Atlantic hurricane season to make landfall as a Category 5 hurricane and was the seventh most intense Atlantic hurricane ever recorded, tied with Camille and Mitch. After its first landfall, Hurricane Dean crossed the Yucatán Peninsula and emerged, weakened, into the Bay of Campeche. It briefly restrengthened in the warm waters of the bay before making a second landfall in Veracruz. Dean progressed to the northwest, weakening into a remnant low which finally dissipated over the southwestern United States. ## Formation On August 11, 2007, a vigorous tropical wave moved off the west coast of Africa, producing disorganized showers and thunderstorms. It encountered conditions favorable for gradual development, and on August 12 it gained organization and became a low. Strong upper-level easterly winds slowed development, but on August 13 the tropical wave gained enough organization that the National Hurricane Center designated it Tropical Depression Four. At this time it was centered about 520 mi (835 km) west-southwest of Cape Verde. The depression was already exhibiting persistent deep convection in the western portion of its circulation. It moved quickly westward, south of a deep layered ridge, escaping the easterly wind shear that had been slowing its development and moving over warmer waters. At 1500 UTC on August 14, the depression was upgraded to Tropical Storm Dean while still 1450 mi (2300 km) east of Barbados. Even as its convection waned slightly that afternoon, its intensity grew, and convection flared in the center that night. Dry air and cooler air inflow from the north slowed structural development; nevertheless, ragged bands began to form on August 15. By mid-morning, a rough banding eye had formed, and by the next morning a full eye developed. The storm was upgraded to Hurricane Dean at 0900 UTC August 16, 550 mi (890 km) east of Barbados. A strong ridge of high pressure continued to push the system west, towards the Caribbean Sea. 
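The intensity changes traced through the rest of this history are expressed as categories on the Saffir-Simpson scale, which is defined by thresholds of sustained wind speed. As a rough reference, a minimal lookup using the commonly published thresholds (in mph) might look like the sketch below; the code and the threshold values are an illustrative assumption, not material drawn from this article or from any NHC product.

```python
# Illustrative sketch; the thresholds are the commonly published Saffir-Simpson
# values in mph of sustained wind, not figures taken from this article.

def saffir_simpson_category(sustained_wind_mph):
    """Return the Saffir-Simpson category, or 0 for below hurricane strength."""
    thresholds = [(157, 5), (130, 4), (111, 3), (96, 2), (74, 1)]
    for minimum_wind, category in thresholds:
        if sustained_wind_mph >= minimum_wind:
            return category
    return 0

# Wind speeds quoted for Dean at various points in this history:
for wind_mph in (100, 145, 165, 175):
    print(wind_mph, "mph -> Category", saffir_simpson_category(wind_mph))
```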
That afternoon, convective banding and increasing upper-level outflow strengthened the storm to a Category 2 hurricane on the Saffir-Simpson Hurricane Scale. The eye disappeared briefly overnight, possibly as part of a diurnal fluctuation, but returned by the morning of August 17. ## Caribbean Sea and first landfall At 0930 UTC on August 17, the center of Hurricane Dean passed into the Caribbean Sea through the Saint Lucia Channel between the islands of Martinique and St. Lucia. The northern eyewall passed over Martinique where a weather station in the island's capital of Fort-de-France reported 13 in (33 cm) of rainfall. By this time the eyewall had closed, forming a distinct eye, and in an environment of low wind shear and increasing ocean temperature the hurricane began to intensify rapidly. Hurricane Dean strengthened to a Category 3 hurricane by the evening of August 17. Satellite imagery showed that a well defined eye and numerous cyclonically curved convective bands remained over the Lesser Antilles. That evening, another reconnaissance aircraft reached the hurricane and discovered that it had strengthened into a Category 4 hurricane, and by 0600 UTC on August 18, Dean reached Category 5 intensity for the first time with 165 mph (270 km/h) winds. The storm's wind radii increased in all quadrants as the storm grew in both intensity and size. At 0800 UTC August 18, Hurricane Dean passed directly over NOAA sea buoy 42059 which reported a significant wave height (average size of the largest 33% of waves) of 33 ft (10 m). On August 18, Hurricane Dean developed a double eyewall, indicating that an eyewall replacement cycle was taking place and causing short term fluctuations in intensity as Dean weakened back to a Category 4 hurricane. That afternoon the hurricane continued to improve its outflow, and its numerous spiral bands gave it a well defined satellite presentation. Hurricane Dean finished the eyewall replacement cycle early on August 19 with some trochoidal wobbles. On the morning of August 19, the storm remained slightly weakened from its peak strength. As a Category 4 hurricane with wind speeds between 140 mph (220 km/h) and 145 mph (230 km/h), the center of Hurricane Dean passed 90 mi (150 km) south of Haiti, and that evening passed 25 mi (40 km) south of Jamaica. Two weather stations on the island of Jamaica, one at Ingleside and the other at Morant Bay, both reported in excess of 13 in (33 cm) of rainfall. In contrast, the weather station at Les Cayes, Haiti recorded only 1.18 in (3 cm) of rainfall. Hurricane Dean intensified through the night of August 19 and reinforced its completed eyewall replacement cycle by forming a tight single-walled eye. At 0100 UTC August 20, the storm passed 120 mi (190 km) to the south of Sea Buoy 42056, which recorded a significant wave height of 36 ft (11 m). A concentric eyewall was briefly observed again on the morning of August 20, but it did not last long. In conditions of low wind shear, Hurricane Dean moved westward over waters with increasingly high heat content, and the storm exhibited a classic upper-tropospheric outflow pattern. The high pressure system over the southeastern United States continued to steer the storm west towards the Yucatán Peninsula. The eyewall became even better defined throughout the day. The cloud tops cooled, the minimum central pressure fell, and its winds increased to 160 mph (260 km/h), making Hurricane Dean a Category 5 hurricane once again. 
This time, it was less than 210 mi (335 km) from its first landfall. Although many of the convective bands were already located over the Yucatán Peninsula, Hurricane Dean continued to intensify until the eye made landfall. As the eye moved over Mexico near the town of Majahual in the Costa Maya area, the NHC estimated surface level winds of 175 mph (280 km/h), making Dean the first storm to make landfall as a Category 5 hurricane in the Atlantic basin since Hurricane Andrew in 1992. At the same time, a dropsonde reading from the hurricane's eye estimated a central pressure of 905 mbar, making Dean the third most intense landfalling Atlantic storm in history (after the Labor Day Hurricane of 1935 and Hurricane Gilbert of 1988) and tying Dean with Mitch as the eighth most intense hurricane ever recorded in the Atlantic basin. The landfall itself occurred in a sparsely populated area of the Costa Maya region of the Mexican state of Quintana Roo near 18.7 N 87.8 W at 0900 UTC August 21 and brought with it a storm surge of 12–18 ft (3.7–5.5 m). A weather station at Chetumal (the capital of Quintana Roo, Mexico) reported 6.65 in (17 cm) of rainfall during Hurricane Dean's landfall. As expected, the landfall caused significant weakening of the storm; the eye filled and the cold cloud-tops warmed. The land severely disrupted the storm's organization, and by the time Dean crossed the Yucatán Peninsula it had weakened to a Category 1 hurricane. ## Gulf of Mexico and demise Hurricane Dean emerged into the Bay of Campeche as a Category 1 hurricane on the afternoon of August 21. Its inner core was largely disrupted, so although a ragged eye reformed over the warm waters of the bay, the hurricane no longer had the structure to support its previous strength. Nevertheless, the warm waters of the bay proved conducive for some development and the eye contracted overnight, indicating that the hurricane was regaining structure. With better structure came stronger winds of 100 mph (160 km/h), and the storm was re-categorized as a Category 2 hurricane. The storm's strengthening pattern continued until Hurricane Dean made its second and final landfall at 1630 UTC August 22 near Tecolutla, Veracruz, just east of Gutiérrez Zamora and about 40 mi (65 km) south-southeast of Tuxpan. A weather station at Requetemu, San Luis Potosí, recorded 15.4 in (39 cm) of rainfall during the storm's second landfall. Dean weakened rapidly, losing its low level circulation within hours and its mid-level circulation the next day as it encountered the Sierra Madre Oriental mountain range. Its remnants passed over the mountains and into the eastern Pacific Ocean as a broad area of low pressure. Hurricane Dean's remnant low pressure system then drifted north into southern California, bringing thunderstorms to northern San Diego County, and more than 2 in (5 cm) of rain to Lake Wohlford. In Escondido almost 2 in (5 cm) of rain fell in 90 minutes. The remnant low pressure system weakened over western Arizona and southern California before finally dissipating on August 30. ## See also - List of Atlantic hurricanes - List of Category 5 Atlantic hurricanes
63,980,914
United States war plans (1945–1950)
1,172,606,473
Plans for a conflict with the Soviet Union
[ "20th-century military history of the United States", "Cold War", "Soviet Union–United States relations", "United States Department of Defense plans" ]
United States war plans for a conflict with the Soviet Union (USSR) were formulated and revised on a regular basis between 1945 and 1950. Although most were discarded as impractical, they nonetheless would have served as the basis for action had a conflict occurred. At no point was it considered likely that the Soviet Union or United States would resort to war, only that one could potentially occur as a result of a miscalculation. Planning was conducted by agencies of the Joint Chiefs of Staff, in collaboration with planners from the United Kingdom and Canada. American intelligence assessments of the Soviet Union's capabilities were that it could mobilize as many as 245 divisions, of which 120 could be deployed in Western Europe, 85 in the Balkans and Middle East, and 40 in the Far East. All war plans assumed that the conflict would open with a massive Soviet offensive. The defense of Western Europe was regarded as impractical, and the Pincher, Broiler and Halfmoon plans called for a withdrawal to the Pyrenees, while a strategic air offensive was mounted from bases in the United Kingdom, Okinawa, and the Cairo-Suez or Karachi areas, with ground operations launched from the Middle East aimed at southern Russia. By 1949, priorities had shifted, and the Offtackle plan called for an attempt to hold Soviet forces on the Rhine, followed, if necessary, by a retreat to the Pyrenees, or mounting an Operation Overlord–style invasion of Soviet-occupied Western Europe from North Africa or the United Kingdom. Despite doubts about its viability and effectiveness, a strategic air offensive was regarded as the only means of striking back in the short term. The air campaign plan, which steadily grew in size, called for the delivery of up to 292 atomic bombs and 246,900 short tons (224,000 t) of conventional bombs. It was estimated that 85 percent of the industrial targets would be completely destroyed. These included electric power, shipbuilding, petroleum production and refining, and other essential war fighting industries. About 6.7 million casualties were anticipated, of whom 2.7 million would be killed. There was conflict between the United States Air Force and the United States Navy over naval participation in the strategic bombing effort, and whether it was a worthwhile use of resources. The concept of nuclear deterrence did not figure in the plans. ## Background In 1944, at the height of World War II, the Joint Chiefs of Staff (JCS) forecast that the war would result in the United States and the Soviet Union becoming the leading world powers. While Britain was still an important power, its position was greatly diminished. On 5 February, the JCS produced an assessment of Soviet post-war intentions. It was expected that the Soviet Union would demobilize most of its forces to facilitate the reconstruction of its economy, which had been devastated by the war, and was not expected to recover before 1952. Until then, the Soviet Union would seek to avoid conflict, but for its own security it would attempt to control border states. Even after demobilization, the capabilities of the Soviet Union would be formidable. American intelligence reports estimated that it would retain over 4,000,000 troops under arms, with 113 divisions. Another 84 divisions would be available from satellite nations. During World War II, the United States mobilized the largest armed forces in American history. 
The United States Army, which at the time included the United States Army Air Forces (USAAF), had a strength of 8.3 million, of which 3 million were deployed in the European Theater of Operations, and the United States Navy and United States Marine Corps had a combined strength of 3.8 million. By early 1945, plans called for 21 divisions (about 1,000,000 personnel) to be redeployed from Europe to the Pacific via the United States for the invasion of Japan. About 400,000 personnel were to remain in Europe on military occupation duties, and the Army would release 2 million personnel from active duty under a points system whereby soldiers were awarded points for length of service, length of overseas service, children and decorations. Those with the highest scores had priority for separation from the Army. By the time of the surrender of Japan in August 1945, 581,000 Army personnel had been separated. Under overwhelming public and political pressure, the demobilization of United States armed forces after World War II proceeded much faster than originally planned. By 30 June 1946, the strength of the Army had declined to 1,434,000, the Navy to 983,000 and the Marine Corps to 155,000; by 30 June 1947, the Army was down to 990,000, the Navy to 477,000 and the Marine Corps to 82,000, and only one division remained in Europe. Meanwhile, the economies of European nations were still recovering from the war, and their ability to maintain forces was constrained. The Joint Staff Planners (JSP) consulted with Vannevar Bush, the Chairman of the Joint Committee on New Weapons and Equipment, and Major General Leslie R. Groves, the director of the Manhattan Project, on the potential of new weapons then under development, in particular nuclear weapons and long-range missiles. Bush doubted that it was possible to build a missile like the German V-2 rocket of World War II, but with an extended range of 2,000 nautical miles (3,700 km). Even if the rocket was possible, it would still require allied overseas bases to reach the Soviet Union. Groves was more optimistic; while he agreed that long-range missiles were not technologically feasible in 1945, he thought that they might be in the next ten to twenty years. As for atomic bombs, he recommended that a stockpile be built up but warned that the destruction of a nation's industrial capacity would not affect the outcome of a war. The JCS fashioned a defense posture and war plans oriented toward a single contingency: an all-out global conflict. They relied mainly on strategic bombardment with nuclear weapons as the country's principal deterrent and first line of defense. This strategy was regarded as the most practical, effective, and affordable form of defense, and it laid the foundation for a series of war plans developed over the next few years for dealing with a possible conflict with the Soviet Union. 
The Rhine was a major barrier, but it was anticipated that it could not be held for long, forcing US and British forces to retreat to the Pyrenees. The planners believed that the Italian, Iberian, Danish and Scandinavian peninsulas could be held against superior numbers, but expected that the British and French forces would concentrate on defending their homelands, and would be unwilling to divert the resources required to hold Scandinavia, although they might assist in attempting to hold Spain and Italy. The Soviet drive into Western Europe would likely be accompanied by one into the Middle East. If the Soviets also attacked in the Far East, US forces would fall back to Japan. The objective of the United States forces would be to hold the British Isles, North Africa, India, China and Japan, from whence strategic air operations could be launched, while the Navy blockaded the Soviet Union's ports. The concept of launching a second Operation Overlord was rejected; it would involve fighting the Soviet forces where they were strongest, and far from their homeland. The possibility of recapturing Scandinavia was considered, but the logistical difficulties were great. The preferred course of action was therefore to attack the Dardanelles and Bosporus, and invade the Soviet Union via the Black Sea. The plan did not specifically call for the use of nuclear weapons, although it noted that bases within Boeing B-29 Superfortress range of key targets were lacking. At the time the B-29 was the USAAF's most advanced long-range bomber. There was no concept for operations beyond the initial counter-offensive, and the logistical implications of deploying a large force in the Middle East were not explored. Nonetheless, on 8 July 1946 the Joint Strategic Planners accepted Pincher as the basis for planning. The JWPC and the Joint Intelligence Committee (JIC) then produced a series of regional studies based upon Pincher. The first was Broadview, which was issued on 5 August, and revised on 24 October 1946. It dealt with the defense of North America. Long-range weapons meant that the heartland could no longer be considered invulnerable. In the immediate future, the Soviet Union was capable of conducting one-way air strikes and commando raids, and submarines could attack shipping and lay mines in American waters. The major threat, though, was seen as sabotage and subversion by Soviet agents. After 1950, there was a possibility the Soviet Union would develop nuclear weapons, and the long range aircraft or missiles to deliver them against cities in the United States. The possibility of an invasion of Alaska was also considered. To counter these threats, the United States would need mobile ground forces to counter raids, an air warning system, and antisubmarine forces. Plan Griddle, which was issued on 15 August 1946, dealt with the defense of Turkey. The Turkish Army was large, with 48 divisions, but it lacked modern equipment. The study estimated that the Soviet Union could deploy up to 110 divisions against Turkey without compromising operations elsewhere. A two-pronged advance was envisaged, with an attack on Eastern Thrace from Soviet-aligned Bulgaria, coupled with one into Anatolia from Soviet Transcaucasia. Airborne and amphibious forces could strike at both sides of the Dardanelles and Bosporus. From Turkey, Soviet forces could advance into Iraq and Iran. It was estimated that Turkey could hold out for at most 120 days before Turkish forces had to fall back to the western Anatolian coast. 
Nonetheless, Turkey figured large in American strategy as a potential base for air attacks on the Soviet Union, and blunting the Soviet drive into the Middle East. The study therefore called for increased military aid to Turkey, and development of Turkish airbases and ports. This led to the next study, which was issued on 2 November 1946. It was codenamed Caldron, and dealt with the Middle East. While the Soviet Union produced ample oil for its own peacetime needs, the planners felt that it had insufficient reserves for a major conflict, and therefore that seizing the oil resources of the Middle East would be a Soviet priority. Conversely, this would deny them to the Allies. The region was also considered as an important staging area for an attack on the Soviet Union, so it was expected that the Soviets would move to preclude that. Up to 85 Soviet divisions would be available for operations in the Middle East, where the British would have five divisions to stop them. Soviet forces were expected to reach Palestine within 60 days. About 14 Allied divisions could be concentrated in Egypt. Cockspur, a study of the threat to Italy, was dated 20 December 1946. It envisaged an attack on Italy by Yugoslav forces while Soviet forces concentrated on overrunning Germany, although once this was accomplished Soviet forces could then invade Italy from the north. The Allies had the option of trying to defend northern Italy, which was regarded as impractical, or conducting a fighting retreat. This raised the prospect of Allied forces there being overrun or destroyed, leaving nothing for the defense of Sicily. The study therefore recommended that the best course of action was to immediately withdraw to Sicily. Since the Pincher plan called for a withdrawal to the Pyrenees, the Iberian peninsula assumed considerable importance. Accordingly, Drumbeat, which was issued on 4 August 1947, dealt with its defense. It was estimated that Spain could mobilize 22 divisions in sixty days, but the quality of the Spanish Army was regarded as only fair. Portugal could mobilize another two divisions, and there were 5,000 British troops in Gibraltar. The JWPC did not believe that the Soviet Union would attack Spain, but the possibility was considered. It was estimated that as many as 20 Soviet divisions could reach the Pyrenees by D plus 45, and 50 by D plus 90. Nonetheless, the planners assessed that there was a chance that the Allies could hold Spain. The Far East was considered a theater of secondary importance. Plan Moonrise, which covered it, was presented to the Joint Chiefs of Staff on 29 August 1947. It was estimated that the Soviet Union could deploy about 45 divisions in the region. Given the limitations of the Soviet Pacific Fleet, the Soviet Union's main target would be China. The Nationalist Chinese Army was large, but also largely ineffective, and the Soviet forces would be augmented by 1.115 million Chinese Red Army troops and 2 million militia, and perhaps three divisions from Mongolia, a Soviet satellite state. The first phase of a Soviet attack was expected to target the Port Arthur area. Manchuria would soon be overrun and Beijing would fall in about ten days. The planners estimated that Soviet forces could reach the Yellow River by D plus 90, and Nanjing and Hankou in another three weeks. An advance to the Yangtze River was not anticipated. At the time, US forces still garrisoned Korea, but the plan called for the American forces there to be withdrawn to Japan. 
## Broiler (1947) Although the Pincher studies were not accepted as a war plan by the Joint Chiefs of Staff, on 16 July 1947 the JWPC informed the Joint Staff Planners that sufficient progress had been made to formulate one. On 29 August, the Joint Strategic Plans Committee (JSPC), which had replaced the JSP with the enactment of the National Security Act of 1947, instructed the Joint Strategic Plans Group (JSPG) to develop one based on Pincher, with the assumptions that a war would occur in 1948, that the United States would be allied with Britain and Canada, and that atomic weapons would be used. The increased reliance on atomic weapons represented an important change in emphasis. The resultant war plan was codenamed Broiler. Its starting point was estimates of the available strength of the US forces furnished by the three services; assessments of deficiencies would be based upon the plan at a later stage. The JSPG also drafted a longer-range war plan based on Broiler called Bushwacker, for a war starting on 1 January 1952, and another codenamed Charioteer, for a war in 1955 that assumed that Western Europe had already been overrun and a strategic air campaign was called for. The planners had no political guidance as to what the ultimate objective of the war would be, so it was assumed that it would be to drive the Soviet Union back to its 1939 borders. The same scenario as Pincher was envisaged, and the Joint Intelligence Staff continued to credit the Soviet Union with substantial capabilities: it could mobilize as many as 245 divisions. Of these, 120 could be deployed in Western Europe, 85 in the Balkans and Middle East, and 40 in the Far East. This gave it the capability of overrunning most of Europe in 45 days. In the longer term, the Soviet Union was expected to possess not just numerical superiority but technological equality as well. It was anticipated that the Soviet Union would have developed nuclear weapons by 1952, and long-range bombers to deliver them to targets in the United States by 1956. To secure the United States, Greenland and Iceland would be occupied. The development of an aerial refuelling capability would permit B-29 and Boeing B-50 Superfortress aircraft (the latter an improved version of the B-29) to attack twenty major urban areas in the Soviet Union. B-29s were converted to Boeing KB-29 Superfortress aerial tankers, but the first planes were not delivered until late 1948; 77 would be in service by May 1950. Major overseas bases would be established in the United Kingdom, Okinawa and Karachi; the Cairo-Suez and Basra areas were rejected as indefensible. The JSPG admitted that Karachi was far from ideal as a base, but at least it was defensible. The American mobilization plan, JCS 1725/1, submitted on 13 February 1947, was based on the assumption that nuclear weapons would not be used. It called for an Army of 13 divisions at the outbreak of hostilities, which would be increased to 45 divisions in a year, and 80 divisions in two years. This was similar to what had been achieved in World War II. In July the JWPC submitted a plan based on the use of nuclear weapons. On the assumption that between 100 and 200 nuclear weapons would be available, it called for 34 atomic bombs to be dropped on 24 Soviet cities; seven would be dropped on Moscow, three on Leningrad and two each on Kharkov and Stalingrad. It was reckoned that this would do massive damage to the Soviet Union's industries, and would kill or injure about one million of its citizens. 
The damage might well cause the Soviet Union to sue for peace. The assumption that a hundred atomic bombs were available was not correct. The number of bombs in the stockpile was a closely guarded secret in 1947. President Harry S. Truman was shocked when he was informed how small the stockpile really was. By June 1948, components for about fifty Fat Man and two Little Boy bombs were on hand. These were not bombs, but components that had to be assembled by specially trained Armed Forces Special Weapons Project assembly teams known as special weapons units. A well-trained team could assemble a bomb in two days, but the teams that had assembled the bombs during the war had returned to civilian life. The first United States Army unit was formed in August 1947, followed by a second in December, and a third in March 1948. Bomb components would have to be delivered to the assembly teams at forward bases by transport aircraft. In 1948, training of a United States Navy special weapons unit began, as the Navy foresaw delivering Little Boy nuclear weapons from its Midway-class aircraft carriers with Lockheed P2V Neptune and North American AJ Savage bombers. An additional Army unit was created in May 1948, and two United States Air Force units in September and December 1948. The Air Force gradually became the agency most concerned with the delivery of nuclear weapons, and by the end of 1949 it had twelve special weapons units, with another three in training, while the Army had four, and the Navy three, one for each of the three Midway-class carriers. The Navy's claim to a role in strategic bombing caused friction with the Air Force. Only Silverplate B-29 bombers were capable of delivering Fat Man nuclear weapons, and of the 65 Silverplate B-29s that had been made, only 32 were still operational at the start of 1948, all of which were assigned to the 509th Bombardment Group, which was based at Roswell Army Airfield in New Mexico. Trained crews were also in short supply; at the beginning of 1948 only six crews were qualified to fly atomic bombing missions, although enough personnel had been trained to assemble an additional fourteen crews in an emergency. Up to 20 percent of the target cities were beyond the 3,000-nautical-mile (5,600 km) range of the B-29, requiring a one-way mission, which would expend the crew, bomb and aircraft. The Convair B-36 Peacemaker, with a range of 4,000 nautical miles (7,400 km), was in the process of being introduced to service in 1948, but was not atomic-capable. There were also doubts about the ability of the B-29 to penetrate Soviet air space; as a propeller-driven bomber it was no match for Soviet jet fighters, even at night. The forced reliance on nuclear weapons represented an important doctrinal change. During World War II, the USAAF had devastated Axis cities, but had clung to the doctrine of precision bombing even as it had drifted away in practice to area bombing; the latter had now become doctrine. One reason for this was the paucity of intelligence on the precise location of the Soviet Union's industrial facilities. It was hoped that with the power of atomic bombs, just finding the right cities would be good enough. However, this was far from certain. In January 1949, Lieutenant General Curtis LeMay, who had assumed command of the Strategic Air Command (SAC) in October 1948, ordered a practice attack on Wright-Patterson Air Force Base as an exercise. 
A similar exercise had been conducted in May 1947, when 101 B-29s were ordered to attack New York; 30 had not left the ground due to mechanical problems. This time, the crews were ordered to attack at a combat altitude of 30,000 feet (9,100 m) rather than the customary 10,000 to 15,000 feet (3,000 to 4,600 m), which was warmer and did not require cabin pressurization or the use of oxygen masks, and at night, using radar bombing techniques. They were given 1938 maps of Dayton, Ohio, which, while old, were better than the ones they had of the Soviet Union. The high altitude and cold temperatures took their toll on aircraft and crews alike. Many sorties were cancelled due to freezing rain, and the radar had difficulty locating targets due to ground clutter and thunderstorm activity. Of the 303 simulated attacks, two-thirds fell more than 7,000 feet (2,100 m) from the target, and the average error was 10,000 feet (3,000 m). The atomic bombs of the era would have left the target unscathed. When the Joint Logistics Committee (JLC) studied the plan, it assessed that the Mediterranean line of communications to the Cairo-Suez area would require 912 ships after six months. If the Mediterranean were closed, the longer route around the Cape of Good Hope and the Red Sea would call for 1,042 ships. Over two years the need would rise to 2,252 and 3,848 ships respectively. On 2 September 1947, after a more detailed examination of requirements, the figure for support via the Red Sea after six months was raised from 1,042 to 1,788 ships. The JLC calculated that it would take 16 months to reactivate so many mothballed cargo ships. Allied forces would require 32,360 short tons (29,360 t) of supplies per day, but the combined capacity of the Red Sea ports was only 26,400 short tons (23,900 t). A 14 October 1947 assessment of aircraft requirements for the first twelve months came to 91,332 aircraft of all types, but a month later the Munitions Board (the successor to the Army-Navy Munitions Board under the National Security Act of 1947) reported that this was unrealistic. The JLC assessed the mobilization requirements as unrealistic too, as they would require 300,000 men per month to be inducted, which would put enormous strain on the training and logistical infrastructure. On 23 January 1948, the Munitions Board reported that Broiler and Charioteer called for resources that were not available. It was estimated that the stockpiling of equipment and activation of standby munitions plants would require \$139.6 million in fiscal year 1949, but only \$37.6 million was available. It recommended that a more realistic war plan be drafted. The JSPC presented Broiler to the Joint Chiefs of Staff on 10 March 1948. A slightly modified version, codenamed Frolic, was forwarded to the Secretary of Defense, James Forrestal. Although Broiler was accepted as an emergency war plan, all the Joint Chiefs had reservations about it. The Chief of Staff to the Commander in Chief, Fleet Admiral William D. Leahy, did not like the reliance on the use of nuclear weapons when it was uncertain that their use would be authorized. The Chief of Naval Operations, Admiral Louis E. Denfeld, did not agree with the concept of abandoning Western Europe, which he argued was contrary to the foreign policy and national objectives of the United States. He contended that a better strategy would be to build up US forces in Europe to the point that a stand could be made on the Rhine. 
## Halfmoon (1948) By 1948, the United States had become enmeshed in great power politics. At the same time, severe limitations on defense spending created an ever-widening gap between capabilities and obligations. Western Europe remained weak and divided, and China was wracked by the Chinese Civil War. Planners from the United States, Britain and Canada met in Washington, D.C., from 12 to 21 April 1948, and they drew up an outline emergency war plan based on Broiler. This resulted in a new plan called Halfmoon (renamed Fleetwood in August 1948). The number of countries assumed to be on the Allied side was increased. It was assumed that the British Commonwealth, the Western Union countries (France, Belgium, the Netherlands and Luxembourg), and the entire Western Hemisphere would be allies of the United States, and that Turkey, Spain, Norway, Iraq, Iran, Pakistan, Afghanistan, Saudi Arabia, Egypt, Jordan, Syria, Lebanon, and Yemen would become allies if attacked by the Soviet Union. At the insistence of the British that it was feasible to hold the Cairo-Suez area, a base there was substituted for the one in Karachi, although the Americans retained the latter as a backup. A strategic air offensive would be launched with B-36 bombers from the United States and B-29 and B-50 bombers based in the United Kingdom, Cairo-Suez and Okinawa. Halfmoon assumed that atomic bombs would be used from the outset by the United States, and by the Soviet Union too once it had developed them. US intelligence (incorrectly) assessed that this would not occur in 1949. In the absence of adequate conventional forces, the planners felt that they had no alternative. The President, Harry S. Truman, was briefed on Halfmoon on 6 May 1948, and he expressed misgivings. He asked Leahy to prepare an alternative plan to "resist a Russian attack without using atomic bombs for the reason that we might not have them available either because they might at that time be outlawed or because the people of the United States might not at the time permit their use for aggressive purposes." The Secretary of the Army, Kenneth C. Royall, was particularly disturbed by Truman's objection, and circulated a memo on 19 May calling for a review of national policy regarding the use of nuclear weapons. He raised the matter at a National Security Council (NSC) meeting the next day chaired by the Secretary of State, George C. Marshall, who considered that Truman's policy of resisting the Soviet Union without the means to do so was "playing with fire while we have nothing with which to put it out." In July Forrestal told the Joint Chiefs to ignore the President's request for an alternative plan. The policy that Royall called for was drafted by the Air Force in July, updated with minor revisions in early September, and adopted by the NSC as NSC 30 on 16 September. It stated that:

> 12\. It is recognized that, in the event of hostilities, the National Military Establishment must be ready to utilize, promptly and effectively all appropriate means available, including atomic weapons, in the interests of national security and must therefore plan accordingly. 13. The decision as to the employment of atomic weapons in the event of war is to be made by the Chief Executive when he considers such decision to be required.

Thus, after three years, the inclusion of nuclear weapons in war plans was officially authorized. 
While the president remained the sole authority on their use, the selection of targets and the circumstances in which they would be used were in the hands of the planners. An updated version of Halfmoon, called Trojan, was issued on 28 January 1949. This contained an annex detailing the strategic air offensive. This would target 70 Soviet cities with 133 nuclear weapons, of which eight would be dropped on Moscow and seven on Leningrad. The main change from Fleetwood was the addition of Greece, Italy, Iceland, Ireland, the Philippines and Switzerland to the list of allies, while at the same time dropping the assumption that Arab countries would be on the Allied side, due to the deterioration of relations with the United States in the wake of the 1948 Arab–Israeli War and the creation of the state of Israel. The JLC used Halfmoon as the basis for a new mobilization plan called Cogwheel in response to a request from the Secretary of Defense to provide details that could be used by the Munitions Board as a basis for industrial mobilization. Cogwheel detailed the requirements of the war plan for the first two years, assuming a start of hostilities on 1 July 1949. The only point of disagreement among the three services concerned the construction of additional aircraft carriers; the Army and Air Force believed that the Navy had sufficient on hand or in mothballs, and that the additional carriers could not be completed in two years, by which time the Army and Air Force would have completely mobilized. This was submitted on 1 September 1948. On 6 December the Joint Chiefs of Staff ordered the three services to prepare revised mobilization plans. Before this could occur, the Munitions Board submitted its assessment of Cogwheel. It concluded that aircraft production would be 60 percent of requirements by the end of the first year. Further, it was assessed that demand for raw materials such as copper and aluminum would exceed supply, that the demands for munitions exceeded the capacity to produce them, and that the call for manpower exceeded the ability of the Selective Service System to process the men required. The Munitions Board therefore decided to base its planning on 50 percent of the requirements of Cogwheel. On 4 October 1948, Denfeld told the Senate Armed Services Committee:

> The unpleasant fact remains that the Navy has honest and sincere misgivings as to the ability of the Air Force successfully to deliver the [atomic] weapon by means of unescorted missions flown by present-day bombers, deep into enemy territory in the face of strong Soviet air defenses, and to drop it on targets whose locations are not accurately known.

The Joint Chiefs of Staff approved Trojan on 28 January 1949, but recognized that it fell short of a war plan that could be implemented. General Omar N. Bradley, the Chief of Staff of the United States Army, admitted that the Army had serious deficiencies of both personnel and equipment that it was unable to correct due to budget limitations. Similarly, Denfeld reported that the Navy did not have the resources to carry out its part, and the Chief of Staff of the United States Air Force, General Hoyt Vandenberg, stated that the Air Force did not have the means to conduct Trojan. The Joint Chiefs of Staff estimated that \$29 billion was required for fiscal year 1950, but the administration would only support \$17 to \$18 billion. 
## Offtackle (1949) On 25 February 1949, the acting Chairman of the Joint Chiefs of Staff, General of the Army Dwight D. Eisenhower, issued a directive providing more guidance for the strategic planners. He accepted that the Rhine could not be held with the available forces, but wanted a return to Western Europe at the earliest possible date. Over the next seven months, a new plan called Offtackle was drawn up. This was followed by a second round of planning conferences with British and Canadian representatives from 26 September to 4 October. For the first time, some political guidance was available from the National Security Council in the form of NSC 20/4. It stated that it was not the policy of the United States to initiate war with the Soviet Union, but that a conflict could result from a miscalculation on the part of the Soviet Union, such as an underestimation of American resolve. In this, it affirmed an assumption that had already been built into the war plans from Pincher on. It also informed the Joint Chiefs that in the event of war with the Soviet Union, the United States would not be required to force an unconditional surrender or to conduct an occupation. Like previous war plans, Offtackle only dealt with the opening stages of the war and not with the concluding stages or post-conflict issues. The strategic air campaign outlined in Offtackle was even more ambitious than that of Trojan. The campaign called for the delivery of 292 atomic bombs and 246,900 short tons (224,000 t) of conventional bombs. It was estimated that 85 percent of the industrial targets would be completely destroyed. These included electric power, shipbuilding, petroleum production and refining, and other essential war fighting industries. Moreover, it was hoped that the campaign would not just cripple Soviet industry, but loosen the control of the government over the people, undermine the determination to prosecute the war, disrupt mobilization, and retard the advance of Soviet ground forces into Western Europe. However, SAC lacked the resources to implement the plan. With its available aircraft, it could carry out only 2,000 sorties in the first two months, far short of the 6,000 called for in the plan. Supporting them would require 360 Douglas C-54 Skymaster sorties or their equivalent, but only 260 were allocated to SAC. Spare parts for the B-50s were in short supply, and the amount of avgas in the war reserve was insufficient. The first steps towards developing an alliance of western countries for an organized and coordinated defense came with the formation of the Western Union on 17 March 1948. This was a mutual defensive alliance consisting of the United Kingdom, France, Belgium, the Netherlands, and Luxembourg. It was followed by the formation of the North Atlantic Treaty Organization (NATO) on 4 April 1949. On 6 October 1949, Truman signed into law the Mutual Defense Assistance Act, which provided \$1 billion for NATO allies to purchase weapons and equipment. The American ground forces consisted of one division and three regiments in Europe, and five divisions in the United States. The whole NATO alliance could field ten divisions in West Germany, but it was estimated that at least eighteen were required to halt a Soviet advance on the Rhine. Despite misgivings, in December 1949 the twelve NATO allies accepted a common defense plan. To fulfil Eisenhower's directive, the planners considered other options. 
The main ones were to fall back to the Pyrenees and hold there, or to mount an Operation Overlord–style invasion of Soviet-occupied Western Europe from North Africa or the United Kingdom. The forces for the former were lacking, so the latter strategy was adopted. Field Marshal Lord Montgomery, the Chairman of the Commanders-in-Chief Committee of the Western Union, reported on 15 June 1950 that "as things stand today and in the foreseeable future, there would be scenes of appalling and indescribable confusion in Western Europe if we were ever attacked by the Russians." He felt that NATO forces were incapable of holding the Rhine, and sought a new directive; in the end the Western Union ordered him to hold the Rhine. Forrestal expressed doubts about the plan, which he thought relied too much on the Soviets doing what they were expected to do. He questioned whether strategic bombardment could win a war. Before he committed to the purchase of millions of dollars' worth of aircraft, he wanted some assurances. An interservice committee chaired by Lieutenant General Hubert R. Harmon, USAF, was formed to investigate. The report reiterated that in the absence of adequate conventional forces, the strategic air campaign was all that there was. It estimated that it would result in a 30 to 40 percent decrease in Soviet industrial capacity. About 6.7 million casualties were anticipated, of whom 2.7 million would be killed. The survivors would face life without electric power or fuel. Nonetheless, the Harmon committee doubted that it would destroy civilian morale; based on World War II experience, the opposite would be more likely. A separate question, left unanswered, was whether it could be successfully conducted, given the poor state of intelligence regarding the Soviet Union. Denfeld, for one, doubted that it could, and proposed that instead a tactical air campaign be conducted to retard the Soviet advance into Western Europe. The Air Force pressed ahead with procurement of the long-range B-36 bomber, which led to a confrontation between the Navy and the Air Force known as the Revolt of the Admirals, and to the relief of Denfeld, who was replaced by Admiral Forrest Sherman. ## Outcome Interservice conflict was relieved by the outbreak of the Korean War and the consequent increase in the defense budget from \$14.258 billion in fiscal year 1950 to \$53.208 billion in fiscal year 1951 and \$65.992 billion in fiscal year 1952. This allowed the Joint Chiefs of Staff to contemplate a 21-division Army, 143-wing Air Force and 402-ship Navy. The Soviet Union's detonation of its first atomic bomb in August 1949 came a year before the earliest date that the Joint Intelligence Committee, in an assessment of 22 March 1948, had regarded as possible, and four years before the date it had regarded as most probable. This led to a revision of estimates of the Soviet nuclear stockpile; the Soviet Union was now expected to have 10 to 20 bombs by mid-1950, 45 to 90 by mid-1952, and 120 to 200 by mid-1954. This was immediately incorporated into the next draft of the Offtackle war plan dated 25 October. Both sides were assumed to use nuclear weapons from the commencement of hostilities. The United States now had to contemplate the air defense of North America. On 30 March 1949, Truman signed into law legislation authorizing the establishment of 75 radar stations at a cost of \$85.5 million. 
No money was appropriated, so only some site selection had been carried out by January 1950. The Operation Sandstone nuclear tests in April and May 1948 had demonstrated improved designs, with the X-Ray and Yoke tests having yields of 37 kilotons of TNT (150 TJ) and 41 kilotons of TNT (170 TJ) respectively, nearly twice that of the older Mark 3 Fat Man devices in the inventory. The new Mark 4 nuclear bomb, which entered service in March 1949, was a more practical piece of ordnance than its predecessor, and its composite uranium-plutonium core made more economical use of the available fissile material. In May 1948, the Los Alamos Scientific Laboratory commenced work on the design of the Mark 5 nuclear bomb, a smaller and lighter weapon. The development of Soviet atomic bombs provided the impetus for the development of even more destructive thermonuclear weapons. On 31 December 1949, the Strategic Air Command had 521 B-29s, B-36s and B-50s capable of delivering atomic bombs. It was estimated that SAC bombers would suffer 35 percent casualties at night, and 50 percent if missions had to be conducted in daylight. The delivery of the 292 atomic bombs called for by the Offtackle plan was regarded as practical, but there would be no ability to launch follow-up raids. A jet bomber, the Boeing B-47 Stratojet, was under development, but would not become operational until 1953. The concept of nuclear deterrence did not figure in the war plans; nuclear weapons were seen purely as weapons of war. In 1947 the United States European Command (EUCOM) ordered the sole American division stationed in Europe, the 1st Infantry Division, which was scattered about the US Occupation Zone in West Germany, to reassemble to constitute a theater reserve. It was relieved of its occupation duties and its commander, Major General Frank W. Milburn, was ordered to resume its tactical training. By 1950, it was still scattered, while EUCOM looked for suitable locations for its consolidation. To build up the US ground forces, the Joint Chiefs decided to deploy four more divisions to Europe in 1951. Plans for the defense of the Rhine were still regarded as unsound, as NATO was short 8,000 tanks, 9,200 half-tracks and 3,200 artillery pieces. Equipment that had been purchased during World War II was increasingly becoming obsolescent or unserviceable. The commander of the Army's Logistics Division, Major General Henry S. Aurand, reported in September 1948 that the Army had 15,526 tanks, but only 1,762 were serviceable. By June 1950, four hundred M26 Pershing tanks had been rebuilt as the new M46 Patton. The war plans of the late 1940s were never put to the test. It is not known whether the Soviet Union would have overrun Western Europe, whether the strategic air offensive would have succeeded, or even who precisely would have won such a war. The disparity between military means and political commitments weighed heavily on the planners. They concentrated on Soviet capabilities rather than intentions. The assessment of the intent of the Soviet Union was that it did not want to risk a war due to the devastated state of its economy. A calculated risk was taken on that basis. "As long as we can outproduce the world, can control the sea and strike inland with the atomic bomb," Forrestal noted in December 1947, "we can assume certain risks otherwise unacceptable in an effort to restore world trade, to restore the balance of power – military power – and to eliminate some of the conditions that breed war." 
## See also

- Operation Dropshot
- Plan Totality
- Operation Unthinkable
1,456,653
Startling Stories
1,136,292,180
US science fiction magazine
[ "Bimonthly magazines published in the United States", "Defunct science fiction magazines published in the United States", "Fantasy fiction magazines", "Magazines disestablished in 1955", "Magazines established in 1939", "Pulp magazines", "Science fiction magazines established in the 1930s", "Startling Stories" ]
Startling Stories was an American pulp science fiction magazine, published from 1939 to 1955 by publisher Ned Pines' Standard Magazines. It was initially edited by Mort Weisinger, who was also the editor of Thrilling Wonder Stories, Standard's other science fiction title. Startling ran a lead novel in every issue; the first was The Black Flame by Stanley G. Weinbaum. When Standard Magazines acquired Thrilling Wonder in 1936, it also gained the rights to stories published in that magazine's predecessor, Wonder Stories, and selections from this early material were reprinted in Startling as "Hall of Fame" stories. Under Weisinger the magazine focused on younger readers and, when Weisinger was replaced by Oscar J. Friend in 1941, the magazine became even more juvenile in focus, with clichéd cover art and letters answered by a "Sergeant Saturn". Friend was replaced by Sam Merwin Jr. in 1945, and Merwin was able to improve the quality of the fiction substantially, publishing Arthur C. Clarke's Against the Fall of Night, and several other well-received stories. Much of Startling's cover art was painted by Earle K. Bergey, who became strongly associated with the magazine, painting almost every cover between 1940 and 1952. He was known for equipping his heroines with brass bras and implausible costumes, and the public image of science fiction in his day was partly created by his work for Startling and other magazines. Merwin left in 1951, and Samuel Mines took over; the standard remained fairly high but competition from new and better-paying markets such as Galaxy Science Fiction and The Magazine of Fantasy & Science Fiction impaired Mines' ability to acquire quality material. In mid-1952, Standard attempted to change Startling's image by adopting a more sober title typeface and reducing the sensationalism of the covers, but by 1955 the pulp magazine market was collapsing. Startling absorbed its two companion magazines, Thrilling Wonder and Fantastic Story Magazine, in early 1955, but by the end of that year it too ceased publication. Ron Hanna of Wild Cat Books revived Startling Stories in 2007. Wild Cat Books folded in 2013. A statement of the closure is still posted on the Facebook page All Pulp dated March 12, 2013 (as of January 29, 2019). The magazine was again revived by John Gregory Betancourt's Wildside Press in February 2021, with Douglas Draa as editor. ## Publication history Although science fiction had been published before the 1920s, it did not begin to coalesce into a separately marketed genre until the appearance in 1926 of Amazing Stories, a pulp magazine published by Hugo Gernsback. By the end of the 1930s the field was booming. Standard Magazines, a pulp publishing company owned by Ned Pines, acquired its first science fiction magazine, Thrilling Wonder Stories, from Gernsback in 1936. Mort Weisinger, the editor of Thrilling Wonder, printed an editorial in February 1938 asking readers for suggestions for a companion magazine. Response was positive, and the new magazine, titled Startling Stories, was duly launched, with a first issue (pulp-sized, rather than bedsheet-sized, as many readers had requested), dated January 1939. Initial pay rates were half a cent per word, lower than the leading magazines of the day. Startling was launched on a bimonthly schedule, alternating months with Thrilling Wonder Stories, though in 1940 Thrilling moved to a monthly schedule that lasted for over a year. 
The first editor was Mort Weisinger, who had been an active fan in the early 1930s and had joined Standard Magazines in 1935, editing Thrilling Wonder from 1936. Weisinger left in 1941 to take a new post as editor of Superman, and was replaced by Oscar J. Friend, who was an established writer of pulp fiction, though his experience was in western fiction rather than sf. During Friend's tenure Startling slipped from bimonthly to quarterly publication. Friend lasted for a little over two years, and was replaced by Sam Merwin Jr., as of the Winter 1945 issue. Merwin succeeded in making Startling popular and successful, and the bimonthly schedule was resumed in 1947. At the start of 1952 Startling switched to a monthly schedule; this was unusual in that Startling was notionally junior to Thrilling Wonder, its sister magazine, which remained bimonthly. Merwin left shortly before this switch, in order to spend more time on his own writing. He was replaced by Samuel Mines, who had worked with Standard's Western magazines, though he was a science fiction aficionado. Street & Smith, one of the longest-established and most respected publishers, shut down all of its pulp magazines in the summer of 1949. The pulps were dying, partially as a result of the success of paperbacks. Standard continued with Startling and Thrilling, but the end came only a few years later. In 1954, Fredric Wertham published Seduction of the Innocent, a book in which he asserted that comics were inciting children to violence. A subsequent Senate subcommittee hearing led to a backlash against comics, and the publishers dropped titles in response. The financial impact spread to pulp magazines, since a publisher would often publish both. A 1955 strike at American News Company, the main distributor in the U.S., meant that magazines remained in warehouses and never made it to the newsstands; the unsold copies represented a significant financial blow and contributed to publishers' decisions to cancel magazines. Startling was one of the casualties. The schedule had already returned from monthly to bimonthly in 1953, and it became a quarterly in early 1954. Thrilling Wonder published its last issue in early 1955, and was then merged with Startling, as was Fantastic Story Magazine, another companion publication, but the combined magazine lasted only three more issues. Mines left the magazine at the end of 1954; he was succeeded for two issues by Theron Raines, who was followed by Herbert D. Kastle for the last two. The final issue was dated Fall 1955. ## Contents and reception ### War years From the beginning, every issue of Startling contained a complete novel, along with one or two short stories; serials did not appear, since the publisher's policy was to run novels complete in a single issue. When Standard Magazines had bought Wonder Stories in 1936, it had also acquired rights to reprint the stories that had appeared in it and in its predecessor magazines, Air Wonder Stories and Science Wonder Stories, and so Startling also included a "Hall of Fame" reprint from one of these magazines in every issue. The first lead novel was The Black Flame, a revised version of "Dawn of Flame", a story by Stanley Weinbaum that had previously appeared only in an edition limited to 250 copies. There was also a tribute to Weinbaum, written by Otto Binder; Weinbaum had died in 1935 and was well regarded, so even though the story was not one of his best, it was excellent publicity for the magazine. 
Otto and his brother, Earl, also contributed a story, "Science Island", under their joint pseudonym Eando Binder. The "Hall of Fame" reprint was D.D. Sharp's "The Eternal Man", from 1929. Other features included a pictorial article on Albert Einstein, and a set of biographical sketches of scientists, titled "Thrills in Science". The letter column was called "The Ether Vibrates", and there was a regular fanzine review column, providing contact information so that readers could obtain the fanzines directly. Initially the stories for the "Hall of Fame" were chosen by the editor, but soon Weisinger recruited well-known science fiction fans to make the choices. Startling was popular, and soon "became one of the core science fiction magazines", according to science fiction historian Mike Ashley. The target audience was younger readers, and the lead novels were often space operas by well-known pulp writers such as Edmond Hamilton and Manly Wade Wellman. In addition to space opera, some more fantastical fiction began to appear, contributed by writers such as Henry Kuttner. These early science fantasy stories were popular with the readers, and contrasted with the hard science fiction that John W. Campbell was pioneering at Astounding. Weisinger set out to please the younger readers, and when Friend became editor in 1941, he went further in this direction, giving the magazine a strongly juvenile flavor. For example, Friend introduced "Sergeant Saturn", a character (originally from Thrilling Wonder Stories) who answered readers' letters and appeared in other features in the magazine. Many subscribers found the approach irritating. The interior artwork was initially done by Hans Wessolowski (more usually known as "Wesso"), Mark Marchioni and Alex Schomburg, and occasionally Virgil Finlay. The initial cover art was mostly painted by Howard Brown, but when Earle K. Bergey began to paint covers for Startling in 1940, soon after its launch, Bergey quickly became identified with the magazine; between 1940 and 1952 (the year of Bergey's death) he painted the great majority of covers. Bergey's covers were visually striking: in the words of science fiction editor and critic Malcolm Edwards, they typically featured "a rugged hero, a desperate heroine (in either a metallic bikini or a dangerous state of déshabillé) and a hideous alien menace". The brass bra motif came to be associated with Bergey, and his covers did much to create the image of science fiction as it was perceived by the general public. ### Merwin and after When Merwin became editor in 1945 he brought changes, but artist Earle K. Bergey retained the creative freedom he had come to expect given his relationship with Standard. Some argue that Bergey's covers became more realistic, and Merwin managed to improve the interiors of Startling to the point of being a serious rival to Astounding, acknowledged leader of the field. Critics' opinions vary on the relative quality of the magazines of this era; Malcolm Edwards regards Startling as second only to Astounding, but Ashley considers Thrilling Wonder to be Astounding's closest challenger in the late 1940s. Merwin's discoveries included Jack Vance, whose first story, "The World Thinker", appeared in the Summer 1945 issue. He also regularly published work by Henry Kuttner and C.L. 
Moore, who wrote both under Kuttner's name and as "Keith Hammond": in a four-year period from 1946 to 1949 the writing team of Kuttner and Moore had seven novels published in Startling, mostly science fantasy, a subgenre not common at that time. Notable novels that appeared in the late 1940s included Fredric Brown's What Mad Universe and Charles L. Harness's Flight Into Yesterday, later published in book form as The Paradox Men. Arthur C. Clarke's Against the Fall of Night, which he later expanded and rewrote as The City and the Stars, appeared in the November 1948 issue. One novel that did not appear in Startling was Isaac Asimov's Pebble in the Sky, which Merwin had commissioned from Asimov in the early summer of 1947. After the unusual step of allowing the editor to twice read the work-in-progress and receiving nothing but approval, Asimov delivered a completed draft in September. This time, Merwin asked for revisions: Leo Margulies, Merwin's boss, had decided that Startling needed to focus more on action and adventure in the style of Amazing, and less on cerebral stories in the style of Astounding. Asimov, "for the first and only time of [his] life...openly lost [his] temper with an editor", stalked out of the room with his manuscript and never submitted anything to Merwin again, though he later expressed a softening of feeling and admitted Merwin had been within his rights. Another title in the Standard Magazines stable was Captain Future, which had been launched a year after Startling, and featured the adventures of the superhero after whom the magazine was named. When it folded with its Spring 1944 issue, the series of novels was continued for some time in the pages of Startling; over the next six years ten more "Captain Future" novels appeared, with the last one, Birthplace of Creation, printed in the May 1951 issue. Merwin's successor, Mines, also published some excellent work, though increased competition in the early 1950s from Galaxy and The Magazine of Fantasy & Science Fiction did lead to some dilution of quality, and Startling's rates—one to two cents per word—could not compete with the leading magazines. However, Startling's editorial policy was more eclectic: it did not limit itself to one kind of story, but printed everything from melodramatic space opera to sociological sf, and Mines had a reputation for having "the most catholic tastes and the fewest inhibitions" of any of the science fiction magazine editors. In late 1952, Mines published Philip José Farmer's "The Lovers", a taboo-breaking story about aliens who can reproduce only by mating with humans. Illustrated with an eye-popping cover by Bergey, Farmer's ground-breaking story integrated sex into the plot without being prurient, and was widely praised. Farmer, partly as a consequence, went on to win a Hugo Award as "Most Promising New Writer". New authors first published by Mines included Frank Herbert, who debuted with "Looking for Something?" in April 1952, and Robert F. Young, whose first story, "The Black Deep Thou Wingest", appeared in June 1953. The artwork was also high quality; Virgil Finlay's interior illustrations were "unparalleled", according to science fiction historian Robert Ewald. Other well-known artists who contributed interior work included Alex Schomburg and Kelly Freas. Startling's instantly recognizable title logo was redolent of the magazine's pulp roots, and in early 1952 Mines decided to replace it with a more staid typeface. 
The covers became more sober, with spaceships replacing the women in brass bras. With the Spring 1955 issue, at the start of its final year, Startling dropped its long-standing policy of printing a novel in every issue, but only three issues later it ceased publication. ## Bibliographic details The editorial succession at Startling was as follows: - Mort Weisinger: January 1939 – May 1941. - Oscar J. Friend: July 1941 – Fall 1944. - Sam Merwin Jr.: Winter 1945 – September 1951. - Samuel Mines: November 1951 – Fall 1954. - Theron Raines: Winter 1955 – Spring 1955. - Herbert D. Kastle: Summer 1955 – Fall 1955. Startling was a pulp-sized magazine for all of its 99 issues. It initially was 132 pages, and was priced at 15 cents. The page count was reduced to 116 pages with the Summer 1944 issue and then increased to 148 pages with the March 1948 issue, at which time the price went up to 20 cents. The price increased again, to 25 cents, in November 1948, and the page count increased again to 180 pages. This higher page count did not last; it was reduced to 164 in March 1949 and then again to 148 pages in July 1951. The October 1953 issue saw the page count drop again, to 132, and a year later the Fall 1954 issue cut the page count to 116. The magazine remained at 116 pages and a price of 25 cents for the rest of its existence. The original bimonthly schedule continued until the March 1943 issue, which was followed by June 1943 and then Fall 1943. This inaugurated a quarterly schedule that ran until Fall 1946, except that an additional issue, dated March, was inserted between the Winter 1946 and Spring 1946 issues. The next issue, January 1947, began another bimonthly sequence, which ran without interruption until November 1951. With the following issue, January 1952, Startling switched to a monthly schedule, which lasted until the June 1953 issue which was followed by August and October 1953 and then January 1954. The next issue was Spring 1954, and the magazine stayed on a quarterly schedule from then until the last issue, Fall 1955. There was a British reprint edition from Pembertons between 1949 and 1954. These were heavily cut, with sometimes only one or two stories and usually only 64 pages, though the October and December 1952 issues both had 80 pages. It was published irregularly; initially once or twice a year, and then more or less bimonthly beginning in mid-1952. The issues were numbered from 1 to 18. Three different Canadian reprint editions also appeared for a total of 21 or 22 issues (sources differ on the correct number). Six quarterly issues appeared from Summer 1945 through Fall 1946 from Publication Enterprises, Ltd.; then another three bimonthly issues appeared, from May to September 1948, from Pines Publications. Finally 12 more bimonthly issues appeared from March 1949 to January 1951, from Better Publications of Canada. All these issues were almost identical to the American versions, although they are 0.5 inches (1.3 cm) taller. A Mexican magazine, Enigmas, ran for 16 issues from August 1955 to May 1958; it included many reprints, primarily from Startling and from Fantastic Story Magazine. ### Derivative anthologies Two anthologies of stories from Startling have been published. In 1949 Merlin Press brought out From Off This World, edited by Leo Margulies and Oscar Friend, which included stories that had appeared in the "Hall of Fame" reprint section of the magazine. 
Then in 1954 Samuel Mines edited The Best from Startling Stories, published by Henry Holt; despite the title, the stories were reprinted from both Startling and its sister magazine, Thrilling Wonder Stories. The anthology was reprinted twice in the UK under different titles: as Startling Stories in 1954, published by Cassell, and then in 1956 as a Science Fiction Book Club edition titled Moment in Time. P. Schuyler Miller praised it as "an excellent collection by anyone's standards."
194,350
Black vulture
1,170,143,934
New World vulture
[ "Articles containing video clips", "Birds described in 1793", "Birds of prey of the Americas", "Birds of the Caribbean", "Birds of the Rio Grande valleys", "Cathartidae", "Least concern biota of North America", "Least concern biota of the United States", "Native birds of the Southeastern United States", "New World vultures", "Taxa named by Johann Matthäus Bechstein" ]
The black vulture (Coragyps atratus), also known as the American black vulture, Mexican vulture, zopilote, urubu, or gallinazo, is a bird in the New World vulture family whose range extends from the southeastern United States to Peru, central Chile and Uruguay in South America. Although a common and widespread species, it has a somewhat more restricted distribution than its compatriot, the turkey vulture, which breeds well into Canada and all the way south to Tierra del Fuego. It is the only extant member of the genus Coragyps, which is in the family Cathartidae. Despite the similar name and appearance, this species is unrelated to the Eurasian black vulture, an Old World vulture of the family Accipitridae (which includes raptors such as the eagles, hawks, kites, and harriers). For ease of locating animal corpses (their main source of sustenance), black vultures tend to inhabit relatively open areas with scattered trees, such as chaparral, in addition to subtropical forested areas and parts of the Brazilian Pantanal. With a wingspan of 1.5 m (4.9 ft), the black vulture is an imposing bird, though relatively small for a vulture. It has black plumage, a featherless, grayish-black head and neck, and a short, hooked beak. These features are all evolutionary adaptations to life as a scavenger: the black plumage stays visibly cleaner than that of a lighter-colored bird, the bare head allows the bird to dig inside animal carcasses without soiling its feathers, and the hooked beak is suited to stripping the bodies clean of meat. The absence of head feathers also helps the birds remain (more or less) free of animal blood and bodily fluids, which could otherwise attract parasites; most vultures bathe regularly after eating, provided there is a water source, whether natural or man-made, such as a stream or a livestock water tank. The black vulture is a scavenger and feeds on carrion, but will also eat eggs, small reptiles, and, very rarely, small newborn animals (livestock such as cattle, as well as deer, rodents, rabbits, etc.). It will also opportunistically prey on extremely weakened, sick, elderly, or otherwise vulnerable animals. In areas populated by humans, it also scavenges at dumpster sites and garbage dumps. It finds its meals either by using its keen eyesight or by following other New World vultures of the genus Cathartes, which possess a keen sense of smell. Lacking a syrinx—the vocal organ of birds—its only vocalizations are grunts or low hisses. It lays its eggs in caves, in cliffside rock crevices, in dead and hollow trees or, in the absence of predators, on the bare ground, generally raising two chicks each year. Both parents feed the young by regurgitating food at the nest site. In the United States, the vulture receives legal protection under the Migratory Bird Treaty Act of 1918, even though it is largely sedentary. This vulture also appeared in Mayan codices. ## Taxonomy The American naturalist William Bartram wrote of the black vulture in his 1791 book Bartram's Travels, calling it Vultur atratus "black vulture" or "carrion crow". Bartram's work has been rejected for nomenclatural purposes by the International Commission on Zoological Nomenclature as the author did not consistently use the system of binomial nomenclature.
The German ornithologist Johann Matthäus Bechstein formally described the species using the same name in 1793 in his translation of John Latham's A General Synopsis of Birds. The common name "vulture" is derived from the Latin word vulturus, which means "tearer" and is a reference to its feeding habits. The species name, ātrātus, means "clothed in black", from the Latin āter 'dull black'. Vieillot defined the genus Catharista in 1816, listing as its type C. urubu. French naturalist Emmanuel Le Maout placed it in its current genus Coragyps (as C. urubu) in 1853. Isidore Geoffroy Saint-Hilaire has been listed as the author in the past, but he did not publish any official description. The genus name means "raven-vulture", from a contraction of the Greek corax/κόραξ and gyps/γύψ for the respective birds. The American Ornithologists' Union used the name Catharista atrata initially, before adopting Vieillot's name (Catharista urubu) in their third edition. By their fourth edition, they had adopted the current name. The black vulture is basal (the earliest offshoot) to a lineage that gave rise to the turkey and greater and lesser yellow-headed vultures, diverging around 12 million years ago. Martin Lichtenstein described C. a. foetens, the Andean black vulture, in 1817, and Charles Lucien Bonaparte described C. a. brasiliensis, from Central and South America, in 1850 on the basis of smaller size and minor plumage differences. However, it has been established that the variation between the three subspecies is clinal (that is, there is no division between the subspecies), and hence they are no longer recognised. "Black vulture" has been designated the official name by the International Ornithologists' Union (IOC). "American black vulture" is also commonly used, and in 2007 the South American Classification Committee (SACC) of the American Ornithological Society unsuccessfully proposed it to be the official name of the species. ### Evolutionary history of Coragyps From the Early to the Late Pleistocene, a prehistoric species of black vulture, C. occidentalis, known as the Pleistocene black vulture or—somewhat in error—the "western black vulture", occurred across the present species' range. This bird did not differ much from the black vulture of today except in size; it was some 10–15% larger, and had a relatively flatter and wider bill. It filled an ecological niche similar to that of the living form but fed on larger animals, and was previously thought to have evolved into it by decreasing in size during the last ice age. However, a 2022 genetic study found C. occidentalis to be nested within the South American clade of black vultures; C. occidentalis had evolved from the modern black vulture about 400,000 years ago and developed a larger and more robust body size when it colonized high-altitude environments. C. occidentalis may have interacted with humans; a subfossil bone of the extinct species was found in a Paleo-Indian to Early Archaic (9000–8000 years BCE) midden at Five Mile Rapids near The Dalles, Oregon. Fossil (or subfossil) black vultures cannot necessarily be attributed to the Pleistocene or the recent species without further information: the same size variation found in the living bird was also present in its larger prehistoric relative. Thus, in 1968, Hildegarde Howard separated the Mexican birds as C. occidentalis mexicanus as opposed to the birds from locations farther north (such as Rancho La Brea), which constituted the nominate subspecies C. o. occidentalis.
The southern birds were of the same size as present-day northern black vultures and can only be distinguished by their somewhat stouter tarsometatarsus and the flatter and wider bills, and even then only with any certainty if the location where the fossils were found is known. As the Pleistocene and current black vultures form an evolutionary continuum rather than splitting into two or more lineages, some include the Pleistocene taxa in C. atratus, which is further affirmed by phylogenetic studies indicating that it forms a clade within the South American C. atratus. An additional fossil species from the Late Pleistocene of Cuba, C. seductus, was described in 2020. ## Description The black vulture is a fairly large scavenger, measuring 56–74 cm (22–29 in) in length, with a 1.33–1.67 m (52–66 in) wingspan. Weight for black vultures from North America and the Andes ranges from 1.6 to 3 kg (3.5 to 6.6 lb) but in the smaller vultures of the tropical lowlands it is 1.18–1.94 kg (2.6–4.3 lb). 50 vultures in Texas were found to average 2.15 kg (4.7 lb) while 119 birds in Venezuela were found to average 1.64 kg (3.6 lb). The extended wing bone measures 38.6–45 cm (15.2–17.7 in), the shortish tail measures 16–21 cm (6.3–8.3 in) and the relatively long tarsus measures 7–8.5 cm (2.8–3.3 in). Its plumage is mainly glossy black. The head and neck are featherless and the skin is dark gray and wrinkled. The iris of the eye is brown and has a single incomplete row of eyelashes on the upper lid and two rows on the lower lid. The legs are grayish white, while the two front toes of the foot are long and have small webs at their bases. The nostrils are not divided by a septum, but rather are perforate; from the side one can see through the beak. The wings are broad but relatively short. The bases of the primary feathers are white, producing a white patch on the underside of the wing's edge, which is visible in flight. The tail is short and square, barely reaching past the edge of the folded wings. A leucistic C. atratus brasiliensis was observed in Piñas, Ecuador in 2005. It had white plumage overall, with only the tarsus and tail as well as some undertail feathers being black. It was not an albino as its skin seemed to have had the normal, dark color and it was part of a flock of some twenty normally plumaged individuals. ## Distribution and habitat The black vulture has a Nearctic and Neotropic distribution. Its range includes the mid-Atlantic States, the southernmost regions of the Midwestern United States, the southern United States, Mexico, Central America and most of South America. It is usually a permanent resident throughout its range, although birds at the extreme north of its range may migrate short distances, and others across their range may undergo local movements in unfavourable conditions. In South America, its range stretches to Peru, central Chile and Uruguay. It also is found as a vagrant on the islands of the Caribbean. It prefers open land interspersed with areas of woods or brush. It is also found in moist lowland forests, shrublands and grasslands, wetlands and swamps, pastures, and heavily degraded former forests. Preferring lowlands, it is rarely seen in mountainous areas. It is usually seen soaring or perched on fence posts or dead trees. ## Ecology and behavior The black vulture soars high while searching for food, holding its wings horizontally when gliding. It flaps in short bursts which are followed by short periods of gliding. 
Its flight is less efficient than that of other vultures, as the wings are not as long, forming a smaller wing area. In comparison with the turkey vulture, the black vulture flaps its wings more frequently during flight. It is known to regurgitate when approached or disturbed, which assists in predator deterrence and taking flight by decreasing its takeoff weight. Like all New World vultures, the black vulture often defecates on its own legs, using the evaporation of the water in the feces and/or urine to cool itself, a process known as urohidrosis. It cools the blood vessels in the unfeathered tarsi and feet, and causes white uric acid to streak the legs. Because it lacks a syrinx, the black vulture, like other New World vultures, has very few vocalization capabilities. It is generally silent, but can make hisses and grunts when agitated or while feeding. The black vulture is gregarious, and roosts in large groups. In areas where their ranges overlap, the black vulture will roost on the bare branches of dead trees alongside groups of turkey vultures. The black vulture generally forages in groups; a flock of black vultures can easily drive a rival turkey vulture, which is generally solitary while foraging, from a carcass. Like the turkey vulture, this vulture is often seen standing in a spread-winged stance. The stance is believed to serve multiple functions: drying the wings, warming the body, and baking off bacteria. This same behavior is displayed by other New World vultures, Old World vultures, and storks. ### Breeding The timing of black vultures' breeding season varies with the latitude at which they live. In the United States, birds in Florida begin breeding as early as January, for example, while those in Ohio generally do not start before March. In South America, Argentinian and Chilean birds begin egg-laying as early as September, while those further north on the continent typically wait until October. Some in South America breed even later than that—black vultures in Trinidad typically do not start until November, for example, and those in Ecuador may wait until February. Pairs are formed following a courtship ritual which is performed on the ground: several males circle a female with their wings partially open as they strut and bob their heads. They sometimes perform courtship flights, diving or chasing each other over their chosen nest site. The black vulture lays its eggs on the ground in a wooded area, a hollow log, or some other cavity, seldom more than 3 m (10 ft) above the ground. While it generally does not use any nesting materials, it may decorate the area around the nest with bits of brightly colored plastic, shards of glass, or metal items such as bottle caps. Clutch size is generally two eggs, though this can vary from one to three. The egg is oval and on average measures 7.56 cm × 5.09 cm (2.98 in × 2.00 in). The smooth, gray-green, bluish, or white shell is variably blotched or spotted with lavender or pale brown around the larger end. Both parents incubate the eggs, which hatch after 28 to 41 days. Upon hatching, the young are covered with a buffy down, unlike turkey vulture chicks which are white. Both parents feed the nestlings, regurgitating food at the nest site. The young remain in the nest for two months, and after 75 to 80 days they are able to fly skillfully. Predation of black vultures is relatively unlikely, though eggs and nestlings are readily eaten if found by mammalian predators such as raccoons, coatis and foxes. 
Due to its aggressiveness and size, few predators can threaten the fully grown vulture. However, various eagles may kill vultures in conflicts; the ornate hawk-eagle, a bird slightly smaller than the vulture, has preyed on adult black vultures, as have the two eagle species native to North America north of Mexico. ### Feeding In natural settings, the black vulture eats mainly carrion. In areas populated by humans, it may scavenge at garbage dumps, but also takes eggs, fruit (both ripe and rotting), fish, dung and ripe/decomposing plant material and can kill or injure newborn or incapacitated mammals. Like other vultures, it plays an important role in the ecosystem by disposing of carrion which would otherwise be a breeding ground for disease. The black vulture locates food either by sight or by following New World vultures of the genus Cathartes to carcasses. These vultures—the turkey vulture, the lesser yellow-headed vulture, and the greater yellow-headed vulture—forage by detecting the scent of ethyl mercaptan, a gas produced by the beginnings of decay in dead animals. Their heightened ability to detect odors allows them to search for carrion below the forest canopy. The black vulture is aggressive when feeding, and may chase the slightly larger turkey vulture from carcasses. The black vulture also occasionally feeds on livestock or deer. It is the only species of New World vulture which preys on cattle. It occasionally harasses cows which are giving birth, but primarily preys on newborn calves, as well as lambs and piglets. In its first few weeks, a calf will allow vultures to approach it. The vultures swarm the calf in a group, then peck at the calf's eyes, or at the nose or the tongue. The calf then goes into shock and is killed by the vultures. Black vultures have sometimes been observed removing and eating ticks from resting capybaras and Baird's tapir (Tapirus bairdii). These vultures are known to kill baby herons and seabirds on nesting colonies, and feed on domestic ducks, small birds, skunks, opossums, other small mammals, lizards, small snakes, young turtles and insects. Like other birds with scavenging habits, the black vulture shows resistance to pathogenic microorganisms and their toxins. Many mechanisms may explain this resistance. Anti-microbial agents may be secreted by the liver or gastric epithelium, or produced by microorganisms of the normal microbiota of the species. ## Legal protections It receives special legal protection under the Migratory Bird Treaty Act of 1918 in the United States, under the Convention for the Protection of Migratory Birds in Canada, and under the Convention for the Protection of Migratory Birds and Game Mammals in Mexico. In the United States it is illegal to take, kill, or possess black vultures without a permit, and violation of the law is punishable by a fine of up to US\$15,000 and imprisonment of up to six months. It is listed as a species of Least Concern by the IUCN Red List. Populations appear to remain stable, and it has not reached the threshold of inclusion as a threatened species, which requires a decline of more than 30% in ten years or three generations. ## Relationship with humans The black vulture is considered a threat by cattle ranchers due to its predation on newborn cattle. The droppings produced by black vultures can harm or kill trees and other vegetation. As a defense, the vultures also "regurgitate a reeking and corrosive vomit."
The bird can be a threat to the safety of air traffic, especially when it congregates in large numbers in the vicinity of garbage dumps—as is the case at Rio de Janeiro's Tom Jobim International Airport. The black vulture can be held in captivity, though the Migratory Bird Treaty Act only allows this in the case of animals which are injured or unable to return to the wild. ## In popular culture The black vulture appears in a variety of Maya hieroglyphics in Mayan codices. It is normally associated with death or depicted as a bird of prey, and the vulture's glyph is often shown attacking humans. This species lacks the religious connections that the king vulture has. While some of the glyphs clearly show the black vulture's open nostril and hooked beak, others are assumed to be this species because they are vulture-like but lack the king vulture's knob and are painted black. Black vultures are an important cultural symbol in Lima, Peru. This vulture has appeared on two stamps: those of Suriname in 1990 and Nicaragua in 1994. It is the mascot of the Brazilian soccer team Flamengo.
4,812,509
Ken "Snakehips" Johnson
1,170,107,802
British jazz band leader and dancer
[ "1914 births", "1941 deaths", "20th-century British LGBT people", "20th-century British male musicians", "British LGBT artists", "British LGBT musicians", "British civilians killed in World War II", "British jazz bandleaders", "Deaths by airstrike during World War II", "Guyanese emigrants to England", "Guyanese musicians", "LGBT Black British people", "Musicians from Georgetown, Guyana", "People educated at Sir William Borlase's Grammar School" ]
Kenrick Reginald Hijmans Johnson (10 September 1914 – 8 March 1941), known as Ken "Snakehips" Johnson, was a swing band leader and dancer. He was a leading figure in black British music of the 1930s and early 1940s before his death while performing at the Café de Paris, London, when it was hit by a German bomb in the Blitz during the Second World War. Johnson was born in Georgetown, British Guiana (present-day Guyana). He showed some musical ability, but his early interest in a career in dancing displeased his father, who wished him to study medicine. He was educated in Britain, but instead of continuing on to university, he travelled to New York, perfecting dance moves and immersing himself in the vibrant jazz scene in Harlem. Tall and elegant, he modelled himself professionally on Cab Calloway. He returned to Britain and set up the Aristocrats (or Emperors) of Jazz, a mainly black swing band, with Leslie Thompson, a Jamaican musician. In 1937 he took control of the band through a legal loophole, resulting in the departure of Thompson and several musicians. Johnson filled the vacancies with musicians from the Caribbean; the band's popularity grew and its name changed to the West Indian Dance Orchestra. From 1938 the band started broadcasting on BBC Radio, recorded their first discs and appeared in an early television broadcast. Increasingly popular, they were employed as the house band at the Café de Paris, an upmarket and fashionable nightclub located in a basement premises below a cinema. A German bombing raid on London in March 1941 hit the cinema, killing at least 34 and injuring dozens more. Johnson and one of the band's saxophonists were among those killed; several other band members were injured. The West Indian Dance Orchestra were the leading swing band in Britain at the time, well-known and popular through their radio broadcasts, but their impact was more social than musical. As leader of a mainly black orchestra playing the most up-to-date music of the time, Johnson was seen as a pioneer for black musical leaders in the UK. When the band broke up after Johnson's death, the members had an impact on the nature and sound of British jazz. In 1940 Johnson had begun a relationship with Gerald Hamilton, a man twenty years his senior. After Johnson's death Hamilton never travelled without a framed photograph of him, always referring to him as "my husband". ## Biography ### Early life Kenrick Reginald Hijmans Johnson was born in Georgetown, British Guiana (present-day Guyana), on 10 September 1914. His parents were Dr Reginald Fitzherbert Johnson, a doctor and medical officer of health from British Guiana, and Anna Delphina Louisa Hijmans, a nurse from Dutch Guiana (now Suriname). His uncle was the pianist Oscar Dummett. Johnson appeared in a comb and paper band at his Georgetown school, Queen's College, and played the violin. His early interest in dancing was opposed by his father, who considered a future in the medical profession more appropriate for his son. To give him a British education and to further the possibility of a medical career, Kenrick was sent to the UK at the age of 14—arriving at Plymouth on 31 August 1929—for schooling at Sir William Borlase's Grammar School near Marlow, Buckinghamshire. He played cricket and football at the school; as a tall boy—he was eventually 6 feet 4 inches (1.93 m)—he was an excellent goalkeeper. He also played the violin in the school chapel, and danced for his friends. 
On leaving school in 1931, Johnson studied law at the University of London, but gave up to work as a dancer. He worked with travelling revue troupes and took professional lessons. His main influence was Buddy Bradley, a well known African American dancer and choreographer who ran a dance school in the West End of London. Through Bradley's influence, Johnson was recorded in 1934 for the film Oh, Daddy!, and in December 1934 he travelled to Trinidad. He toured the Caribbean, dancing on stage, before moving on to the US, where he visited Harlem, New York. He spent his time in the US honing his tap dancing skills, and studying the styles of the local African American dancers. According to Val Wilmer, the writer on jazz, it was here that he "learnt to wind his hips in the suggestive manner that his nickname implied". According to Andrew Simons, Head of Music at the British Library, it is likely that Johnson also saw the act of Bill "Bojangles" Robinson, who performed a "stair dance" that was well-known on the New York vaudeville stage. Johnson met Fletcher Henderson, who encouraged him in a future band-leading career and allowed him to conduct his orchestra. While in the US he featured in two short films. He appeared on stage in August 1935 for a one-night performance in British Guiana; posters advertised him as "Ken 'Snakehips' Johnson, Direct in from Hollywood after contract with Warner Bros. Studios". He returned to Britain in 1936. ### Career Johnson's experiences in Harlem motivated him to start his own swing band. According to Wilmer, British dance bands of the time "were technically proficient but generally lacked the ability then to 'swing' like African Americans". Johnson saw his music in "the context of black internationalism and Pan-Africanism that shaped London in the 1930s". Wanting to model himself on the entertainer-bandleader model, such as the American Cab Calloway—an elegant figure who led his swing orchestra in tails—Johnson began to build an all-black band. In 1936 he teamed up with the Jamaican trumpeter Leslie Thompson to form an all-black jazz band, the Aristocrats (or Emperors) of Jazz, sometimes the "Jamaican Emperors", who made their debut that April. Thompson was the musical leader of the band. Wanting to achieve the same sounds as the American big bands, he said "I made them rehearse to get that lift that Jimmie Lunceford and [Duke] Ellington were getting on their records"; he described Johnson as "a stick wagger—he was no musician". While Johnson left the musical practice for Thompson to direct, he rehearsed his showmanship and dance moves. On saxophone the band included three Jamaicans (Bertie King, Louis Stephenson and Joe Appleton) and Robert Mumford-Taylor, who was of Sierra Leonean descent. Thompson was joined on trumpet by the Trinidadian Wally Bowen, the Jamaican Leslie "Jiver" Hutchinson and Arthur Dibbin, who was born in South Wales of West African descent. On double bass they employed either the South African Bruce Vanderpoye or Abe "Pops" Clare from the Caribbean. Yorke de Souza, a Jamaican, was the pianist; Joe Deniz, who was born in South Wales of a father from the Cape Verde Islands, was the guitarist. As Thompson could not find suitable black trombonists, he employed Reg Amore and Freddie Greenslade, both of whom were white but would wear blackface to ensure the band were seen as an all-black ensemble. Although the group struggled financially when they first started, they soon built a reputation and following. 
First performing in cinemas in outer London from April 1936, the band toured Britain, appearing on the variety circuit. Towards the end of 1936 Johnson and the band were recruited for a six-week trial residency as the house band at the Old Florida Club in Old Bruton Mews, Mayfair—with an income four or five times that of most night club bands. Johnson received £20 a week; the others a little less, but all did well at a time when the average wage was £5 a week. In February 1937 Johnson and his manager, Ralph Deene, renegotiated the band's contract with the club in their names, omitting Thompson from the contract, effectively taking ownership of the orchestra. Thompson left, taking several members loyal to him. To fill the gaps in the orchestra, Johnson recruited four musicians he knew from Trinidad: the saxophonists George Roberts and Dave "Baba" Williams; Dave Wilkins on trumpet and Carl Barriteau on clarinet, although several of those who left returned over time. With Thompson gone from the band, Hutchinson took over the role of musical leader. The band continued performing at the Old Florida Club and began taking day-time stage work. They were soon scouted at the Shepherd's Bush Empire by Leslie Perowne of the BBC's Variety Department. This led to their first radio broadcast on 11 January 1938 for a 30-minute segment on the BBC Regional Programme. It was the first of 43 broadcasts they were to make. The following month they recorded their first discs, "Goodbye" and "Remember", although neither was issued. In July that year they recorded their first releases, "Washington Squabble" and "Please be Kind". Johnson and the West Indian Dance Orchestra, as the band were now known, appeared in an early television broadcast on the BBC in either 1938 or 1939. Towards the end of 1938 Johnson began making plans for an overseas tour, focused on Scandinavia and the Netherlands; he also planned to attend the 1939 New York World's Fair, appearing in the West Indian section. The outbreak of the Second World War curtailed these plans. In 1939 the band appeared as a backing orchestra for the film Traitor Spy. Johnson did not appear in the film, and his post as band leader was taken by "Jiver" Hutchinson. In April 1939 the band began a residency at a new club, Willerby's. As well as providing a show for the audience, they played music for dancing. The music magazine Melody Maker considered that the move to music for dancing was advantageous for the band as "their music has a dance-inducing quality which cannot fail to please, while, as an entertaining unit, the band ranks high". Due to the threat of bombing, Willerby's closed in October 1939, but the band were in demand and began an engagement at the Café de Paris, an upmarket nightclub in Coventry Street, London. The band's popularity rose, as did their profile: the Café de Paris was equipped to broadcast on the BBC, and they regularly performed on radio across the UK. Demand for their services also grew as British musicians were conscripted for war service, to which the largely West Indian orchestra was not subject. In 1940 Johnson began a relationship with Gerald Hamilton, a man twenty years his senior; the couple lived for a while in Kinnerton Street, Belgravia. When the Blitz started, they took a cottage in Bray, Berkshire, on the banks of the River Thames, and Johnson would commute into London to perform, returning to Bray in the early hours of the morning.
According to Tom Cullen, Hamilton's biographer, Johnson: > was amused by Gerald's Edwardian airs and malicious anecdotes and considered him to be 'a real cool cat'; while Gerald, for his part, undertook to educate Ken's palate in the mysteries of wine ('I can conceive no greater pleasure than that of instructing a willing pupil in the glories of a worthwhile cellar', as Gerald expresses it). ### Death London's West End and clubland continued to party late into the night, despite the nightly raids by German bombers. Clubs prospered as Londoners and visitors revelled for any excuse. > Mad to celebrate this or that—a call-up, a promotion, an unexpected week-end pass, or a hasty marriage—they groped their way through the black-out to the Savoy and the Café de Paris ... and enjoyed the added thrill of dancing the night away while anti-aircraft guns thudded away outside. The Café de Paris capitalised on the situation. With the club underground, beneath the Rialto cinema, the Café's manager, Martin Poulsen, advertised it as "the safest and gayest restaurant in town – even in air raids. Twenty feet below ground". In reality all that stood between the club and the German bombs was the glass roof of the Rialto and the club's ceiling. On 8 March 1941 Johnson had drinks with friends at the Embassy Club, near the Café de Paris. It was a night of heavy bombing in central London and his friends tried to persuade him to stay. Johnson was determined to perform, so he ran to the club through the blackout to arrive in time for his 9:45 pm entrance. As the band began playing its signature song, "Oh Johnny", at least one 50-kilogram (110 lb) high-explosive bomb hit the building. At least 34 people died in the club, and dozens were injured. Johnson was killed instantly, as was the saxophonist 'Baba' Williams, who was cut in half by the blast; Poulsen was also killed. The band's guitarist Deniz later recounted: > As we started playing there was an awful thud, and all the lights went out. The ceiling fell in and the plaster came pouring down. People were yelling. A stick of bombs went right across Leicester Square, through the Café de Paris and further up to Dean Street. The next thing I remember was being in a small van which had been converted into an ambulance. Then someone came to me and said: "Joe, Ken's dead." It broke me up. Several other members of the band were also injured in the explosion. Barriteau's wrist was broken; Deniz and Bromley each had a broken leg; de Souza had splinters of glass in his eye, near the pupil. According to the screenwriter Sid Colin, "The West End paused for a moment of horrified silence—then the dance went on". The Café reopened in 1948 and continued trading until December 2020, when it closed because of bankruptcy brought on by the COVID-19 lockdown. The following morning, Hamilton was phoned by the police and asked to go to the Westminster mortuary to identify Johnson. He wrote in his diary: "Again that awful feeling of nausea which I had felt when France fell, and again the sensation of the ground slipping from beneath my feet". Hamilton was devastated by the loss of his partner and never travelled without a framed photograph of Johnson in evening dress, always referring to him as "my husband". Johnson's funeral took place on 14 March 1941 at Golders Green Crematorium; his ashes were placed in the Borlase School chapel following a memorial service on 8 March 1942. Melody Maker published coverage of Johnson and his band for three weeks after his death.
The BBC waited until September 1941 to broadcast a memorial to him on the Radio Rhythm Club programme; it drew a 15.3 per cent listenership, which was high for a late-night broadcast on the BBC Forces Programme. That October Melody Maker arranged a jam session at HMV Recording Studios, Abbey Road. Many of Johnson's former colleagues took part—Deniz and Bromley still showing the leg injuries they had sustained—and played several songs together, with other musicians filling in the gaps in the group. The BBC also broadcast two further programmes in February 1942, once when Perowne played Johnson's records, and once when the band reunited under Barriteau for a one-off performance. ## Impact and legacy The West Indian Dance Orchestra became the leading swing band in Britain, and one of the first British bands to play the style in the manner of the US bands. According to the musicologist Catherine Tackley, by 1941 Johnson and his orchestra were "a unique ensemble in Britain". Wilmer considers that the impact they made was "wider and more complex" than that of entertainers alone. Culturally, the orchestra made an impact on society: the apparently all-black outfit was the only one in the country. According to Wilmer: > Johnson's was neither the first black British band nor the first all-black ensemble to appear in Britain. He played some excellent musical arrangements, but as these owed strict allegiance to prevailing American principles and style, his significance in maintaining the first established black British band was social as much as musical. The historian Peter Fraser wrote that Johnson became both a pioneer and model for later British black musicians. His impact on London clubland, and the social changes brought about by the war, led to the emergence of later racially mixed bands. Within a month of his death, several of his band members had been employed by bandleaders whose bands had been all-white until then. Such racial integration in mainstream British jazz and dance orchestras increased over the following years, although many bands, including those led by Hutchinson, still faced what was called the "colour bar" when trying to gain bookings in clubs. Devastated and traumatised, the band broke up after Johnson's death. Al Bowlly, the singer who sometimes performed with the band, was killed in an air raid the month after Johnson; others moved on to work with other bands: Harry Parry, the Welsh bandleader, hired Deniz, de Souza and Wilkins for his Radio Rhythm Sextet, and Barriteau started a mixed swing orchestra in 1942. Hutchinson worked with the bandleader Geraldo for three years, before he formed another all-black band, the "All-Coloured Orchestra", or "All-Star Coloured Band", that comprised many members of Johnson's group, including Williams, Stephenson, Roberts, Appleton, de Souza, Deniz and Coleridge Goode. The musical historian Roberta Freund Schwartz writes that the movement of "surviving members ... arguably improved the overall sound of native jazz". In 2013 the BBC screened Dancing on the Edge by Stephen Poliakoff. The series centred on a fictional jazz band in the early 1930s, led by Louis Lester (played by Chiwetel Ejiofor). The character was a composite of several band leaders of the time, including Johnson. The same year the broadcaster Clemency Burton-Hill presented Swinging in the Blitz for the BBC, an exploration of the role of jazz in Britain during the Second World War; Johnson and his band's history was the focus for much of the programme.
In 2019 the actor and writer Clarke Peters presented the BBC Radio 4 series Black Music in Europe: A Hidden History; the episode covering the Second World War included a history of Johnson and his band. ## Approach and style Johnson was tall, elegant and handsome. His public image is described by Bourne as "a gentleman about town". Both he and his all-black band dressed in white—Johnson would wear white tails and conduct the orchestra using an extra-long baton, while his band wore white dinner jackets. According to Wilmer, "For the general public, the sight of twelve disciplined men of African descent, dressed smartly in white band jackets, was exciting and memorable." One of Johnson's aims was to ensure the band had a strong visual impact, just as the American swing bands did. This included choreographing the movements of the musicians, as well as incorporating his own dance moves into the music. According to the writer Amon Saba Saakana, Johnson's "brilliant dancing and showmanship established the band's reputation as one of the best in Britain". Although Johnson was not as musically talented as the musicians he led—one of his former colleagues said of him "he couldn't tell a B flat from a pig's foot!"—he had, as his business manager said, "the gift of imparting his terrific enthusiasm to those who were [talented]". Introducing a black musician into a largely or all-white group was difficult, and several London venues blocked such inclusion altogether. Those clubs and band-leaders that did include black musicians would usually employ only one—often presented as a novelty—and struggled to add a second over a club manager's veto unless the player was better than a white counterpart. When Johnson discussed the problem with Bert Firman, who had preceded him at the Café de Paris, he related that he too had faced such obstruction: > So what real chance has your ordinary, competent but day-to-day coloured musician got unless, of course, he is American? There is such an inferiority complex about Americans that a lousy musician would still get by so long as he had a Yank accent ... But I'm talking about West Indians. What real chance has a West Indian got? Not much. Put us in a group, stress that we are a West Indian Dance Orchestra and then we become a big novelty. Those clever fellows with their natural rhythms. Just make sure everyone gets the point, call me Ken Snakehips Johnson! ## Professional output ### Recordings Johnson's first band—the Aristocrats (or Emperors) of Jazz—did not make any recordings. The discs produced by the West Indian Dance Orchestra were commercial issues for listeners of dance band music, not swing. Because the 78 rpm record limited each recording to 3 minutes 20 seconds, "the band's special sense of swing was sometimes dampened", according to Simons. Nevertheless, the band initially recorded for Decca Records, which promoted their work in its "Swing" series. Some of the arrangements of Johnson's music were done by the American musician Adrian de Haas, others by Barriteau and some by Kenny Baker—who later appeared in the Ted Heath Orchestra. The jazz musician Soweto Kinch considers that Johnson's recordings contain aspects of Calypso music. The musical historian Jason Toynbee considers the music authentic swing, with sophisticated arrangements, "but still very much American in its derivation". For some of the recordings, Johnson employed his friend, Al Bowlly—who also accompanied the band at the Café de Paris—and the Henderson Twins on vocals.
One of the recordings, "Exactly Like You", features the whole band singing a syncopated vocal. ### Broadcasts Johnson both appeared with his band on BBC Radio and acted as a disc jockey, presenting programmes such as Calypso and other West Indian Music; the broadcast he made on 24 June 1939 preceded the cricket Test match between the West Indies and the MCC. Although the BBC eschewed jazz and ensured their modern music output was more conventional dance music, Johnson played swing, advertising the programme as "ultra-modern dance music". Johnson had his radio broadcasts recorded onto acetate discs, giving some of these to the band members. Some of these recordings are now held by the British Library Sound Archive.
190,057
Symphony No. 4 (Mahler)
1,173,450,996
Symphony by Gustav Mahler
[ "1900 compositions", "Compositions in G major", "Death in music", "Symphonies by Gustav Mahler" ]
The Symphony No. 4 in G major by Gustav Mahler was composed from 1899 to 1900, though it incorporates a song originally written in 1892. That song, "Das himmlische Leben", presents a child's vision of heaven and is sung by a soprano in the symphony's Finale. Both smaller in orchestration and shorter in length than Mahler's earlier symphonies, the Fourth Symphony was initially planned to be in six movements, alternating between three instrumental and three vocal movements. The symphony's final form—begun in July 1899 at Bad Aussee and completed in August 1900 at Maiernigg—retains only one vocal movement (the Finale) and is in four movements: Bedächtig, nicht eilen (sonata form); In gemächlicher Bewegung, ohne Hast (scherzo and trio); Ruhevoll, poco adagio (double theme and variations); and Sehr behaglich (strophic variations). The premiere was performed in Munich on 25 November 1901 by the composer and the Kaim Orchestra, but it was met with negative audience and critical reception over the work's confusing intentions and perceived inferiority to the more well-received Second Symphony. The premiere was followed by a German tour, a 1901 Berlin premiere, and a 1902 Vienna premiere, which were met with near-unanimous condemnation of the symphony. Mahler conducted further performances of the symphony, sometimes to warm receptions, and the work received its American and British premieres in 1904 and 1905. The symphony's first edition was published in 1902, but Mahler made several more revisions up until 1911. After Mahler's death, the symphony continued to receive performances under conductors such as Willem Mengelberg and Bruno Walter, and its first recording is a 1930 Japanese rendition conducted by Hidemaro Konoye that is also the first electrical recording of any Mahler symphony. The musicologist Donald Mitchell believes the Fourth and its accessibility were largely responsible for the post-war rise in Mahler's popularity. The symphony uses cyclic form throughout its structure, such as in the anticipations of the Finale's main theme in the previous three movements. The first movement has been characterized as neoclassical in style, save for its complex development section. The second movement consists of scherzos depicting Death at his fiddle, which are contrasted with Ländler-like trios. The third movement's two themes are varied alternately before reaching a triple forte coda, and the Finale comprises verses from "Das himmlische Leben" sung in strophes that are separated by refrains of the first movement's opening. Certain themes and motifs in the Fourth Symphony are also found in Mahler's Second, Third, and Fifth Symphonies. ## History ### Composition Gustav Mahler's Fourth Symphony is the last of the composer's three Wunderhorn symphonies (the others being his Second and Third Symphonies). These works incorporated themes originating in Mahler's Des Knaben Wunderhorn (The Boy's Magic Horn), a song cycle setting poems from the folk poetry collection of the same name. The core of the Fourth Symphony is an earlier song, "Das himmlische Leben" ("The Heavenly Life"), set to text from Des Knaben Wunderhorn but not included in Mahler's song cycle. Mahler considered the song both the inspiration and goal of the Fourth Symphony, calling it the "tapering spire of the edifice." Fragments of it are heard in the first three movements before it is sung in its entirety by a solo soprano in the fourth movement. 
Mahler completed "Das himmlische Leben" in 1892, as part of a collection of five Humoresken (Humoresques) for voice and orchestra. He adapted the text of "Das himmlische Leben" from the original Bavarian folk song "Der Himmel hängt voll Geigen" ("Heaven is Hung with Violins" or "The World through Rose-colored Glasses") in Des Knaben Wunderhorn. The poem describes scenes and characters from a child's vision of heaven. In 1895, Mahler considered using the song as the sixth and final movement of his Third Symphony. While remnants of "Das himmlische Leben" can be found in the Third Symphony's first, fourth, and fifth movements—including a quotation of the song in the fifth movement's "Es sungen drei Engel" ("Three Angels were Singing")—Mahler eventually decided to withdraw the song from the work. He instead opted to use the song as the finale of a new symphony, his Fourth. Consequently, there are particularly strong thematic and programmatic connections between the Third and the Fourth through "Das himmlische Leben", though the composer also realized that the Fourth was closely related to his First and Second Symphonies as well. In conversation with Natalie Bauer-Lechner in the summer of 1900, Mahler described the Fourth Symphony as the conclusion to the "perfectly self-contained tetralogy" of his first four symphonies: she later expanded on this to suggest that the First depicts heroic suffering and triumph; the Second explores death and resurrection; the Third contemplates existence and God; and the Fourth, as an extension of the Third's ideas, explores life in heaven. According to Paul Bekker's 1921 synopsis of the symphony, Mahler made an early program sketch titled Sinfonie Nr. 4 (Humoreske) that has the following six-movement form: The sketch indicates that Mahler originally planned for the Fourth Symphony to have three purely symphonic movements (first, third, and fifth) and three orchestra songs: "Das irdische Leben" (composed c. 1893 as a Des Knaben Wunderhorn song), "Morgenglocken" (completed in 1895 as the Third Symphony's "Es sungen drei Engel"), and "Das himmlische Leben". However, the symphony would be modified until only the program sketch's first and last movements would be realized as their respective movements in the symphony's final form, resulting in a Fourth Symphony of normal symphonic length (around 45 minutes) as opposed to the composer's significantly longer earlier symphonies. During Mahler's 1899 summer vacation in Bad Aussee, the Fourth Symphony, in Bauer-Lechner's words, "fell into his lap just in the nick of time" in late July. The vacation served as Mahler's only chance during the entire year when he was free to compose, but his productivity heretofore was hindered by poor weather and listening to what he called "ghastly health-resort music". As the vacation neared its end, Mahler worked on the symphony for ten days, during which he drafted "about half" of the three instrumental movements and sketched the variations of the Adagio third movement, according to Bauer-Lechner. Mahler finished the Fourth during his summer vacation in Maiernigg the next year; following another bout of unproductivity that summer, Mahler eventually found his working rhythm and completed the symphony's Partiturentwurf (first full orchestral score) on 5 August 1900. 
The symphony's completion suddenly left Mahler feeling "empty and depressed because life has lost all meaning", and Bauer-Lechner reports that he was "deeply upset to have lost such an important part of his life" composing the work. Later that year during the Christmas holidays, Mahler revised the Scherzo second movement, finalizing its orchestration on 5 January 1901. Though Mahler published his programs for the First and Second symphonies, he refrained from publishing a program for the Fourth. In the words of the musicologist James L. Zychowicz, Mahler intended for "the music to exist on its own." Mahler was also opposed to giving any titles for the symphony's movements, despite having "devised some marvelous ones", because he did not want critics and audiences to "misunderstand and distort them in the worst possible way." ### Premiere During the first half of 1901, Richard Strauss considered conducting the first complete performance of Mahler's Third Symphony. However, Strauss, unsure whether he had enough time to prepare the Third's premiere, wrote to Mahler on 3 July asking whether he could conduct the Fourth's premiere instead. Mahler in his response revealed that he had already promised the premiere to Munich, "where the Kaim Orchestra and the Odeon are having such a tug-of-war over it that I'm finding it hard to try to choose between them". The Vienna Philharmonic had also asked Mahler several times whether they could perform the symphony's premiere, but Mahler by then had promised the premiere to Felix Weingartner, head of the Kaim Orchestra. Not long after the exchanges with the Philharmonic, the composer asked for Weingartner's permission sometime in August or September 1901 to conduct the premiere himself, citing his anxiety over the symphony and its performance. Eventually, it was planned for Mahler to conduct the Kaim Orchestra in Munich for the world premiere, after which Weingartner and the Kaim Orchestra would perform the work on tour in various German cities and Mahler himself would conduct another performance in Berlin. To review the symphony's orchestration before its publication, Mahler arranged a reading rehearsal with the Vienna Philharmonic on 12 October that doubled as a rehearsal of the Vienna premiere scheduled for January next year. Mahler was not satisfied with the results, making corrections to the score and fully rehearsing the work four times before the symphony's premiere. Though the Munich premiere was originally planned for 18 November, Mahler requested in late October that Weingartner postpone the performance to 25 November, citing "insurmountable difficulties". He also opposed including a vocal work before the symphony in the premiere's program, as he wanted the Finale's soprano "to come as a complete surprise". Henry-Louis de La Grange writes: "the Fourth Symphony had cost Mahler more toil and anguish than the monumental symphonies that had preceded it, and, notwithstanding he was apprehensive of the reactions of its first audience, he secretly hoped that its modest dimensions and the clarity of its style would finally win him the approval of both the public and the musicians." The world premiere of the symphony was performed on 25 November 1901 in Munich at the Kaim-Saal, with Mahler conducting the Kaim Orchestra and Margarete Michalek as soprano. Bauer-Lechner writes that the first movement was met with both applause and boos since a number in the audience were "unable to follow the complexity of events in the development". 
The Scherzo proved more confusing to the audience and received further vocal derision. Michalek's performance in the Finale "saved the day"; her youth and charm were said to have "poured oil on the troubled waters". Despite this, the premiere left many in its audience incensed, as the Munich press was quick to report. The Allgemeine Zeitung, though praising the first movement, described the symphony as "not readily accessible and, in any case, impossible to judge after only one hearing". It also criticized the work's "pretensions" and unjustified use of "the grotesquely comic" before accusing it of "[trespassing] against the Holy Spirit of music". The Münchener Zeitung and the Bayerischer Kurier both expressed disappointment when comparing Mahler's Fourth to what they considered his superior Second Symphony; the former assessed the Fourth to be a "succession of disjointed and heterogeneous atmospheres and expressions mixed with instrumental quirks and affectations" while the latter said the work was full of "incredible cacophony". Likewise, Die Musik claimed that "the bad seeds" in parts of the Second grew into "immense spiky thistles" in the Fourth. The symphony did find some praise in the Kleine Journal—which lauded the Finale as "quite simply a work of genius" despite calling the whole work "transparent, sensitive, almost hysterical"—and the Münchener Post, which hailed the symphony as a "great step forward on the road to artistic clarity". ### Subsequent performances and reception Weingartner and the Kaim Orchestra toured the symphony, with Michalek as soloist, performing in Nuremberg (26 November 1901), Darmstadt (27 November), Frankfurt (28 November), Karlsruhe (29 November), and Stuttgart (30 November). Most of the cities gave the Fourth a unanimously negative reception, with Stuttgart the sole exception. A false report of a successful Munich premiere prompted some applause after the Nuremberg performance, but the city's General-Anzeiger gave a harsh review of Mahler's "Vaudeville-Symphony", praising only its orchestration. In Frankfurt, the audience's "angry and violent" hissing was likened to "the sound of an autumn wind blowing through the dead leaves and dried twigs of a forest" by the Musikalisches Wochenblatt. In Karlsruhe, the concert began before a near-empty hall, and Weingartner chose to conduct only the symphony's Finale. The Stuttgart press was mixed: the Schwäbischer Merkur praised Mahler as a rising star and considered the work a "wreath of good-humored melodies and folk dances"; on the other hand, the Neues Taggblatt condemned the symphony for its "vulgar passages". The tour's failure discouraged Mahler and traumatized Weingartner; the latter never conducted a piece by Mahler again. The Berlin premiere was performed on 16 December 1901 at the Berlin Opera, with Mahler conducting the Berliner Tonkünstler Orchestra and Thila Plaichinger as soprano. The work's reception was hostile; La Grange writes that "the Berlin press took a malicious delight in tearing the new work to shreds", with negative reviews in the Berliner Börsen-Zeitung, Berliner Tageblatt, and Vossische Zeitung. Mahler also conducted the Vienna premiere on 12 January 1902 at the Großer Musikvereinsaal, which was performed by the Vienna Philharmonic and Michalek. Once again, the reception was a near-unanimous condemnation of the symphony, including criticism from reviewers Max Kalbeck, Theodor Helm, Richard Heuberger, and Max Graf.
Mahler conducted a 23 January 1903 performance at the Kurhaus, Wiesbaden, where he was surprised by the friendly reception. Later that year, the symphony was also performed in Düsseldorf. On 23 March 1904, the composer conducted the Fourth at the Staatstheater Mainz, which received warm applause but reviews criticizing the work's "naïveté". This was followed by a number of international performances. In 1904, Mahler traveled to Amsterdam to conduct a double performance of the symphony on 23 October at the Royal Concertgebouw with the Concertgebouw Orchestra and the soloist Alida Lütkemann. The American premiere on 6 November 1904 in New York City saw Walter Damrosch conduct the New York Symphony Society and the soprano Etta de Montjau. The British premiere on 25 October 1905 was a Prom concert given by Henry Wood, who conducted the New Queen's Hall Orchestra and his wife, Olga Wood, as soprano. Mahler conducted another performance on 18 January 1907, this time in Frankfurt's Saalbau. Mahler's last performances of the symphony were with the New York Philharmonic and the soprano Bella Alten in Carnegie Hall on 17 and 20 January 1911. In the Amsterdam Mahler Festival of May 1920, the Concertgebouw Orchestra under Willem Mengelberg's direction performed nine concerts during which Mahler's complete works were played for the first time. Mahler's protégé Bruno Walter conducted the symphony in Moscow in 1923, but he had to convince the concert's Russian organizers not to alter the religious references in "Das himmlische Leben". During the 1940s, the Fourth received performances from the London Philharmonic Orchestra conducted by Anatole Fistoulari and the BBC Symphony Orchestra conducted by Adrian Boult, contributing to what Donald Mitchell calls "the Mahler 'boom' in England". Despite Mahler's contemporaries' negative criticism, Mitchell believes that the Fourth "above all [was] the agent of changed attitudes to Mahler in the years after the Second World War" because its relatively modest resources and length, its approachability, and its appeal eventually won "admiring audiences". In 1973, Kurt Blaukopf stated that of Mahler's symphonies, the Fourth "became popular most quickly". In 2005, Zychowicz wrote that the Fourth, in which the composer was "uncannily concise", remains one of Mahler's most accessible compositions. ## Instrumentation The symphony is scored for a smaller orchestra compared to Mahler's other symphonies, and there are no parts for trombone or tuba. Paul Stefan notes the "fairly numerous" woodwinds and strings, while Michael Steinberg calls the percussion section "lavish". The instrumentation is as follows:

Woodwinds
- 4 flutes (3rd and 4th doubling piccolos)
- 3 oboes (3rd doubling cor anglais)
- 3 bassoons (3rd doubling contrabassoon)

Brass
- 4 horns
- 3 trumpets

Percussion
- 4 timpani
- bass drum
- cymbals
- triangle
- sleigh bells
- tam-tam
- glockenspiel

Voices
- soprano solo (used only in fourth movement)

Strings
- harp
- 1st violins
- 2nd violins
- violas
- cellos
- double basses

## Structure Although Mahler described the symphony's key as G major, the work employs a progressive tonal scheme of B minor/G major to E major, as classified in The New Grove Dictionary of Music and Musicians. The symphony is in four movements:

1. Bedächtig, nicht eilen
2. In gemächlicher Bewegung, ohne Hast
3. Ruhevoll, poco adagio
4. Sehr behaglich

Mahler attempted to unify the four movements through cyclic form, linking movements by reusing themes such as that of the bells from the first movement's opening and "Das himmlische Leben" from the last movement.
Deryck Cooke estimates the symphony's duration to be 50 minutes, a moderate length for a symphony that Mahler considered to be "of normal dimensions". La Grange gives the following movement durations based on Mahler's 1904 Amsterdam performance, which took a longer 57 minutes: ### I. Bedächtig, nicht eilen Cooke characterizes the first movement as a "pastoral 'walk through the countryside' movement", and it is one of Mahler's shortest first movements. The introduction in B minor is played by flutes and sleigh bells: The first theme in G major is then heard, marked Recht gemächlich (very leisurely): Constantin Floros calls the first theme "remarkably short", and Theodor Adorno notices a Schubert-like sound in it. La Grange compares the first theme to a similar passage in the first movement exposition of Schubert's Piano Sonata in E-flat major, D. 568. The second theme is in D major, marked Breit gesungen (broadly sung): Floros identifies a similarity between this theme and a theme from the first movement of Beethoven's Piano Sonata No. 13. The exposition closes with a coda marked Wieder sehr ruhig (very calm again). Mitchell finds that the themes, textures, and rhythms of the exposition suggest Neoclassicism, but Mahler's style changes in the ensuing development section when "a radically different sound-world manifests itself". Floros comments on the development's "extraordinary complexity" in his analysis; he divides the development into eight parts, some of which explore distant minor keys and distort the main theme from the Finale. The development climaxes in its eighth part on a dissonant fortissimo followed by a trumpet fanfare that Mahler named "Der kleine Appel" ("The little summons" or "The little call to order"); he later used this trumpet call as the opening theme to the Fifth Symphony. The recapitulation section reaches what Stefan describes as "an almost Mozartian jubilation" towards its end, and the movement concludes with a calm and slow coda. ### II. In gemächlicher Bewegung, ohne Hast The second movement has a five-part structure, beginning with a scherzo part in C minor that alternates with a trio part in F major. The scherzo's prelude presents a horn call, followed by what Stefan terms a "ghostly theme" in a solo scordatura violin that begins the scherzo's first section in C minor. A brighter middle section in C major is then heard, before a reprisal of the C minor section. The scherzo closes with a horn postlude. The two trios between the movement's three scherzos have the character of a Ländler and are in a "lazily cheerful" style that contrasts with the scherzo's grotesqueness. La Grange describes the second movement as Mahler's "only true ländler movement" since the First Symphony's Scherzo. Floros finds that certain melodies in the trio anticipate themes from the Finale. The scherzo was originally named "Freund Hein spielt auf" ("Friend Hein Strikes Up", or "Death takes the fiddle" as paraphrased by Cooke). Freund Hein is a personification of Death in German folklore, and his fiddling is represented in the music by the harsh sound of the scordatura violin. The printed program for the 1904 Amsterdam performance even included the title "Todtentanz" ("Dance of Death") for the movement, though this was never published in the symphony's first edition. According to Mahler's widow, Alma, the composer took inspiration for this movement from the 1872 painting Self-Portrait with Death Playing the Fiddle by the Swiss artist Arnold Böcklin. 
Blaukopf writes that the violin passages betray "Mahler's penchant for the ludicrous and the eerie". Despite this, he notes that Freund Hein "is not frightening in effect" but is instead "uncanny". Stefan characterizes Mahler's depiction of Death as "very good-natured". ### III. Ruhevoll, poco adagio The third movement is an adagio set of double theme and variations. La Grange however believes that this "variations on two themes" interpretation of the movement is inaccurate because the second theme is "not genuinely 'varied', but only amplified when restated". Mahler called the movement his "first real variations", and he composed it under the inspiration of "a vision of a tombstone on which was carved an image of the departed, with folded arms, in eternal sleep". The musicologist Philip Barford deems the music to be "of profound, meditative beauty". Floros divides the movement into five main parts (A – B – A<sup>1</sup> – B<sup>1</sup> – A<sup>2</sup>) followed by a coda. The first theme in G major is played in the beginning of part A by the cellos: Cooke calls the opening "a transfigured cradle song". Floros views part A's structure as bar form (two Stollen—the first theme followed by its variation—and an Abgesang) with an Appendix. Part A closes with a bass motif, which both Bekker and Floros find "bell-like"; to the latter, the motif is reminiscent of the bell motif from Wagner's Parsifal. La Grange writes that the ostinato bass motif is "always present in some form or other" and gives the movement "a strong passacaglia feeling". Part B, marked Viel langsamer (much slower), is in three sections and also resembles the bar form structure. In the first section, the oboe introduces the lamenting E minor second theme, which is varied in the second section that climaxes in a fortissimo. Part B's final section, an Abgesang, is described by Floros as "symbolic of deepest mourning". Part A<sup>1</sup> is a variation of part A marked Anmutig bewegt (graceful and lively), and Floros describes part B<sup>1</sup> as "a very free" and far more intense variation of part B. La Grange identifies part A<sup>2</sup> as the "first variation proper" of part A; the first theme undergoes four variations of increasing tempo, reaching Allegro molto before a sudden return to Andante. Part A<sup>2</sup> concludes with a variation of part A's Abgesang that fades away into the coda. Floros calls the coda's introduction "the most splendid passage ... of the entire Symphony". A triple forte E major chord is played by the winds and strings, and the bass motif is reprised by the timpani and double basses. The horns and trumpets then play the main theme of the Finale before the volume rapidly decreases. As the music slows down and dies away, the movement's final passage includes what Zychowicz refers to as the Ewigkeit (eternity) motif, which Mahler first used in the Finale of his Second Symphony. ### IV. Sehr behaglich La Grange analyzes the fourth movement as a strophic durchkomponiert in three main sections separated by orchestral refrains and ending with a coda. An orchestral prelude in G major begins the Finale: The soprano then sings the first verse or strophe of "Das himmlische Leben", beginning with the movement's main theme over "Wir genießen die himmlischen Freuden" (We revel in heavenly pleasures): The verse is sung "with childishly gay expression" over text describing the joys of heaven. 
It closes with a suddenly slower choralelike figure over "Sankt Peter im Himmel sieht zu" (Saint Peter in Heaven looks on), leading into a lively orchestral interlude that reprises the bell opening from the first movement. A contrasting verse (second strophe) depicts a heavenly feast and is sung in E minor; this verse closes with another choralelike figure before the bell refrain returns for the interlude. The third strophe (comprising the third and fourth verses of the text) is in G major again and is sung over a variation of the first strophe's theme and form. After the bell refrain is played once more, the coda's pastoral introduction in E major is heard. This orchestral passage is marked Sehr zart und geheimnisvoll bis zum Schluß (very gentle and mysterious until the end), and the final strophe that follows is sung in E major over a variation of the main theme. This strophe corresponds to the text's final verse with images of "most gentle restfulness", and Mitchell calls it "an extraordinary experience without parallel elsewhere in Mahler". The coda ends on an orchestral postlude in pianissimo that gradually fades away. Adorno finds that there is an ambiguity as to whether the music and its heavenly vision has "fallen asleep for ever", and David Schiff interprets the Finale's depiction of Heaven as "untouchable, outside experience". Both agree that the Finale's promises of joy are present yet unattainable. ## Fourth movement text ## Revisions and publication Following the 1901 world premiere, Mahler revised the symphony a number of times, including changes in instrumentation, dynamics, and articulation for Julius Buths (c. 1903); revisions for a 3 November 1905 performance in Graz; changes made in the summer of 1910; and Mahler's last autographed revisions in 1911, made after his final performances of the symphony in New York. The symphony's first edition was published in 1902 by Ludwig Doblinger [de] of Vienna as a quarto score. The symphony was later taken up by the Vienna publisher Universal Edition, which reprinted the score in octavo format (c. 1905). Universal Edition published a subsequent edition in 1906, incorporating Mahler's early revisions, and reprinted this edition in 1910 and 1925. However, Universal Edition failed to carry out any of Mahler's further changes since the 1906 edition. Publishing rights of some of the symphony's editions were later transferred to Boosey & Hawkes, but their 1943 edition also failed to include the final revisions. Universal Edition eventually published a new edition in 1963, which saw Erwin Ratz incorporate Mahler's yet unincluded revisions. These changes were met with criticism from Hans Redlich, who wrote in 1966: "Only the musical texts of the Symphony published between 1902 and 1910 carry full authenticity for posterity." Josef Venantius von Wöss arranged the symphony for four-hands piano, a version which was on sale at the time of the symphony's first publishing. Erwin Stein's 1921 and Klaus Simon's 2007 arrangements both call for a reduced orchestration, though the former is scored for a smaller ensemble than the latter since Stein omits the bassoon and horn parts. ## Recordings The Fourth Symphony was first commercially recorded on 28 and 29 May 1930, with Hidemaro Konoye conducting the New Symphony Orchestra of Tokyo and the soprano Sakaye Kitasaya. The recording was released under Japanese Parlophone and is the first electrical recording of any Mahler symphony. 
Since then, the symphony has been recorded by ensembles in Europe, the United States, and Japan, including multiple recordings each from the New York Philharmonic, the Vienna Philharmonic, and the Concertgebouw Orchestra. In his 2020 Gramophone review of Fourth Symphony recordings, David Gutman selects the interpretations of Iván Fischer (2008), Willem Mengelberg (1939), Lorin Maazel (1983), and Claudio Abbado (2009) as his choice recordings, while also sampling recordings conducted by Simon Rattle (1997), Leonard Bernstein (1960), Otto Klemperer (1961), and Michael Tilson Thomas (2003).
1,180,624
Albert Ball
1,162,364,582
Recipient of the Victoria Cross, British WWI flying ace
[ "1896 births", "1917 deaths", "Aviators killed in aviation accidents or incidents in France", "British Army personnel of World War I", "British Army recipients of the Victoria Cross", "British World War I flying aces", "British World War I recipients of the Victoria Cross", "British military personnel killed in World War I", "Burials in Commonwealth War Graves Commission cemeteries in France", "Companions of the Distinguished Service Order", "English aviators", "Knights of the Legion of Honour", "Military personnel from Nottingham", "People educated at Nottingham High School", "People educated at The King's School, Grantham", "People educated at Trent College", "People from Lenton, Nottingham", "Recipients of the Military Cross", "Royal Flying Corps officers", "Royal Flying Corps recipients of the Victoria Cross", "Sherwood Foresters officers", "Victims of aviation accidents or incidents in 1917" ]
Albert Ball (14 August 1896 – 7 May 1917) was a British fighter pilot during the First World War. At the time of his death he was the United Kingdom's leading flying ace, with 44 victories, and remained its fourth-highest scorer behind Edward Mannock, James McCudden, and George McElroy. Born and raised in Nottingham, Ball joined the Sherwood Foresters at the outbreak of the First World War and was commissioned as a second lieutenant in October 1914. He transferred to the Royal Flying Corps (RFC) the following year, and gained his pilot's wings on 26 January 1916. Joining No. 13 Squadron RFC in France, he flew reconnaissance missions before being posted in May to No. 11 Squadron, a fighter unit. From then until his return to England on leave in October, he accrued many aerial victories, earning two Distinguished Service Orders and the Military Cross. He was the first ace to become a British national hero. After a period on home establishment, Ball was posted to No. 56 Squadron, which deployed to the Western Front in April 1917. He crashed to his death in a field in France on 7 May, sparking a wave of national mourning and posthumous recognition, which included the award of the Victoria Cross for his actions during his final tour of duty. The famous German flying ace Manfred von Richthofen remarked upon hearing of Ball's death that he was "by far the best English flying man". ## Early life and education Albert Ball was born on 14 August 1896 at a house on Lenton Boulevard in Lenton, Nottingham. After a series of moves throughout the area, his family settled at Sedgley in Lenton Road. His parents were Albert Ball, a successful businessman who rose from employment as a plumber to become Lord Mayor of Nottingham, and who was later knighted, and Harriett Mary Page. Albert had two siblings, a brother and a sister. His parents were considered loving and indulgent. In his youth, Ball had a small hut behind the family house where he tinkered with engines and electrical equipment. He was raised with a knowledge of firearms, and conducted target practice in Sedgley's gardens. Possessed of keen vision, he soon became a crack shot. He was also deeply religious. This did not curb his daring in such boyhood pursuits as steeplejacking; on his 16th birthday, he accompanied a local workman to the top of a tall factory chimney and strolled about unconcerned by the height. Ball studied at the Lenton Church School, The King's School, Grantham, and Nottingham High School before transferring to Trent College in January 1911, at the age of 14. As a student he displayed only average ability, but was able to develop his curiosity for things mechanical. His best subjects were carpentry, modelling, violin and photography. He also served in the Officers' Training Corps. When Albert left school in December 1913, aged 17, his father helped him gain employment at Universal Engineering Works near the family home. ## First World War ### Initial war service Following the outbreak of the First World War in August 1914, Ball enlisted in the British Army, joining the 2/7th (Robin Hood) Battalion of the Sherwood Foresters (Nottinghamshire and Derbyshire Regiment). Soon promoted to sergeant, he gained his commission as a second lieutenant on 29 October. He was assigned to training recruits, but this rear-echelon role annoyed him. In an attempt to see action, he transferred early the following year to the North Midlands Cyclist Company, Divisional Mounted Troops, but remained confined to a posting in England.
On 24 February 1915, he wrote to his parents, "I have just sent five boys to France, and I hear that they will be in the firing line on Monday. It is just my luck to be unable to go." In March 1915, Ball began a short-lived engagement to Dorothy (Dot) Elbourne. In June, he decided to take private flying lessons at Hendon Aerodrome, which would give him an outlet for his interest in engineering and possibly help him to see action in France sooner. He paid to undertake pilot training in his own time at the Ruffy-Baumann School, which charged £75 to £100 for instruction (£5,580 to £7,440 in 2010 prices). Ball would wake at 3:00 am to ride his motorcycle to Ruffy-Baumann for flying practice at dawn, before beginning his daily military duty at 6:45 am. His training at Ruffy-Baumann was not unique; Edwin Cole was learning to fly there at the same time. In letters home Ball recorded that he found flying "great sport", and displayed what Peter de la Billière described as "almost brutal" detachment regarding accidents suffered by his fellow trainees: > Yesterday a ripping boy had a smash, and when we got up to him he was nearly dead, he had a two-inch piece of wood right through his head and died this morning. If you would like a flight I should be pleased to take you any time you wish. ### Military flight training and reconnaissance work Although considered an average pilot at best by his instructors, Ball qualified for his Royal Aero Club certificate (no. 1898) on 15 October 1915, and promptly requested transfer to the Royal Flying Corps (RFC). He was seconded to No. 9 (Reserve) Squadron RFC on 23 October, and trained at Mousehold Heath aerodrome near Norwich. In the first week of December, he soloed in a Maurice Farman Longhorn after standing duty all night, and his touchdown was rough. When his instructor commented sarcastically on the landing, Ball angrily exclaimed that he had only 15 minutes experience in the plane, and that if this was the best instruction he was going to get, he would rather return to his old unit. The instructor relented, and Ball then soloed again and landed successfully in five consecutive flights. His rough landing was not the last Ball was involved in; he survived two others. He completed his training at Central Flying School, Upavon, and was awarded his wings on 22 January 1916. A week later, he was officially transferred from the North Midlands Cyclist Company to the RFC as a pilot. On 18 February 1916, Ball joined No. 13 Squadron RFC at Marieux in France, flying a two-seat Royal Aircraft Factory B.E.2c on reconnaissance missions. He survived being shot down by anti-aircraft fire on 27 March. Three days later, he fought the first of several combats in the B.E.2; he and his observer, Lieutenant S. A. Villiers, fired a drum and a half of Lewis gun ammunition at an enemy two-seater, but were driven off by a second one. After this inconclusive skirmish, Ball wrote home in one of his many letters, "I like this job, but nerves do not last long, and you soon want a rest". In letters home to his father, he discouraged the idea of his younger brother following him into the RFC. Ball and Villiers tried unsuccessfully to shoot down an enemy observation balloon in their two-seater on 10 April. Ball's burgeoning skills and aggressiveness gained him access to the squadron's single-seat Bristol Scout fighter later that month. April 1916 also saw Ball's first mention in a letter home of plans for "a most wonderful machine ... heaps better than the Hun Fokker". 
It is now generally believed that these "plans" were unconnected with the design of the Austin-Ball A.F.B.1, with which he later became involved. ### Initial fighter posting On 7 May 1916, Ball was posted to No. 11 Squadron, which operated a mix of fighters including Bristol Scouts, Nieuport 16s, and Royal Aircraft Factory F.E.2b "pushers". After his first day of flying with his new unit, he wrote a letter home complaining about fatigue. He was unhappy with the hygiene of his assigned billet in the nearest village, and elected to live in a tent on the flight line. Ball built a hut for himself to replace the tent and cultivated a garden. Throughout his flying service Ball was primarily a "lone-wolf" pilot, stalking his prey from below until he drew close enough to use his top-wing Lewis gun on its Foster mounting, angled to fire upwards into the enemy's fuselage. According to fellow ace and Victoria Cross recipient James McCudden, "it was quite a work of art to pull this gun down and shoot upwards, and at the same time manage one's machine accurately". Ball was as much a loner on the ground as in the air, preferring to stay in his hut on the flight line away from other squadron members. His off-duty hours were spent tending his small garden and practising the violin. Though not unsociable per se, he was extremely sensitive and shy. Ball acted as his own mechanic on his aircraft and, as a consequence, was often untidy and dishevelled. His singularity in dress extended to his habit of flying without a helmet and goggles, and he wore his thick black hair longer than regulations generally permitted. While flying a Bristol Scout on 16 May 1916, Ball scored his first aerial victory, driving down a German reconnaissance aircraft. He then switched to Nieuports, bringing down two LVGs on 29 May and a Fokker Eindecker on 1 June. On 25 June he became a balloon buster and an ace by destroying an observation balloon with phosphor bombs. During the month he had written to his parents admonishing them to try and "take it well" if he was killed, "for men tons better than I go in hundreds every day". He again achieved two victories in one sortie on 2 July, shooting down a Roland C.II and an Aviatik to bring his score to seven. Ball then requested a few days off but, to his dismay, was temporarily reassigned to aerial reconnaissance duty with No. 8 Squadron, where he flew B.E.2s from 18 July until 14 August. During this posting, Ball undertook an unusual mission. On the evening of 28 July, he flew a French espionage agent across enemy lines. Dodging an attack by three German fighters, as well as anti-aircraft fire, he landed in a deserted field, only to find that the agent refused to get out of the aircraft. While he was on reconnaissance duties with No. 8 Squadron, the London Gazette announced that he had been awarded the Military Cross "for conspicuous skill and gallantry on many occasions," particularly for "one occasion [when] he attacked six in one flight". This was not unusual; throughout his career, Ball generally attacked on sight and heedless of the odds. He professed no hatred for his opponents, writing to his parents "I only scrap because it is my duty ... Nothing makes me feel more rotten than to see them go down, but you see it is either them or me, so I must do my duty best to make it a case of them". Ball's 20th birthday was marked by his promotion to temporary captain and his return to No. 11 Squadron. 
He destroyed three Roland C.IIs in one sortie on 22 August 1916, the first RFC pilot to do so. He ended the day by fighting 14 Germans some 15 miles (24 km) behind their lines. With his aircraft badly damaged and out of fuel, he struggled back to Allied lines to land. He transferred with part of No. 11 Squadron to No. 60 Squadron RFC on 23 August. His new commanding officer gave Ball a free rein to fly solo missions, and assigned him his own personal aircraft and maintenance crew. One of the squadron mechanics painted up a non-standard red propeller boss; A201 became the first of a series of Ball's aeroplanes to have such a colour scheme. He found that it helped his fellow squadron members identify his aircraft and confirm his combat claims. By end of the month, he had increased his tally to 17 enemy aircraft, including three on 28 August. Ball then took leave in England. His feats in France had received considerable publicity. He was the first British ace to become a household name, and found that his celebrity was such that he could not walk down the streets of Nottingham without being stopped and congratulated. Prior to this the British government had suppressed the names of its aces—in contrast to the policy of the French and Germans—but the losses of the Battle of the Somme, which had commenced in July, made politic the publicising of its successes in the air. Ball's achievements had a profound impact on budding flyer Mick Mannock, who would become the United Kingdom's top-scoring ace and also receive the Victoria Cross. Upon return to No. 60 Squadron in France, Ball scored morning and evening victories on 15 September, flying two different Nieuports. On the evening mission, he armed his aircraft with eight Le Prieur rockets, fitted to the outer struts and designed to fire electrically. He intended to use them on an observation balloon. As it happened, he spotted three German Roland C.IIs and broke their formation by salvoing his rockets at them, then picked off one of the pilots with machine-gun fire. After this he settled into an improved aeroplane, Nieuport 17 A213. He had it rigged to fly tail-heavy to facilitate his changing of ammunition drums in the machine-gun, and had a holster built into the cockpit for the Colt automatic pistol that he habitually carried. Three times during September he scored triple victories in a day, ending the month with his total score standing at 31, making him Britain's top-scoring ace. By this time he had told his commanding officer that he had to have a rest and that he was taking unnecessary risks because of his nerves. On 3 October, he was sent on leave, en route to a posting at the Home Establishment in England. A French semi-official report of Ball's successes was issued the same day; it was picked up and repeated in the British aviation journal Flight nine days later. ### Home front Ball had been awarded the Distinguished Service Order (DSO) and bar simultaneously on 26 September 1916. The first award was "for conspicuous gallantry and skill" when he took on two enemy formations. The bar was also "for conspicuous skill and gallantry" when he attacked four enemy aircraft in formation and then, on another occasion, 12 enemy machines. He was awarded the Russian Order of St. George the same month. Now that Ball had been posted back to England, he was lionised as a national hero with a reputation as a fearless pilot and expert marksman. A crowd of journalists awaited him on his family's doorstep. 
In an interview, he mentioned being downed six times in combat. On 18 November, he was invested with his Military Cross and both DSOs by King George V at Buckingham Palace. A second bar to the DSO, for taking on three enemy aircraft and shooting one down, followed on 25 November, making him the first three-time recipient of the award. Ball was promoted to the substantive rank of lieutenant on 8 December 1916. Instead of returning to combat after his leave, Ball was posted to instructional duties with No. 34 (Reserve) Squadron RFC, based at Orford Ness, Suffolk. About this time he was debriefed by flying instructor Philip Gribble, who was charged with discovering the tactics of ace fighter pilots; Gribble decided Ball operated on "paramount courage and a bit of luck". Ball asked Gribble to let him try a Bristol Scout, which he landed badly, seriously damaging the undercarriage; Ball asked for another machine to try again, with the same result, after which he consoled himself by eating "seven pounds of chocolate". It was while serving on the home front that he was able to lobby for the building and testing of the Austin-Ball A.F.B.1 fighter. He hoped to be able to take an example of the type to France with him, but the prototype was not completed until after his death in action. In November he was invited to test fly the prototype of the new Royal Aircraft Factory S.E.5 single-seat scout, apparently the first service pilot to do so. He was unimpressed, finding the heavier, more stable fighter less responsive to the controls than the Nieuports he was used to. His negative assessment of other aspects of the S.E.'s performance, on the other hand, contrasted markedly with the reactions of fellow pilots who tested the prototype about this time. Ball was to maintain his opinion of the S.E. as a "dud", at least until he had scored several victories on the type after his return to France. On 19 February 1917, in a tribute from his native city, Ball became an Honorary Freeman of Nottingham. Around this time he met James McCudden, also on leave, who later reported his impressions in most favourable terms. In London, Ball also encountered Canadian pilot Billy Bishop, who had not as yet seen combat. He immediately liked Bishop, and may have helped the latter secure a posting to No. 60 Squadron. On 25 March, while off-duty, Ball met 18-year-old Flora Young. He invited her to fly with him, and she accepted, wearing a leather flying coat that they had borrowed. On 5 April, they became engaged; she wore his silver identification wrist bracelet in lieu of an engagement ring. ### Second fighter posting Inaction chafed Ball, and he began agitating for a return to combat duty. He finally managed to obtain a posting as a flight commander with No. 56 Squadron RFC, considered to be as close to an elite unit as any established by the RFC. Ball was still first among Britain's aces, and some documents hint that his attachment to No. 56 Squadron was planned to be temporary. According to one account he had been slated to serve with the unit for only a month to mentor novice pilots. The latest type from the Royal Aircraft Factory, the S.E.5, had been selected to equip the new squadron. This choice was viewed with some trepidation by the RFC high command, and Ball himself was personally far from happy with the S.E.5. After some intense lobbying he was allowed to retain his Nieuport 17 no. 
B1522 when the unit went to France; the Nieuport was for his solo missions, and he would fly an S.E.5 on patrols with the rest of the squadron. This arrangement had the personal approval of General Hugh Trenchard, who went on to become the first Chief of the Air Staff of the Royal Air Force. No. 56 Squadron moved to the Western Front on 7 April 1917. On arrival Ball wrote to his parents, "Cheero, am just about to start the great game again". S.E.5 no. A4850, fresh from its packing crate, was extensively modified for Ball: in particular he had the synchronised Vickers machine gun removed, to be replaced with a second Lewis gun fitted to fire downwards through the floor of the cockpit. He also had a slightly larger fuel tank installed. On 9 April, A4850 was refitted, and the downward-firing Lewis gun removed and replaced by the normal Vickers gun mounting. In a letter to Flora Young on 18 April, Ball mentioned getting his own hut on the flight line, and installing the members of his flight nearby. On 23 April 1917, Ball was under strict orders to stay over British lines, but still engaged the Germans five times in his Nieuport. In his first combat that day, using his preferred belly shot, he sent an Albatros into a spin, following it down and continuing to fire at it until it struck the ground. It was No. 56 Squadron's first victory. Regaining an altitude of 5,000 feet (1,500 m), he tried to dive underneath an Albatros two-seater and pop up under its belly as usual, but he overshot, and the German rear gunner put a burst of 15 bullets through the Nieuport's wings and spars. Ball coaxed the Nieuport home for repairs, returning to battle in an S.E.5. In his third combat of the day, he fired five rounds before his machine gun jammed. After landing to clear the gun, he took off once more, surprising five Albatros fighters and sending one down in flames. His fifth battle, shortly thereafter, appeared inconclusive, as the enemy plane managed to land safely. However, its observer had been mortally wounded. Three days later, on 26 April, Ball scored another double victory, flying S.E.5 no. A4850, and one more on 28 April. This last day's fighting left the S.E.5 so battered by enemy action that it was dismantled and sent away for repair. The following month, despite continual problems with jamming guns in the S.E.5s, Ball shot down seven Albatroses in five days, including two reconnaissance models on 1 May, a reconnaissance plane and an Albatros D.III fighter on 2 May; a D.III on 4 May, and two D.IIIs the next day, 5 May. The second of these victims nearly rammed Ball as they shot it out in a head-on firing pass. As they sped past one another, Ball was left temporarily blinded by oil spraying from the holed oil tank of his craft. Clearing the oil from his eyes, he flew his S.E.5 home with zero oil pressure in an engine on the brink of seizure. He was so overwrought that it was some time after landing before he could finish thanking God, then dictating his combat report. While squadron armourers and mechanics repaired the faulty machine-gun synchroniser on his most recent S.E.5 mount, A8898, Ball had been sporadically flying the Nieuport again, and was successful with it on 6 May, destroying one more Albatros D.III in an evening flight to raise his tally to 44. He had continued to undertake his habitual lone patrols, but had of late been fortunate to survive. 
The heavier battle damage that Ball's aircraft were now suffering bore witness to the improved team tactics being developed by his German opponents. Some time on 6 May, Ball had visited his friend Billy Bishop at the latter's aerodrome. He proposed that the pair attack the Red Baron's squadron at its airfield at dawn, catching the German pilots off guard. Bishop agreed to take part in the daring scheme at the end of the month, after he returned from his forthcoming leave. That night, in his last letter to his father, Ball wrote "I do get tired of always living to kill, and am really beginning to feel like a murderer. Shall be so pleased when I have finished". ### Final flight and aftermath On the evening of 7 May 1917, near Douai, 11 British aircraft from No. 56 Squadron led by Ball in an S.E.5 encountered German fighters from Jasta 11. A running dogfight in deteriorating visibility resulted, and the aircraft became scattered. Cecil Arthur Lewis, a participant in this fight, described it in his memoir Sagittarius Rising. Ball was last seen by fellow pilots pursuing the red Albatros D.III of the Red Baron's younger brother, Lothar von Richthofen, who eventually landed near Annœullin with a punctured fuel tank. Cyril Crowe observed Ball flying into a dark thundercloud. A German pilot officer on the ground, Lieutenant Hailer, then saw Ball's plane falling upside-down from the bottom of the cloud, at an altitude of 200 feet (61 m), with a dead prop. Brothers Franz and Carl Hailer and the other two men in their party were from a German reconnaissance unit, Flieger-Abteilung A292. Franz Hailer noted, "It was leaving a cloud of black smoke... caused by oil leaking into the cylinders." The engine had to be inverted for this to happen. The Hispano engine was known to flood its inlet manifold with fuel when upside down and then stop running. Franz Hailer and his three companions hurried to the crash site. Ball was already dead when they arrived. The four German airmen agreed that the crashed craft had suffered no battle damage. No bullet wounds were found on Ball's body, even though Hailer went through Ball's clothing to find identification. Hailer also took Ball to a field hospital. A German doctor subsequently described a broken back and a crushed chest, along with fractured limbs, as the cause of death. The Germans credited Richthofen with shooting down Ball, but there is some doubt as to what happened, especially as Richthofen's claim was for a Sopwith Triplane, not an S.E.5, which is a biplane. Given the amount of propaganda the German High Command generated touting the younger Richthofen, a high-level decision may have been taken to attribute Ball's death to him. It is probable that Ball was not shot down at all, but had become disoriented and lost control during his final combat, the victim of a form of temporary vertigo that has claimed other pilots. Ball's squadron harboured hopes that he was a prisoner of war, and the British government officially listed him as "missing" on 18 May. There was much speculation in the press; in France, the Havas news agency reported: "Albert Ball, the star of aviators... has been missing since the 7th May. Is he a prisoner or has he been killed? If he is dead, he died fighting for his forty-fifth victory." It was only at the end of the month that the Germans dropped messages behind Allied lines announcing that Ball was dead, and had been buried in Annoeullin with full military honours two days after he crashed. 
Over the grave of the man they dubbed "the English Richthofen", the Germans erected a cross bearing the inscription Im Luftkampf gefallen für sein Vaterland Engl. Flieger-Hauptmann Albert Ball, Royal Flying Corps ("Fallen in air combat for his fatherland English pilot Captain Albert Ball"). Ball's death was reported worldwide in the press. He was lauded as the "wonder boy of the Flying Corps" in Britain's Weekly Dispatch, the "Ace of English Aces" in Portugal, the "heroe aviador" in South America, and the "super-airman" in France. On 7 June 1917, the London Gazette announced that he had received the Croix de Chevalier, Legion d'Honneur from the French government. The following day, he was awarded the Victoria Cross for his "most conspicuous and consistent bravery" in action from 25 April to 6 May 1917. On 10 June 1917, a memorial service was held for Ball in the centre of Nottingham at St Mary's Church, with large crowds paying tribute as the procession of mourners passed by. Among those attending were Ball's father Albert, Sr. and brother Cyril, now also a pilot in the RFC; his mother Harriett, overwhelmed with grief, was not present. Ball was posthumously promoted to captain on 15 June. His Victoria Cross was presented to his parents by King George V on 22 July 1917. The following year he was awarded a special medal by the Aero Club of America. ### Posthumous tributes In 1918, Walter A. Briscoe and H. Russell Stannard released a seminal biography, Captain Ball VC, reprinting many of Ball's letters and prefaced with encomiums by Prime Minister David Lloyd George, Field Marshal Sir Douglas Haig, and Major General Sir Hugh Trenchard. Lloyd George wrote that "What he says in one of his letters, 'I hate this game, but it is the only thing one must do just now', represents, I believe, the conviction of those vast armies who, realising what is at stake, have risked all and endured all that liberty may be saved". Haig spoke of Ball's "unrivalled courage" and his "example and incentive to those who have taken up his work". In Trenchard's opinion, Ball had "a wonderfully well-balanced brain, and his loss to the Flying Corps was the greatest loss it could sustain at that time". In the book proper, Briscoe and Stannard quote Ball's most notable opponent, Manfred von Richthofen. The Red Baron, who believed in his younger brother's victory award, considered Ball "by far the best English flying man". Elsewhere in the book, an unidentified Royal Flying Corps pilot who flew with Ball in his last engagement was quoted as saying, "I see they have given him the V.C. Of course he won it a dozen times over—the whole squadron knows that." The authors themselves described the story of Ball's life as that of "a young knight of gentle manner who learnt to fly and to kill at a time when all the world was killing... saddened by the great tragedy that had come into the world and made him a terrible instrument of Death". Linda Raine Robertson, in The Dream of Civilised Warfare, noted that Briscoe and Stannard emphasised "the portrait of a boy of energy, pluck, and humility, a loner who placed his skill in the service of his nation, fought—indeed, invited—a personal war, and paid the ultimate sacrifice as a result", and that they "struggle to paste the mask of cheerful boyishness over the signs of the toll taken on him by the stress of air combat and the loss of friends". 
Alan Clark, in Aces High: The War in the Air Over the Western Front, found Ball the "perfect public schoolboy" with "the enthusiasms and all the eager intelligence of that breed" and that these characteristics, coupled with a lack of worldly maturity, were "the ingredients of a perfect killer, where a smooth transition can be made between the motives that drive a boy to 'play hard' at school and then to 'fight hard' against the King's enemies". Biographer Chaz Bowyer considered that "to label Albert Ball a 'killer' would be to do him a grave injustice", as his "sensitive nature suffered in immediate retrospect whenever he succeeded in combat". ## Post-war legacy After the war the British discovered Ball's grave, which had been behind enemy lines, in the Annoeullin Cemetery. In December 1918, personnel of No. 207 Squadron RAF erected a new cross in place of the one left by the Germans. The Imperial War Graves Commission (now Commonwealth War Graves Commission) were working at the time to consolidate the British war graves into fewer cemeteries; 23 British bodies in graves in the location where Ball was buried were moved to the Cabaret Rouge British Cemetery, but at his father's request Ball's grave was allowed to remain. Albert Sr. paid for a private memorial to be erected over Ball's grave, No. 643, in what later became the Annoeullin Communal Cemetery and German Extension. Ball's is the only British grave from the First World War in this extension, the rest being German. Ball's father also bought the French field where his son had died and erected a memorial stone on the crash site. Memorials to Ball in his native Nottingham include a monument and statue in the grounds of Nottingham Castle. The monument, which was commissioned by the city council and funded by public subscription, consists of a bronze group on a carved pedestal of Portland stone and granite. The bronze group, by the sculptor Henry Poole, shows a life-size figure of Ball with an allegorical female figure at his shoulder. The monument was unveiled on 8 September 1921 by Air Marshal Trenchard, with military honours including a flypast by a squadron of RAF aircraft. In 1929 the bronze model for Ball's statue was presented by his father to the National Portrait Gallery in London, where it is on display. In further remembrance of his son, Albert Ball, Sr. commissioned the building of the Albert Ball Memorial Homes in Lenton to house the families of local servicemen killed in action. The Lenton War Memorial, located in front of the homes, includes Ball's name and was also paid for by the Ball family. The homes were Grade-II listed for historic preservation in 1995. A memorial to Ball, along with his parents, and a sister who died in infancy, appears on the exterior wall of the southwest corner of Holy Trinity Church in Lenton. Another memorial tablet is present inside the same church, mounted on the north wall and bearing the RFC and RAF motto Per Ardua ad Astra, along with decorations of medals and royal arms. In 1967, the Albert Ball VC Scholarships were instituted at his alma mater, Trent College. A propeller from one of Ball's aircraft and the original cross from his grave in France are displayed at the college's library and chapel, respectively. One of the houses at Nottingham High's Junior School is also named after Ball. In 2006, Ball was one of six recipients of the Victoria Cross to be featured on a special commemorative edition of Royal Mail stamps marking the 150th anniversary of the award. 
In 2015, Ball was featured on a £5 coin (issued in silver and gold) in a six-coin set commemorating the Centenary of the First World War by the Royal Mint. His Victoria Cross is displayed at the Nottingham Castle Museum along with his other medals and memorabilia, including a bullet-holed Avro windshield, a section of engine piping from one of his damaged Nieuports, his Freedom of Nottingham Scroll and Casket, and various letters and other papers. A portrait study by Noel Denholm Davis is in the collection of Nottingham City Museums and Galleries. ## Award citations - Victoria Cross > Lt. (temp. Capt.) Albert Ball, D.S.O., M.C., late Notts. and Derby. R., and R.F.C. > > For most conspicuous and consistent bravery from the 25th of April to the 6th of May, 1917, during which period Capt. Ball took part in twenty-six combats in the air and destroyed eleven hostile aeroplanes, drove down two out of control, and forced several others to land. > > In these combats Capt. Ball, flying alone, on one occasion fought six hostile machines, twice he fought five and once four. When leading two other British aeroplanes he attacked an enemy formation of eight. On each of these occasions he brought down at least one enemy. > > Several times his aeroplane was badly damaged, once so seriously that but for the most delicate handling his machine would have collapsed, as nearly all the control wires had been shot away. On returning with a damaged machine he had always to be restrained from immediately going out on another. > > In all, Capt. Ball has destroyed forty-three German aeroplanes and one balloon, and has always displayed most exceptional courage, determination and skill. - Distinguished Service Order (DSO) > For conspicuous gallantry and skill. Observing seven enemy machines in formation, he immediately attacked one of them and shot it down at 15 yards range. The remaining machines retired. Immediately afterwards, seeing five more hostile machines, he attacked one at about 10 yards range and shot it down, flames coming out of the fuselage. He then attacked another of the machines, which had been firing at him, and shot it down into a village, when it landed on the top of a house. He then went to the nearest aerodrome for more ammunition, and, returning, attacked three more machines, causing them to dive under control. Being then short of petrol he came home. His own machine was badly shot about in these fights. - Distinguished Service Order (DSO) Bar > For conspicuous skill and gallantry. When on escort duty to a bombing raid he saw four enemy machines in formation. He dived on to them and broke up their formation, and then shot down the nearest one, which fell on its nose. He came down to about 500 feet to make certain it was wrecked. On another occasion, observing 12 enemy machines in formation, he dived in among them, and fired a drum into the nearest machine, which went down out of control. Several more hostile machines then approached, and he fired three more drums at them, driving down another out of control. He then returned, crossing the lines at a low altitude, with his machine very much damaged. - Distinguished Service Order (DSO) Bar > For conspicuous gallantry in action. He attacked three hostile machines and brought one down, displaying great courage and skill. He has brought down eight hostile machines in a short period, and has forced many others to land. 
- Military Cross (MC) > For conspicuous skill and gallantry on many occasions, notably when, after failing to destroy an enemy kite balloon with bombs, he returned for a fresh supply, went back and brought it down in flames. He has done great execution among enemy aeroplanes. On one occasion he attacked six in one flight, forced down two and drove the others off. This occurred several miles over the enemy's lines. ## List of victories Confirmed victories numbered; unconfirmed victories marked "u/c". Except where noted, data from Shores et al.
440,375
Jarome Iginla
1,172,661,944
Canadian ice hockey player (b. 1977)
[ "1977 births", "21st-century Canadian philanthropists", "Art Ross Trophy winners", "Black Canadian ice hockey players", "Boston Bruins players", "Calgary Flames captains", "Calgary Flames players", "Canadian Christians", "Canadian expatriate ice hockey players in the United States", "Canadian ice hockey right wingers", "Canadian people of African-American descent", "Canadian people of Yoruba descent", "Canadian sportspeople of Nigerian descent", "Colorado Avalanche players", "Dallas Stars draft picks", "Hockey Hall of Fame inductees", "Ice hockey people from Calgary", "Ice hockey people from Edmonton", "Ice hockey people from St. Albert, Alberta", "Ice hockey players at the 2002 Winter Olympics", "Ice hockey players at the 2006 Winter Olympics", "Ice hockey players at the 2010 Winter Olympics", "Kamloops Blazers players", "King Clancy Memorial Trophy winners", "Lester B. Pearson Award winners", "Living people", "Los Angeles Kings players", "Medalists at the 2002 Winter Olympics", "Medalists at the 2010 Winter Olympics", "National Hockey League All-Stars", "National Hockey League first-round draft picks", "Olympic gold medalists for Canada", "Olympic ice hockey players for Canada", "Olympic medalists in ice hockey", "Pittsburgh Penguins players", "Rocket Richard Trophy winners", "Yoruba sportspeople" ]
Jarome Arthur-Leigh Adekunle Tig Junior Elvis Iginla (/dʒəˈroʊm ɪˈɡɪnlə/; born July 1, 1977) is a Canadian former professional ice hockey winger. He played over 1,500 games in the National Hockey League (NHL) for the Calgary Flames, Pittsburgh Penguins, Boston Bruins, Colorado Avalanche and Los Angeles Kings between 1996 and 2017. He is widely regarded as one of the best players of his generation. In junior, Iginla was a member of two Memorial Cup winning teams with the Kamloops Blazers and was named the Western Hockey League (WHL) Player of the Year in 1996. He was selected 11th overall by the Dallas Stars in the 1995 NHL Entry Draft but was traded to Calgary before making his NHL debut. Nicknamed "Iggy", he led the NHL in goals and points in 2001–02, and won the Lester B. Pearson Award as its most valuable player as voted by the players. In 2003–04, Iginla led the league in goals for the second time and captained the Flames to the Stanley Cup Finals, leading the playoffs in goals. A six-time NHL All-Star, Iginla is the Flames' all-time leader in goals, points, and games played, and is second in assists to Al MacInnis. Iginla scored 50 goals in a season on two occasions and is one of seven players in NHL history to score 30 goals in 11 consecutive seasons. He is one of 20 players in NHL history to score over 600 goals and is one of 34 players to record 1,300 points in his career. He is a past winner of the Mark Messier Leadership Award and has been recognized by both the Flames and the league for his community work; while a member of the Flames, Iginla donated \$2,000 to the children's charity Kidsport for every goal he scored. His number 12 was retired by the Flames during a pre-game ceremony on March 2, 2019. Internationally, Iginla has represented Canada on numerous occasions. He was a member of championship teams at the 1996 World Junior and 1997 World Championships as well as the 2004 World Cup of Hockey. He is a three-time Olympian and two-time gold medal winner, including at the 2002 Winter Olympics where he helped lead Canada to its first Olympic hockey championship in 50 years. Iginla was selected for the Hockey Hall of Fame in 2020, during his first year of eligibility. Iginla is the fourth Black player inducted after Grant Fuhr, women's hockey pioneer Angela James, and Willie O'Ree. ## Early life Iginla was born in Edmonton, Alberta, and raised in the adjoining city of St. Albert. His father, a lawyer, was originally from Nigeria and changed his first name from Adekunle to Elvis when he arrived in Canada. His surname means "big tree" in Yoruba, his father's native language. Iginla's mother, Susan Schuchard, is originally from Oregon, and has worked as a massage therapist and music teacher. Iginla grew up with his mother and grandparents after his parents divorced when he was a year old. In addition to hockey, Iginla played baseball as a young man and was the catcher on the Canadian national junior team. Before hockey, baseball was Iginla's favourite sport and his earliest sports memories were of attending amateur baseball tournaments in Western Canada. He played baseball until he was about 17 years old and later in life told Sports Illustrated that he had hoped to become a two-sport professional athlete like Bo Jackson. Iginla credits his grandfather for his hockey career, as with his mother working and father attending law school, he would not have had the opportunity to play sports at a high level if not for his grandfather's support. 
Iginla grew up admiring other Black hockey players, including Edmonton Oilers goaltender Grant Fuhr. Emulating Fuhr, Iginla played goaltender in his first two years of organized hockey before switching to the right wing. He played his entire minor hockey career in St. Albert, leading the Alberta Midget Hockey League in scoring as a 15-year-old with 87 points for the St. Albert Midget Raiders in 1992–93. ## Playing career ### Junior Iginla played three years with the Kamloops Blazers of the Western Hockey League (WHL). As a 16-year-old in 1993–94, he recorded six goals and 29 points in 48 regular season games before playing an additional 19 in the playoffs. The Blazers captured both the league title and the 1994 Memorial Cup, Canada's national junior championship. In reference to the Blazers' dominance of the league at the time (they had won their third WHL title in five seasons), Iginla described the expectations of success as being similar to those placed on the Montreal Canadiens, the NHL's most successful franchise: "When you put on a Blazers jersey, it's like putting on the Canadians'. You've got to perform." Iginla scored 33 goals and 71 points in 1994–95, his first full WHL season. The Blazers repeated as league champions, earning a trip to the 1995 Memorial Cup. Iginla scored five goals in the tournament to lead the Blazers to a second consecutive national championship. He received the George Parsons Trophy as the most sportsmanlike player of the tournament. The Dallas Stars selected Iginla with their first pick, 11th overall, in the 1995 NHL Entry Draft; however, on December 20, 1995, they traded him to the Calgary Flames, along with Corey Millen, for the rights to forward Joe Nieuwendyk, who was then in a contract dispute with the Flames. In his final season in Kamloops in 1995–96, Iginla finished fourth in the league scoring 136 points, including 63 goals in 63 games played, and was awarded the Four Broncos Memorial Trophy as the league's most outstanding player. The Blazers were upset in the Western Conference Final by the Spokane Chiefs, but Iginla still finished fourth in playoff scoring, recording 29 points in 16 games. His performance during the season earned him an invitation to play for Team Canada at the 1996 World Junior Ice Hockey Championships in Boston, where he led the tournament in scoring with 12 points and helped Canada to its fourth consecutive gold medal. ### Calgary Flames Iginla made his NHL debut in the 1996 Stanley Cup playoffs, as he was signed to a contract and flown to Calgary immediately after his junior season ended in Kamloops. He appeared in two games for the Flames in their series against the Chicago Blackhawks. In doing so, he became the first 18-year-old to play for the Flames since Dan Quinn in 1983. In his first NHL game, Iginla assisted on a Theoren Fleury goal to record his first point; he scored his first goal in his second game. He remained with the Flames, and played his first NHL season in 1996–97. He earned a spot on that year's NHL All-Rookie Team and finished as the runner-up to Bryan Berard in voting for the Calder Memorial Trophy as rookie of the year after leading all first-year players in scoring with 50 points. By his third season, 1998–99, Iginla led the Flames in goals with 28. His success complicated negotiations for a new contract, as he and the Flames struggled to agree on a new deal following the season. 
Hoping to help resolve the contract impasse, Iginla agreed to attend training camp without a contract and purchased his insurance as the team would not have been responsible financially if he suffered an injury. He remained without a contract at the start of the 1999–2000 season and missed the first three games as a holdout before signing a three-year deal worth US\$4.9 million, plus bonuses. He finished the year with career highs in goals (29) and points (63). He then topped both marks in 2000–01 by recording 31 goals and 71 points. After participating in Canada's Olympic summer camp before the season, Iginla again set new personal highs in 2001–02 when he registered 52 goals and 96 points. This season elevated Iginla to superstar status. He earned the Art Ross and Maurice Richard trophies as the NHL's leading point and goal scorer, respectively. He was also awarded the Lester B. Pearson Award as the league's most valuable player as voted by his peers, and was a nominee for both the Hart Memorial Trophy and the King Clancy Memorial Trophy. The Hart Trophy voting proved to be controversial: Iginla tied Canadiens goaltender José Théodore in voting points, but received fewer first-place votes than Théodore. However, one voter rumoured to be from Quebec—Théodore and the Canadiens' home province—inexplicably left Iginla off his ballot. As a result of the controversy that followed, the Professional Hockey Writers Association changed the rules on how its members voted for the award to prevent a recurrence. There were fears Iginla would again hold out after his contract expired following the season. They were unfounded, however, as he signed a two-year, \$13 million deal before the season and was looked on to again lead the Flames offensively. Iginla fell back to 67 points in 2002–03 as injuries, including a lingering finger dislocation following a fight, diminished his play. His 35 goals were still enough to lead the Flames for the fourth time in five seasons. Despite his offensive contributions, the Flames missed the playoffs. #### Flames captaincy At the start of the 2003–04 season, Iginla was named the 18th captain in Flames franchise history, and 14th since the team moved to Calgary from Atlanta in 1980. His predecessor as captain, Craig Conroy, cited Iginla's experience and leadership for his decision to relinquish the captaincy. "He was a leader on that team and old enough to where he'd been there a long time. It was time for him. He took us to the Stanley Cup Finals that year so it worked out pretty well." Iginla was reported to be the first black captain in NHL history, though former Blackhawks captain Dirk Graham, who is of African descent, has also been said to hold that honour. Iginla responded to being named captain by capturing his second Rocket Richard Trophy, sharing the goal-scoring title with Ilya Kovalchuk and Rick Nash with 41 goals. The Flames qualified for the 2004 Stanley Cup playoffs as the sixth seed, the team's first playoff appearance in eight years. Iginla led all playoff scorers with 13 goals as he captained the Flames to their first Stanley Cup Finals appearance in 15 years. The Flames were unable to defeat the Tampa Bay Lightning, however, falling to the Eastern Conference champions in seven games. A dejected Iginla sat in the Flames locker room after the final game and was met by his father, who told his son that "I'm proud of you. All of Canada is proud of you." 
While he was hailed as the best player in the world following his performance in the playoffs, Iginla spent the 2004–05 NHL lock-out focused on improving his game further. Following the lock-out, he was named as one of six player representatives on the newly created NHL competition committee, with a mandate to suggest recommendations for ways to improve the game. He held this position until early 2008. On December 7, 2006, Iginla reached career milestones when he scored his 300th career goal and 600th career point against the Minnesota Wild. He was expected to play in the 2007 NHL All-Star Game in Dallas; however, he missed the game with a knee injury. The injury kept him out of 12 games in 2006–07. He nevertheless scored 94 points, including a career-high 55 assists. The 2007–08 season saw Iginla post his second career 50-goal season, adding 48 assists for a career-high 98 points, good for third overall in the league. He was voted to the starting line-up of the 2008 NHL All-Star Game along with teammate Dion Phaneuf, and was named captain of the Western All-Star team. He broke the Flames' franchise record for games played when he played his 804th career game on November 29, 2007, against the Anaheim Ducks. He also broke Theoren Fleury's franchise record for goals when he scored his 365th on March 10, 2008, against the St. Louis Blues. Iginla was nominated as a Hart Trophy finalist for league most valuable player for the third time, though he again did not win the award. During the season, he signed a five-year contract extension with the Flames at \$7 million per season. Iginla continued his pursuit of Fleury's franchise record of 830 points in 2008–09. He recorded his 800th point with a first-period assist against the Chicago Blackhawks on December 19, 2008. He ended 2008 with a career-high five points in a New Year's Eve game against the Edmonton Oilers, having previously recorded 14 four-point games. In January, he was named to the 2009 NHL All-Star Game in Montreal, his fifth such selection. Representing the Western Conference, Iginla scored his first career NHL All-Star Game goal in a 12–11 shootout loss. He passed Fleury as the Flames' all-time scoring leader on March 1, 2009, by recording five points, including his 400th career goal, in an 8–6 loss to the Lightning. He finished the season with 35 goals and 89 points, but a disappointing playoff performance led to questions of whether he had been playing with an injury. Iginla quickly denied the rumour, admitted that he had not played with the level of consistency he expected and stated that he would spend the summer focused on improving his play in 2009–10. #### Milestones The Flames struggled in 2009–10, failing to qualify for the playoffs for the first time since 2003. Iginla accepted responsibility for the team's failure, admitting that finishing around 70 points for the season was "not enough". The Flames' declining fortunes and Iginla's season led to increasing questions about whether he could be traded from the team with which he had played his entire NHL career. Iginla, who would have to approve any trade the team attempted to make due to a no-movement clause in his contract, expressed that he did not wish to leave Calgary, but would accommodate a trade if the Flames wished to do so. Former Flames general manager Craig Button argued against trading Iginla, blaming a lack of complementary players for both Iginla's and Calgary's failures: "There's nothing easier in hockey than to be able to shut down one player. 
And the Calgary Flames, I would argue, have made it really easy for teams to shut down Jarome." The Flames publicly stated that they had no plans to trade him. Individually, Iginla reached 900 career points in a two-goal, two-assist effort against the Oilers on January 30, 2010. Six nights later, he played his 1,000th career game against the Florida Panthers. Iginla struggled offensively to begin the 2010–11 season, and with the Flames falling to the bottom of the standings, there was renewed speculation over his future in Calgary. Team management repeatedly reiterated that they were not interested in moving him to another team. Improving his game as the season wore on, Iginla reached another personal milestone, recording his 500th career assist on January 11, 2011, the same day he was named to play in his sixth All-Star Game. He announced several days later that he had declined to play in the All-Star Game as he wished to spend the time with his ailing grandmother. Iginla scored his 30th goal of the season on a penalty shot against the Nashville Predators on March 6, 2011, and in doing so became the 10th player in NHL history to score at least 30 goals in ten consecutive seasons. A month later, he scored his 1,000th career point, notching the game-winning goal against the St. Louis Blues in a 3–2 win on April 1, 2011. Iginla scored his 500th goal on January 7, 2012, against Niklas Bäckström of the Minnesota Wild in a 3–1 victory. He was the 42nd player in league history to achieve the feat, and the 15th to do so with one organization. Midway through the 2011–12 season, Iginla was named an All-Star for the seventh time in his career (the sixth played), representing the Flames at the 2012 All-Star Game. Iginla scored his 30th goal of the 2011–12 season in a 3–2 win against goaltender Antti Niemi of the San Jose Sharks on March 13, 2012. He is the seventh player in league history to score 30 goals in 11 consecutive seasons. ### Pittsburgh and Boston Playing the final year of his contract in 2012–13 and with the team languishing near the bottom of the NHL standings, speculation about Iginla's future in Calgary was again raised as the April 3, 2013, trade deadline neared. National media outlets reported that Iginla, who had a clause in his contract preventing the Flames from moving him to another team without his permission, had given the organization a list of four teams he would be willing to accept a trade with: the Chicago Blackhawks, Los Angeles Kings, Boston Bruins, or Pittsburgh Penguins. Those four teams had won the last four championships and all four would go on to make the conference finals that season. The Bruins were considered the leading contender to acquire Iginla's services, and after he was held out of the line-up of Calgary's March 27, 2013, game against the Colorado Avalanche, it was reported that a trade between the two teams had been completed. Instead, Iginla's 16-year career in Calgary ended when he was sent to the Penguins in exchange for Pittsburgh's first-round selection at the 2013 NHL Entry Draft and college prospects Kenny Agostino and Ben Hanowski. Iginla stated that the chance to play with Sidney Crosby and Evgeni Malkin was a factor in his decision to move to the Penguins. The Bruins and Penguins met in the 2013 Eastern Conference Finals. Despite having the top-scoring offence in the league, the Penguins lost the series without winning a game. Iginla, along with Crosby, Malkin, James Neal and Kris Letang, registered a combined 0 points in the series. 
Iginla was moved to the third line after a 6–1 loss in Game 2. Bruins forward Milan Lucic said after the series that Iginla's spurning of Boston helped fuel the series sweep: "When a guy chooses another team over your team, it does light a little bit of a fire underneath you." As a free agent following the season, Iginla chose to go to Boston and signed a one-year, \$6 million contract with the Bruins. He required nine games before scoring his first goal as a Bruin, as part of a 2–1 win over the San Jose Sharks, but later settled in on Boston's first line with Milan Lucic and David Krejčí. He made his first return to Calgary on December 10, 2013, where the fans greeted him with a long standing ovation prior to the game as the Flames played a video tribute. Following the contest, a 2–1 Bruins victory, Iginla was named the game's third star and took two laps around the rink to more cheers from the crowd. He recorded his 600th career assist in a 3–1 victory over the Vancouver Canucks on February 4, 2014. ### Colorado and Los Angeles Salary cap constraints prevented the Bruins from re-signing Iginla. Consequently, he left the team as a free agent and signed a three-year, \$16 million contract with the Colorado Avalanche. The Avalanche disappointed in 2014–15; by mid-February, they stood in last place in the Central Division, though Iginla himself was among the team's leading scorers. He led the team with 29 goals; however, the Avalanche failed to qualify for the playoffs. On January 4, 2016, Iginla became the 19th player in NHL history to score 600 career goals. His milestone marker came in a 4–1 victory over the Los Angeles Kings. On December 10, 2016, Iginla played in his 1,500th NHL game, a 10–1 loss to the Montreal Canadiens. He is the 16th player to reach this milestone. On March 1, 2017, Iginla was traded to the Los Angeles Kings for a 2018 conditional fourth-round pick. He chose to wear the number 88, as number 12 was already taken by Marián Gáborík. As a 10-year-old, Iginla had purchased a Kings jersey and placed his name and the number 88 on the back after Wayne Gretzky was traded to the team. Kings general manager Dean Lombardi hoped a fresh start would reignite Iginla's play after his time with a struggling team in Colorado. Iginla was not re-signed by the Kings for the 2017–18 season. It was reported that he had hip surgery in the autumn of 2017; interviewed in February 2018 during a practice he took part in with the Providence Bruins, he said he still hoped to return to the NHL. On July 30, 2018, Iginla announced his retirement. On June 24, 2020, Iginla was selected for the Hockey Hall of Fame, in his first year of eligibility. ## International play Iginla first represented Canada at the 1994 Nations Cup, an unsanctioned tournament for players under the age of 18. He led Canada in scoring with five goals and nine points as it won the gold medal. Two years later, he joined the national junior team at the 1996 World Junior Ice Hockey Championships. He led the tournament in scoring with five goals and 12 points as Canada won its fourth consecutive gold medal. He was named an all-star and the tournament's top forward. One year later, Iginla played in his first tournament with the senior team, competing at the 1997 World Championships as a 19-year-old, the youngest player on the team. He recorded two goals and three assists in 11 games as Canada won the gold medal. 
A late invitation to join Team Canada's summer camp in preparation for the 2002 Winter Olympics helped Iginla emerge as a star player. He was so surprised by the invitation that he initially thought one of his Calgary Flames teammates was playing a prank on him. He scored two goals in the gold medal game, a 5–2 victory over the United States, as Canada won its first Olympic gold medal in 50 years. With this win, Iginla became the first Black man to win a gold medal at the Winter Olympics. Iginla also represented Canada at the 2004 World Cup of Hockey as an alternate captain, playing on a line with Joe Sakic and Mario Lemieux. Canada won the gold medal. Iginla participated in his second Olympics and was an alternate captain at the 2006 Turin Games, recording three points in six games. The Canadians were unable to defend their 2002 gold medal, losing to Russia in the quarter-finals. Named an alternate captain once again for the 2010 team in Vancouver, he opened the tournament with a hat trick against Norway. He finished as the tournament leader with five goals, and assisted on Sidney Crosby's overtime-winning goal in the gold medal final against the United States. ## Playing style In his prime, Iginla was considered to be one of the NHL's most prominent power forwards. Upon entering the league, he tried to emulate players like Brendan Shanahan and Keith Tkachuk, hoping to match their combination of finesse and physicality. He was one of the most consistent scorers in the league; between 1998 and 2008, only Jaromír Jágr scored more NHL goals than Iginla. Even so, scouting reports have argued that Iginla's lack of speed makes it easier for opponents to isolate him and restrict his ability to move if his teammates rely on him too much to lead the offence. The abuse he faced at the hands of opponents early in his NHL career prompted Iginla's coaches to work at developing his physical play. While he was not enthusiastic about fighting, Iginla accepted then-head coach Brian Sutter's arguments that he needed to adopt a more aggressive style to improve as a player. Iginla was most effective when he had room to manoeuvre, and to create that space, he had to intimidate his opponents. "You've got a power forward who does it all," said Craig Conroy. "I mean, he'll fight, and hit, and score goals. Maybe it's not the end-to-end rushes, but he does all those little things that win games and get things done." His opponents also respect his play. Rob Blake said that while Iginla is not known for fancy play, "he'll run you over. Or he'll fight somebody. And then he'll score a goal. He does pretty much everything you'd want a guy to do." Iginla recorded several Gordie Howe hat tricks – a fight, a goal and an assist in the same game – and, since it is not an official statistic, The Hockey News estimated that as of 2012 he was the active leader with nine. His fights, including one with Tampa Bay Lightning star Vincent Lecavalier in the 2004 Stanley Cup Finals, have had a motivating effect on his play and that of his teammates. Iginla suffered injuries as a result of his fighting, including a broken hand from a 2003 fight with Bill Guerin of the Dallas Stars. Iginla's truculent style of play gained approval from hockey commentator Don Cherry. In 2008, during a ceremonial handshake that Iginla initiated with Trevor Linden following Linden's final game, Linden told Iginla he was the best player in the game at that time. He commands the respect of his peers and has been known to stand up to the coaching staff to defend a fellow player. 
Former teammate Andrew Ference — a former Bruins player himself, before Iginla's arrival on the Boston team's roster — once described following Iginla as like "following a friend." Preferring to lead by example, Iginla is not regarded as a vocal captain. He likes to speak with players individually and tries to ensure that all of his teammates are comfortable. He was named the recipient of the Mark Messier Leadership Award in 2009. ## Personal life Iginla married his high school sweetheart, Kara, and the couple have three children: daughter Jade and sons Tij and Joe. They had been dating since they were in grade eight. His daughter Jade attended and played hockey for Shattuck-Saint Mary's and Kelowna's RINK Hockey Academy, before attending Brown University and playing for the Brown Bears in the NCAA. Internationally she has played for Team Canada. Tij was taken by the Seattle Thunderbirds in the first round of the 2021 WHL Bantam Draft, and will make his debut in the 2021–22 season. Joe played hockey for Kelowna’s RINK Hockey Academy, with Jarome as his head coach, and was taken by the Edmonton Oil Kings in the first round of the 2023 WHL Bantam Draft. Iginla has four paternal half-siblings; two brothers, Jason and Stephen, and two sisters, Theresa and Elizabeth. Theresa played for the University of Saskatchewan Huskies women's hockey team for three seasons from 2004 to 2007. Jarome is an avid golfer and a regular participant in the Calgary Flames Celebrity Charity Golf Classic. Iginla is a Christian. He has spoken about his faith in Jesus by saying, “I believe He died for us, and I believe He’s there for us and we can lean on Him. And I do.” Iginla is well known for his kind-hearted nature. Former Flames General manager Craig Button described Iginla as being grounded: "He doesn't carry himself with any attitude or arrogance. He's confident in his abilities. He's self-assured. He's genuine. He's a better person than he is a player, and we all know what kind of player he is." In 2002, while in Salt Lake City for the Winter Olympic Games, Iginla struck up a conversation with four Calgarians sitting next to his table and found out they were sleeping in their car outside of the hotel. He excused himself from the conversation, and booked them accommodations at his own expense at the hotel his family was staying in. Since 2002, he has operated the Jarome Iginla Hockey School in Calgary as a non-profit organization, donating proceeds to the Diabetes Research Association. In 2004, he was awarded the NHL Foundation Player Award for his community service and the King Clancy Memorial Trophy in recognition of his humanitarian contributions. Iginla supports many charities. In 2000, he began donating \$1,000 per goal he scored to KidSport, a figure he doubled to \$2,000 in 2005. Between 2000 and 2013, he donated more than \$700,000 from this initiative. Iginla is a part owner of the Kamloops Blazers of the Western Hockey League, for whom he played during his junior hockey days. He purchased a minority share in the franchise, along with fellow NHL players Shane Doan, Mark Recchi and Darryl Sydor, in October 2007. He is also an ambassador with the NHL Diversity program, which supports youth hockey organizations that offer economically disadvantaged kids the opportunity to play. Since 2008, he has been a hockey spokesperson for Scotiabank, appearing in commercials and at events supporting its grassroots hockey programs, as well as for Samsung Canada. 
He was the cover athlete and spokesperson for the EA Sports video game NHL 2003. Since retiring, Iginla has resided in Chestnut Hill, Massachusetts, and Kelowna, British Columbia. ## Career statistics ### Regular season and playoffs ### International ## Awards and honours ## See also - List of Black NHL players - List of NHL statistical leaders
1,396,819
Siege of Calais (1346–1347)
1,173,430,893
Siege by King Edward III during the Hundred Years' War
[ "1346 in England", "1346 in France", "1347 in England", "1347 in France", "Battles in Hauts-de-France", "Cannibalism in Europe", "Conflicts in 1346", "Conflicts in 1347", "Edward III of England", "History of Calais", "Hundred Years' War, 1337–1360", "Incidents of cannibalism", "Military history of the Pas-de-Calais", "Sieges involving England", "Sieges involving France", "Sieges of the Hundred Years' War" ]
The siege of Calais (4 September 1346 – 3 August 1347) occurred at the conclusion of the Crécy campaign, when an English army under the command of King Edward III of England successfully besieged the French town of Calais during the Edwardian phase of the Hundred Years' War. The English army of some 10,000 men had landed in northern Normandy on 12 July 1346. They embarked on a large-scale raid, or chevauchée, devastating large parts of northern France. On 26 August 1346, fighting on ground of their own choosing, the English inflicted a heavy defeat on a large French army led by their king Philip VI at the Battle of Crécy. A week later the English invested the well-fortified port of Calais, which had a strong garrison under the command of Jean de Vienne. Edward made several unsuccessful attempts to breach the walls or to take the town by assault, either from the land or seaward sides. During the winter and spring the French were able to run in supplies and reinforcements by sea, but in late April the English established a fortification which enabled them to command the entrance to the harbour and cut off the further flow of supplies. On 25 June Jean de Vienne wrote to Philip stating that their food was exhausted. On 17 July Philip marched north with an army estimated at between 15,000 and 20,000 men. Confronted with a well-entrenched English and Flemish force of more than 50,000, he withdrew. On 3 August Calais capitulated. It provided the English with an important strategic lodgement for the remainder of the Hundred Years' War and beyond. The port was not recaptured by the French until 1558. ## Background Since the Norman Conquest of 1066, English monarchs had held titles and lands within France, the possession of which made them vassals of the kings of France. The status of the English king's French fiefs was a major source of conflict between the two monarchies throughout the Middle Ages. French monarchs systematically sought to check the growth of English power, stripping away lands as the opportunity arose. Over the centuries, English holdings in France had varied in size, but by 1337 only Gascony in south-western France was left. The Gascons preferred their relationship with a distant English king who left them alone, to one with a French king who would interfere in their affairs. Following a series of disagreements between Philip VI of France (r. 1328–1350) and Edward III of England (r. 1327–1377), on 24 May 1337 Philip's Great Council in Paris agreed that Gascony and Ponthieu should be taken back into Philip's hands on the grounds that Edward was in breach of his obligations as a vassal. This marked the start of the Hundred Years' War, which was to last 116 years. ## Prelude Although Gascony was the cause of the war, Edward was able to spare few resources for it; whenever an English army had campaigned on the continent, it had operated in northern France. In 1346 Edward raised an army in England and the largest fleet ever assembled by the English to that date, 747 ships. The fleet landed on 12 July at St. Vaast la Hogue, 20 miles (32 km) from Cherbourg. The English army is estimated by modern historians to have been some 10,000 strong, and consisted of English and Welsh soldiers and a small number of German and Breton mercenaries and allies. The English achieved complete strategic surprise and marched south. Edward's aim was to conduct a chevauchée, a large-scale raid, across French territory to reduce his opponent's morale and wealth. 
His soldiers razed every town in their path and looted whatever they could from the populace. The English fleet paralleled the army's route and landing parties devastated the country for up to 5 miles (8 km) inland, taking vast amounts of loot; after their crews filled their holds, many ships deserted. They also captured or burnt more than 100 French ships; 61 of these had been converted into military vessels. Caen, the cultural, political, religious and financial centre of north-west Normandy, was stormed on 26 July. Most of the population was massacred, there was an orgy of drunken rape and the city was sacked for five days. The English army marched out towards the River Seine on 1 August. They devastated the country to the suburbs of Rouen before leaving a swath of destruction, rapine and slaughter along the left bank of the Seine to Poissy, 20 miles (32 km) from Paris. Duke John of Normandy, Philip's oldest son and heir, had been in charge of France's main army, campaigning in the English occupied province of Gascony in south-west France; Philip ordered him north, to reinforce the army facing Edward. Meanwhile, the English had turned north and become trapped in territory which the French had denuded of food. They escaped by fighting their way across the Somme against a French blocking force. Two days later, on 26 August 1346, fighting on ground of their own choosing, the English inflicted a heavy defeat on the French at the Battle of Crécy. ## Siege After resting for two days and burying the dead, the English, requiring supplies and reinforcements, marched north. They continued to devastate the land, and razed several towns, including Wissant, the normal port of disembarkation for English shipping to north-east France. Outside the burning town Edward held a council, which decided to capture Calais. The city was an ideal entrepôt from an English point of view, and close to the border of Flanders and Edward's Flemish allies. The English arrived outside the town on 4 September and besieged it. Calais was strongly fortified: it boasted a double moat, substantial city walls, and its citadel in the north-west corner had its own moat and additional fortifications. It was surrounded by extensive marshes, some of them tidal, making it difficult to find stable platforms for trebuchets and other artillery, or to mine the walls. It was adequately garrisoned and provisioned, and was under the command of the experienced Jean de Vienne. It could be readily reinforced and supplied by sea. The day after the siege commenced, English ships arrived offshore and resupplied, re-equipped and reinforced the English army. The English settled down for a lengthy stay, establishing a thriving camp to the west, Nouville, or "New Town", with two market days each week. A major victualling operation drew on sources throughout England and Wales to supply the besiegers, as well as overland from nearby Flanders. A total of 853 ships, crewed by 24,000 sailors, were involved over the course of the siege; an unprecedented effort. Wearied by nine years of war, Parliament grudgingly agreed to fund the siege. Edward declared it a matter of honour and avowed his intent to remain until the town fell. Two cardinals acting as emissaries from Pope Clement VI, who had been unsuccessfully attempting to negotiate a halt to hostilities since July 1346, continued to travel between the armies, but neither king would speak to them. 
### French disorder Philip vacillated: on the day the siege of Calais began he disbanded most of his army to save money, convinced that Edward had finished his chevauchée and would proceed to Flanders and ship his army home. On or shortly after 7 September, Duke John made contact with Philip, having already disbanded his own army. On 9 September Philip announced that the army would reassemble at Compiègne on 1 October, an impossibly short interval, and then march to the relief of Calais. Among other consequences, this equivocation allowed the English forces in the south-west, under the Duke of Lancaster, to launch offensives into Quercy and the Bazadais, and to launch a major raid 160 miles (260 km) north through Saintonge, Aunis and Poitou, capturing numerous towns, castles and smaller fortified places and storming the rich city of Poitiers. These offensives completely disrupted the French defences and shifted the focus of the fighting from the heart of Gascony to 60 miles (97 km) or more beyond its borders. Few French troops had arrived at Compiègne by 1 October and as Philip and his court waited for the numbers to swell, news of Lancaster's conquests came in. It was believed that Lancaster was heading for Paris, and in order to block this the French changed the assembly point for any men not already committed to Compiègne to Orléans, and reinforced them with some of those already mustered. After Lancaster turned south to head back to Gascony, those Frenchmen already at or heading towards Orléans were redirected to Compiègne; French planning collapsed into chaos. Since June Philip had been calling on the Scots to fulfil their obligation under the terms of the Auld Alliance and invade England. The Scottish king, David II, convinced that English strength was focused entirely on France, obliged on 7 October. He was brought to battle at Neville's Cross on 17 October by a smaller English force raised exclusively from the northern English counties. The battle ended with the rout of the Scots, the capture of their king and the death or capture of most of their leadership. Strategically this freed English resources for the war against France, and the English border counties were able to guard against the remaining Scottish threat from their own resources. Even though only 3,000 men-at-arms had assembled at Compiègne, the French treasurer was unable to pay them. Philip cancelled all offensive arrangements on 27 October and dispersed his army. Recriminations were rife: the Marshal of France, Charles de Montmorency, was sacked; officials at all levels of the Chambre des Comptes (the French treasury) were dismissed; all financial affairs were put into the hands of a committee of three senior abbots; the King's council bent their efforts to blaming each other for the kingdom's misfortunes; Duke John fell out with his father and refused to attend court for several months; Joan of Navarre, daughter of an earlier king of France (Louis X) and previously a staunch supporter of Philip, declared neutrality, signed a private truce with Lancaster, and denied Philip access to Navarrese fortifications – Philip was considerably chagrined, but unable to counter this. ### Military operations During the winter of 1346–47 the English army shrank, possibly to as few as 5,000 men at some points. This was due to: many soldiers' terms of service expiring; a deliberate reduction by Edward for reasons of economy; an outbreak of dysentery in Nouville which caused major loss of life; and widespread desertion. 
Despite his reduced numbers, between mid-November and late February Edward made several attempts to breach the walls with trebuchets or cannon, or to take the town by assault, either from the land or seaward sides; all were unsuccessful. During the winter the French made great efforts to strengthen their naval resources. This included French and mercenary Italian galleys and French merchant ships, many adapted for military use. During March and April, more than 1,000 long tons (1,000 t) of supplies were run into Calais without opposition. Philip attempted to take the field with his army in late April, but the French ability to assemble in a timely fashion had not improved since the autumn and by July it had still not fully mustered. Taxes proved ever more difficult to collect, with many towns using all available funds to reinforce their walls or equip their militia, and much of the nobility crippled by debt they had accumulated paying for the previous nine years of war. Several French nobles suggested to Edward that they may switch their allegiance. Inconclusive fighting occurred in April and May: the French tried and failed to cut the English supply route to Flanders, while the English tried and failed to capture Saint-Omer and Lille. In June the French attempted to secure their flank by launching a major offensive against the Flemings; this was defeated at Cassel. Early in 1347 Edward took steps to substantially increase the size of his army; in large part he was able to do this because the Scottish army's threat to the north of England and the French navy's threat to the south were much reduced. It is known, for example, that he ordered the recruitment of 7,200 archers; this is nearly as many men as the entire invasion force of the previous year. In late April the English established a fortification on the end of the spit of sand to the north of Calais, which enabled them to command the entrance to the harbour and prevent any further supplies reaching the garrison. In May, June and July the French attempted to force convoys through, unsuccessfully. On 25 June the commander of the Calais garrison wrote to Philip stating that their food was exhausted and suggesting that they may have to resort to cannibalism. Despite increasing financial difficulties, the English steadily reinforced their army through 1347, reaching a peak strength of 32,000; the largest English army to be deployed overseas prior to 1600. 20,000 Flemings were gathered within a day's march of Calais. English shipping ran an effective ferry service to the siege from June 1347, bringing in supplies, equipment and reinforcements. On 17 July Philip led the French army north. Alerted to this, Edward called the Flemings to Calais. On 27 July the French came within view of the town, 6 miles (10 km) away. Their army was between 15,000 and 20,000 strong; a third of the size of the English and their allies, who had prepared earthworks and palisades across every approach. The English position was clearly unassailable. In an attempt to save face, Philip now admitted the Pope's emissaries to an audience. They in turn arranged talks, but after four days of wrangling these came to nothing. On 1 August the garrison of Calais, having observed the French army seemingly within reach for a week, signalled that they were on the verge of surrender. That night the French army withdrew. On 3 August 1347 Calais surrendered. The entire French population was expelled. A vast amount of booty was found within the town. 
Edward repopulated the town with English settlers. ## Subsequent activities As soon as Calais capitulated, Edward paid off a large part of his army and released his Flemish allies. Philip in turn stood down the French army. Edward promptly launched strong raids up to 30 miles (48 km) into French territory. Philip attempted to recall his army, setting a date of 1 September, but experienced serious difficulties. His treasury was exhausted and taxes for the war had to be collected in many places at sword point. Despite these expedients, ready cash was not forthcoming. The French army had little stomach for further conflict, and Philip was reduced to threatening to confiscate the estates of nobles who refused to muster. He set back the date for his army to assemble by a month. Edward also had difficulties in raising money, partly due to the unexpected timing of the need; he employed draconian measures, which were extremely unpopular. The English also suffered a pair of military setbacks: a large raid was routed by the French garrison of Saint-Omer, and a supply convoy en route to Calais was captured by French raiders from Boulogne. Given the military misfortunes and financial exhaustion of both sides, the Pope's emissaries now found willing listeners. Negotiations began on 4 September and by the 28th a truce had been agreed. The truce strongly favoured the English, and confirmed them in possession of all of their territorial conquests. The Truce of Calais was agreed to run for nine months to 7 July 1348, but was extended repeatedly over the years until it was formally set aside in 1355. The truce did not stop the ongoing naval clashes between the two countries, nor the fighting in Gascony and Brittany. After full-scale war resumed in 1355 it continued until 1360, when it ended in an English victory with the Treaty of Brétigny. The period of the chevauchée, from the landing in Normandy to the fall of Calais, became known as Edward III's annus mirabilis (year of marvels). ## Aftermath Calais was vital to England's effort against the French for the rest of the war, it being all but impossible to land a large force other than at a friendly port. It also allowed the accumulation of supplies and materiel prior to a campaign. A ring of substantial fortifications defending the approaches to Calais was rapidly constructed, marking the boundary of an area known as the Pale of Calais. The town had an extremely large standing garrison of 1,400 men, virtually a small army, under the overall command of the Captain of Calais, who had numerous deputies and specialist under-officers. Edward granted Calais numerous trade concessions and privileges, and it became the main port of entry for English exports to the continent, a position it retained for many years. Calais was finally lost by the English monarch Mary I, following the 1558 siege of Calais. The fall of Calais marked the loss of England's last possession in mainland France. ## Memorials In 1884, Calais commissioned a statue by Auguste Rodin of the town leaders at the moment of their surrender to Edward. The resulting work, The Burghers of Calais, was completed in 1889. An account by the contemporary chronicler Froissart claims that the burghers expected to be executed, but their lives were spared by the intervention of England's queen, Philippa of Hainault, Froissart's patron, who persuaded her husband to exercise mercy. ## Notes, citations and sources
10,821
Francium
1,172,888,001
null
[ "Alkali metals", "Chemical elements", "Chemical elements predicted by Dmitri Mendeleev", "Chemical elements with body-centered cubic structure", "Eponyms", "Francium", "Science and technology in France" ]
Francium is a chemical element with the symbol Fr and atomic number 87. It is extremely radioactive; its most stable isotope, francium-223 (originally called actinium K after the natural decay chain in which it appears), has a half-life of only 22 minutes. It is the second-most electropositive element, behind only caesium, and is the second rarest naturally occurring element (after astatine). Francium's isotopes decay quickly into astatine, radium, and radon. The electronic structure of a francium atom is [Rn] 7s<sup>1</sup>; thus, the element is classed as an alkali metal. Bulk francium has never been seen. Because of the general appearance of the other elements in its periodic table column, it is presumed that francium would appear as a highly reactive metal if enough could be collected together to be viewed as a bulk solid or liquid. Obtaining such a sample is highly improbable since the extreme heat of decay resulting from its short half-life would immediately vaporize any viewable quantity of the element. Francium was discovered by Marguerite Perey in France (from which the element takes its name) in 1939. Before its discovery, francium was referred to as eka-caesium or ekacaesium because of its conjectured existence below caesium in the periodic table. It was the last element first discovered in nature, rather than by synthesis. Outside the laboratory, francium is extremely rare, with trace amounts found in uranium ores, where the isotope francium-223 (in the family of uranium-235) continually forms and decays. As little as 200–500 g exists at any given time throughout the Earth's crust; aside from francium-223 and francium-221, its other isotopes are entirely synthetic. The largest amount produced in the laboratory was a cluster of more than 300,000 atoms. ## Characteristics Francium is one of the most unstable of the naturally occurring elements: its longest-lived isotope, francium-223, has a half-life of only 22 minutes. The only comparable element is astatine, whose most stable natural isotope, astatine-219 (the alpha daughter of francium-223), has a half-life of 56 seconds, although synthetic astatine-210 is much longer-lived with a half-life of 8.1 hours. All isotopes of francium decay into astatine, radium, or radon. Francium-223 also has a shorter half-life than the longest-lived isotope of each synthetic element up to and including element 105, dubnium. Francium is an alkali metal whose chemical properties mostly resemble those of caesium. A heavy element with a single valence electron, it has the highest equivalent weight of any element. Liquid francium—if created—should have a surface tension of 0.05092 N/m at its melting point. Francium's melting point was estimated to be around 8.0 °C (46.4 °F); a value of 27 °C (81 °F) is also often encountered. The melting point is uncertain because of the element's extreme rarity and radioactivity; a different extrapolation based on Dmitri Mendeleev's method gave 20 ± 1.5 °C (68.0 ± 2.7 °F). A calculation based on the melting temperatures of binary ionic crystals gives 24.861 ± 0.517 °C (76.750 ± 0.931 °F). The estimated boiling point of 620 °C (1,148 °F) is also uncertain; the estimates 598 °C (1,108 °F) and 677 °C (1,251 °F), as well as the extrapolation from Mendeleev's method of 640 °C (1,184 °F), have also been suggested. The density of francium is expected to be around 2.48 g/cm<sup>3</sup> (Mendeleev's method extrapolates 2.4 g/cm<sup>3</sup>). 
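All of the melting-point figures quoted above are extrapolations from trends in the alkali-metal group rather than measurements. As a rough illustration of that kind of reasoning (not Mendeleev's actual procedure), the sketch below extends the known Li–Cs melting points one step further by assuming that the drop from each element to the next keeps shrinking by about the same factor; the melting-point values are standard reference data, while the damping assumption is purely illustrative.

```python
# Illustrative extrapolation of the alkali-metal melting-point trend to francium.
# Assumption (ours, not Mendeleev's method): each step down the group lowers the
# melting point by a drop that shrinks by roughly the same factor as the previous one.

melting_points_K = {   # accepted melting points of the lighter alkali metals, in kelvin
    "Li": 453.7,
    "Na": 371.0,
    "K": 336.7,
    "Rb": 312.5,
    "Cs": 301.7,
}

temps = list(melting_points_K.values())
drops = [a - b for a, b in zip(temps, temps[1:])]   # successive decreases, Li->Na ... Rb->Cs

damping = drops[-1] / drops[-2]      # how much the last drop shrank relative to the one before
next_drop = drops[-1] * damping      # assume the Cs->Fr drop shrinks by the same factor
fr_estimate_K = temps[-1] - next_drop

print(f"successive drops (K): {[round(d, 1) for d in drops]}")
print(f"estimated Fr melting point: {fr_estimate_K:.0f} K ({fr_estimate_K - 273.15:.0f} °C)")
# Prints roughly 297 K (about 24 °C), which falls within the spread of the
# published estimates quoted above (8 °C to 27 °C).
```

The point of the sketch is only that very different but equally plausible extrapolation schemes land anywhere between roughly 8 °C and 27 °C, which is why the melting point remains uncertain in the absence of a weighable sample.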
Linus Pauling estimated the electronegativity of francium at 0.7 on the Pauling scale, the same as caesium; the value for caesium has since been refined to 0.79, but there are no experimental data to allow a refinement of the value for francium. Francium has a slightly higher ionization energy than caesium, 392.811(4) kJ/mol as opposed to 375.7041(2) kJ/mol for caesium, as would be expected from relativistic effects, and this would imply that caesium is the less electronegative of the two. Francium should also have a higher electron affinity than caesium and the Fr<sup>−</sup> ion should be more polarizable than the Cs<sup>−</sup> ion. ## Compounds As a result of francium being very unstable, its salts are only known to a small extent. Francium coprecipitates with several caesium salts, such as caesium perchlorate, which results in small amounts of francium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. It will additionally coprecipitate with many other caesium salts, including the iodate, the picrate, the tartrate (also rubidium tartrate), the chloroplatinate, and the silicotungstate. It also coprecipitates with silicotungstic acid, and with perchloric acid, without another alkali metal as a carrier, which leads to other methods of separation. ### Francium perchlorate Francium perchlorate is produced by the reaction of francium chloride and sodium perchlorate. The francium perchlorate coprecipitates with caesium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and C. M. Nelson. However, this method is unreliable in separating thallium, which also coprecipitates with caesium. Francium perchlorate's entropy is expected to be 42.7 e.u. (178.7 J mol<sup>−1</sup> K<sup>−1</sup>). ### Francium halides Francium halides are all soluble in water and are expected to be white solids. They are expected to be produced by the reaction of francium with the corresponding halogen. For example, francium chloride would be produced by the reaction of francium and chlorine. Francium chloride has been studied as a pathway to separate francium from other elements, by using the high vapour pressure of the compound, although francium fluoride would have a higher vapour pressure. ### Other compounds Francium nitrate, sulfate, hydroxide, carbonate, acetate, and oxalate are all soluble in water, while the iodate, picrate, tartrate, chloroplatinate, and silicotungstate are insoluble. The insolubility of these compounds is used to extract francium from other radioactive products, such as zirconium, niobium, molybdenum, tin, and antimony, using the method mentioned in the section above. The CsFr molecule is predicted to have francium at the negative end of the dipole, unlike all known heterodiatomic alkali metal molecules. Francium superoxide (FrO<sub>2</sub>) is expected to have a more covalent character than its lighter congeners; this is attributed to the 6p electrons in francium being more involved in the francium–oxygen bonding. The relativistic destabilisation of the 6p<sub>3/2</sub> spinor may make francium compounds in oxidation states higher than +1 possible, such as [Fr<sup>V</sup>F<sub>6</sub>]<sup>−</sup>, but this has not been experimentally confirmed. The only double salt known of francium has the formula Fr<sub>9</sub>Bi<sub>2</sub>I<sub>9</sub>. 
## Isotopes There are 37 known isotopes of francium ranging in atomic mass from 197 to 233. Francium has seven metastable nuclear isomers. Francium-223 and francium-221 are the only isotopes that occur in nature, with the former being far more common. Francium-223 is the most stable isotope, with a half-life of 21.8 minutes, and it is highly unlikely that an isotope of francium with a longer half-life will ever be discovered or synthesized. Francium-223 is a fifth product of the uranium-235 decay series as a daughter isotope of actinium-227; thorium-227 is the more common daughter. Francium-223 then decays into radium-223 by beta decay (1.149 MeV decay energy), with a minor (0.006%) alpha decay path to astatine-219 (5.4 MeV decay energy). Francium-221 has a half-life of 4.8 minutes. It is the ninth product of the neptunium decay series as a daughter isotope of actinium-225. Francium-221 then decays into astatine-217 by alpha decay (6.457 MeV decay energy). Although all primordial <sup>237</sup>Np is extinct, the neptunium decay series continues to exist naturally in tiny traces due to (n,2n) knockout reactions in natural <sup>238</sup>U. The least stable ground state isotope is francium-215, with a half-life of 0.12 μs: it undergoes a 9.54 MeV alpha decay to astatine-211. Its metastable isomer, francium-215m, is less stable still, with a half-life of only 3.5 ns. ## Applications Due to its instability and rarity, there are no commercial applications for francium. It has been used for research purposes in the fields of chemistry and of atomic structure. Its use as a potential diagnostic aid for various cancers has also been explored, but this application has been deemed impractical. Francium's ability to be synthesized, trapped, and cooled, along with its relatively simple atomic structure, has made it the subject of specialized spectroscopy experiments. These experiments have led to more specific information regarding energy levels and the coupling constants between subatomic particles. Studies on the light emitted by laser-trapped francium-210 ions have provided accurate data on transitions between atomic energy levels which are fairly similar to those predicted by quantum theory. ## History As early as 1870, chemists thought that there should be an alkali metal beyond caesium, with an atomic number of 87. It was then referred to by the provisional name eka-caesium. Research teams attempted to locate and isolate this missing element, and at least four false claims were made that the element had been found before an authentic discovery was made. ### Erroneous and incomplete discoveries In 1914, Stefan Meyer, Viktor F. Hess, and Friedrich Paneth (working in Vienna) made measurements of alpha radiation from various substances, including <sup>227</sup>Ac. They observed the possibility of a minor alpha branch of this nuclide, though follow-up work could not be done due to the outbreak of World War I. Their observations were not precise and sure enough for them to announce the discovery of element 87, though it is likely that they did indeed observe the decay of <sup>227</sup>Ac to <sup>223</sup>Fr. Soviet chemist Dmitry Dobroserdov was the first scientist to claim to have found eka-caesium, or francium. In 1925, he observed weak radioactivity in a sample of potassium, another alkali metal, and incorrectly concluded that eka-caesium was contaminating the sample (the radioactivity from the sample was from the naturally occurring potassium radioisotope, potassium-40). 
He then published a thesis on his predictions of the properties of eka-caesium, in which he named the element russium after his home country. Shortly thereafter, Dobroserdov began to focus on his teaching career at the Polytechnic Institute of Odesa, and he did not pursue the element further. The following year, English chemists Gerald J. F. Druce and Frederick H. Loring analyzed X-ray photographs of manganese(II) sulfate. They observed spectral lines which they presumed to be of eka-caesium. They announced their discovery of element 87 and proposed the name alkalinium, as it would be the heaviest alkali metal. In 1930, Fred Allison of the Alabama Polytechnic Institute claimed to have discovered element 87 (in addition to 85) when analyzing pollucite and lepidolite using his magneto-optical machine. Allison requested that it be named virginium after his home state of Virginia, along with the symbols Vi and Vm. In 1934, H.G. MacPherson of UC Berkeley disproved the effectiveness of Allison's device and the validity of his discovery. In 1936, Romanian physicist Horia Hulubei and his French colleague Yvette Cauchois also analyzed pollucite, this time using their high-resolution X-ray apparatus. They observed several weak emission lines, which they presumed to be those of element 87. Hulubei and Cauchois reported their discovery and proposed the name moldavium, along with the symbol Ml, after Moldavia, the Romanian province where Hulubei was born. In 1937, Hulubei's work was criticized by American physicist F. H. Hirsh Jr., who rejected Hulubei's research methods. Hirsh was certain that eka-caesium would not be found in nature, and that Hulubei had instead observed mercury or bismuth X-ray lines. Hulubei insisted that his X-ray apparatus and methods were too accurate to make such a mistake. Because of this, Jean Baptiste Perrin, Nobel Prize winner and Hulubei's mentor, endorsed moldavium as the true eka-caesium over Marguerite Perey's recently discovered francium. Perey took pains to be accurate and detailed in her criticism of Hulubei's work, and finally she was credited as the sole discoverer of element 87. All other previous purported discoveries of element 87 were ruled out due to francium's very limited half-life. ### Perey's analysis Eka-caesium was discovered on January 7, 1939, by Marguerite Perey of the Curie Institute in Paris, when she purified a sample of actinium-227 which had been reported to have a decay energy of 220 keV. Perey noticed decay particles with an energy level below 80 keV. Perey thought this decay activity might have been caused by a previously unidentified decay product, one which was separated during purification, but emerged again out of the pure actinium-227. Various tests eliminated the possibility of the unknown element being thorium, radium, lead, bismuth, or thallium. The new product exhibited chemical properties of an alkali metal (such as coprecipitating with caesium salts), which led Perey to believe that it was element 87, produced by the alpha decay of actinium-227. Perey then attempted to determine the proportion of beta decay to alpha decay in actinium-227. Her first test put the alpha branching at 0.6%, a figure which she later revised to 1%. Perey named the new isotope actinium-K (it is now referred to as francium-223) and in 1946, she proposed the name catium (Cm) for her newly discovered element, as she believed it to be the most electropositive cation of the elements. 
Irène Joliot-Curie, one of Perey's supervisors, opposed the name due to its connotation of cat rather than cation; furthermore, the symbol coincided with that which had since been assigned to curium. Perey then suggested francium, after France. This name was officially adopted by the International Union of Pure and Applied Chemistry (IUPAC) in 1949, becoming the second element after gallium to be named after France. It was assigned the symbol Fa, but this abbreviation was revised to the current Fr shortly thereafter. Francium was the last element discovered in nature, rather than synthesized, following hafnium and rhenium. Further research into francium's structure was carried out by, among others, Sylvain Lieberman and his team at CERN in the 1970s and 1980s. ## Occurrence <sup>223</sup>Fr is the result of the alpha decay of <sup>227</sup>Ac and can be found in trace amounts in uranium minerals. In a given sample of uranium, there is estimated to be only one francium atom for every 1 × 10<sup>18</sup> uranium atoms. Only about one ounce of francium is present naturally in the earth's crust. ## Production Francium can be synthesized by a fusion reaction when a gold-197 target is bombarded with a beam of oxygen-18 atoms from a linear accelerator in a process originally developed at the physics department of the State University of New York at Stony Brook in 1995. Depending on the energy of the oxygen beam, the reaction can yield francium isotopes with masses of 209, 210, and 211. <sup>197</sup>Au + <sup>18</sup>O → <sup>209</sup>Fr + 6 n <sup>197</sup>Au + <sup>18</sup>O → <sup>210</sup>Fr + 5 n <sup>197</sup>Au + <sup>18</sup>O → <sup>211</sup>Fr + 4 n The francium atoms leave the gold target as ions, which are neutralized by collision with yttrium and then isolated in a magneto-optical trap (MOT) in a gaseous unconsolidated state. Although the atoms only remain in the trap for about 30 seconds before escaping or undergoing nuclear decay, the process supplies a continual stream of fresh atoms. The result is a steady state containing a fairly constant number of atoms for a much longer time. The original apparatus could trap up to a few thousand atoms, while a later improved design could trap over 300,000 at a time. Sensitive measurements of the light emitted and absorbed by the trapped atoms provided the first experimental results on various transitions between atomic energy levels in francium. Initial measurements show very good agreement between experimental values and calculations based on quantum theory. The research project using this production method relocated to TRIUMF in 2012, where over 10<sup>6</sup> francium atoms have been held at a time, including large amounts of <sup>209</sup>Fr in addition to <sup>207</sup>Fr and <sup>221</sup>Fr. Other synthesis methods include bombarding radium with neutrons, and bombarding thorium with protons, deuterons, or helium ions. <sup>223</sup>Fr can also be isolated from samples of its parent <sup>227</sup>Ac, the francium being milked via elution with NH<sub>4</sub>Cl–CrO<sub>3</sub> from an actinium-containing cation exchanger and purified by passing the solution through a silicon dioxide compound loaded with barium sulfate. In 1996, the Stony Brook group trapped 3000 atoms in their MOT, which was enough for a video camera to capture the light given off by the atoms as they fluoresce. Francium has not been synthesized in amounts large enough to weigh.
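The three production reactions listed above can be checked by simple bookkeeping: the mass numbers and proton numbers on each side must balance, which is also why the number of evaporated neutrons falls as the francium product gets heavier. A minimal sketch of that check, using only the mass numbers from the reactions above and the standard proton numbers Z(Au) = 79, Z(O) = 8, Z(Fr) = 87:

```python
# Check mass-number and charge balance for the 197Au + 18O -> Fr + n reactions above.
Z_AU, Z_O, Z_FR = 79, 8, 87          # proton numbers of gold, oxygen and francium
A_TARGET, A_BEAM = 197, 18           # mass numbers of the target and beam nuclei

assert Z_AU + Z_O == Z_FR, "charge must balance (the evaporated neutrons carry no charge)"

for a_fr in (209, 210, 211):                  # francium isotopes named in the text
    neutrons = (A_TARGET + A_BEAM) - a_fr     # evaporated neutrons make up the difference
    print(f"197Au + 18O -> {a_fr}Fr + {neutrons} n")
```

Running it reproduces the three reactions exactly as written, with 6, 5 and 4 emitted neutrons respectively.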
54,211
Ganymede (moon)
1,173,690,922
Largest moon of Jupiter and in the Solar System
[ "Astronomical objects discovered in 1610", "Discoveries by Galileo Galilei", "Ganymede (moon)", "Moons of Jupiter", "Moons with a prograde orbit" ]
Ganymede, or Jupiter III, is the largest and most massive natural satellite of Jupiter as well as in the Solar System, being a planetary-mass moon. It is the largest Solar System object without a substantial atmosphere, despite being the only moon of the Solar System with a magnetic field. Like Titan, it is larger than the planet Mercury, but has somewhat less surface gravity than Mercury, Io or the Moon. Ganymede is composed of approximately equal amounts of silicate rock and water. It is a fully differentiated body with an iron-rich, liquid core, and an internal ocean that may contain more water than all of Earth's oceans combined. Its surface is composed of two main types of terrain. Dark regions, saturated with impact craters and dated to four billion years ago, cover about a third of it. Lighter regions, crosscut by extensive grooves and ridges and only slightly less ancient, cover the remainder. The cause of the light terrain's disrupted geology is not fully known, but was likely the result of tectonic activity due to tidal heating. Ganymede orbits Jupiter in roughly seven days and is in a 1:2:4 orbital resonance with the moons Europa and Io, respectively. Possessing a metallic core, it has the lowest moment of inertia factor of any solid body in the Solar System. Ganymede's magnetic field is probably created by convection within its liquid iron core, also created by Jupiter's tidal forces. The meager magnetic field is buried within Jupiter's far larger magnetic field and would show only as a local perturbation of the field lines. Ganymede has a thin oxygen atmosphere that includes O, O<sub>2</sub>, and possibly O<sub>3</sub> (ozone). Atomic hydrogen is a minor atmospheric constituent. Whether Ganymede has an ionosphere associated with its atmosphere is unresolved. Ganymede's discovery is credited to Simon Marius and Galileo Galilei, who both observed it in 1610, as the third of the Galilean moons, the first group of objects discovered orbiting another planet. Its name was soon suggested by astronomer Simon Marius, after the mythological Ganymede, a Trojan prince desired by Zeus (the Greek counterpart of Jupiter), who carried him off to be the cupbearer of the gods. Beginning with Pioneer 10, several spacecraft have explored Ganymede. The Voyager probes, Voyager 1 and Voyager 2, refined measurements of its size, while Galileo discovered its underground ocean and magnetic field. The next planned mission to the Jovian system is the European Space Agency's Jupiter Icy Moon Explorer (JUICE), which was launched in 2023. After flybys of all three icy Galilean moons, it is planned to enter orbit around Ganymede. ## History Chinese astronomical records report that in 365 BC, Gan De detected what might have been a moon of Jupiter, probably Ganymede, with the naked eye. However, Gan De reported the color of the companion as reddish, which is puzzling since the moons are too faint for their color to be perceived with the naked eye. Shi Shen and Gan De together made fairly accurate observations of the five major planets. On January 7, 1610, Galileo Galilei used a telescope to observe what he thought were three stars near Jupiter, including what turned out to be Ganymede, Callisto, and one body that proved to be the combined light from Io and Europa; the next night he noticed that they had moved. On January 13, he saw all four at once for the first time, but had seen each of the moons before this date at least once. 
By January 15, Galileo came to the conclusion that the stars were actually bodies orbiting Jupiter. ## Name Galileo claimed the right to name the moons he had discovered. He considered "Cosmian Stars" and settled on "Medicean Stars", in honor of Cosimo II de' Medici. The French astronomer Nicolas-Claude Fabri de Peiresc suggested individual names from the Medici family for the moons, but his proposal was not taken up. Simon Marius, who had originally claimed to have found the Galilean satellites, tried to name the moons the "Saturn of Jupiter", the "Jupiter of Jupiter" (this was Ganymede), the "Venus of Jupiter", and the "Mercury of Jupiter", another nomenclature that never caught on. From a suggestion by Johannes Kepler, Marius suggested a different naming system based on Greek mythology: > Jupiter is much blamed by the poets on account of his irregular loves. Three maidens are especially mentioned as having been clandestinely courted by Jupiter with success. Io, daughter of the River Inachus, Callisto of Lycaon, Europa of Agenor. Then there was Ganymede, the handsome son of King Tros, whom Jupiter, having taken the form of an eagle, transported to heaven on his back, as poets fabulously tell... I think, therefore, that I shall not have done amiss if the First is called by me Io, the Second Europa, the Third, on account of its majesty of light, Ganymede, the Fourth Callisto... This name and those of the other Galilean satellites fell into disfavor for a considerable time, and were not in common use until the mid-20th century. In much of the earlier astronomical literature, Ganymede is referred to instead by its Roman numeral designation, Jupiter III (a system introduced by Galileo), in other words "the third satellite of Jupiter". Following the discovery of moons of Saturn, a naming system based on that of Kepler and Marius was used for Jupiter's moons. Ganymede is the only Galilean moon of Jupiter named after a male figure—like Io, Europa, and Callisto, he was a lover of Zeus. The Galilean satellites retain the Italian spellings of their names. In the cases of Io, Europa and Callisto, these are identical to the Latin, but the Latin form of Ganymede is Ganymedes. In English, the final 'e' is silent, perhaps under the influence of French, unlike later names taken from Latin and Greek. ## Orbit and rotation Ganymede orbits Jupiter at a distance of 1,070,400 kilometres (665,100 mi), third among the Galilean satellites, and completes a revolution every seven days and three hours. Like most known moons, Ganymede is tidally locked, with one side always facing toward the planet, hence its day is also seven days and three hours. Its orbit is very slightly eccentric and inclined to the Jovian equator, with the eccentricity and inclination changing quasi-periodically due to solar and planetary gravitational perturbations on a timescale of centuries. The ranges of change are 0.0009–0.0022 and 0.05–0.32°, respectively. These orbital variations cause the axial tilt (the angle between rotational and orbital axes) to vary between 0 and 0.33°. Ganymede participates in orbital resonances with Europa and Io: for every orbit of Ganymede, Europa orbits twice and Io orbits four times. Conjunctions (alignment on the same side of Jupiter) between Io and Europa occur when Io is at periapsis and Europa at apoapsis. Conjunctions between Europa and Ganymede occur when Europa is at periapsis. 
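The roughly seven-day period quoted above follows from the orbital radius via Kepler's third law. A minimal check, assuming standard values for the gravitational constant and Jupiter's mass (neither appears in the text):

```python
# Kepler's third law: T = 2*pi*sqrt(a^3 / (G*M)), with a = 1,070,400 km from the text.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2 (assumed standard value)
M_JUPITER = 1.898e27   # kg (assumed standard value)
a = 1_070_400e3        # semi-major axis in metres

period_days = 2 * math.pi * math.sqrt(a**3 / (G * M_JUPITER)) / 86_400
print(f"Kepler period: {period_days:.2f} days")   # ~7.15 d, i.e. about 7 d 3 h

# The 1:2:4 resonance implies Europa and Io periods near half and a quarter of this:
print(f"Implied Europa period: {period_days / 2:.2f} d, Io period: {period_days / 4:.2f} d")
```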
The longitudes of the Io–Europa and Europa–Ganymede conjunctions change with the same rate, making triple conjunctions impossible. Such a complicated resonance is called the Laplace resonance. The current Laplace resonance is unable to pump the orbital eccentricity of Ganymede to a higher value. The value of about 0.0013 is probably a remnant from a previous epoch, when such pumping was possible. The Ganymedian orbital eccentricity is somewhat puzzling; if it is not pumped now it should have decayed long ago due to the tidal dissipation in the interior of Ganymede. This means that the last episode of the eccentricity excitation happened only several hundred million years ago. Because Ganymede's orbital eccentricity is relatively low—on average 0.0015—tidal heating is negligible now. However, in the past Ganymede may have passed through one or more Laplace-like resonances that were able to pump the orbital eccentricity to a value as high as 0.01–0.02. This probably caused a significant tidal heating of the interior of Ganymede; the formation of the grooved terrain may be a result of one or more heating episodes. There are two hypotheses for the origin of the Laplace resonance among Io, Europa, and Ganymede: that it is primordial and has existed from the beginning of the Solar System; or that it developed after the formation of the Solar System. A possible sequence of events for the latter scenario is as follows: Io raised tides on Jupiter, causing Io's orbit to expand (due to conservation of momentum) until it encountered the 2:1 resonance with Europa; after that the expansion continued, but some of the angular moment was transferred to Europa as the resonance caused its orbit to expand as well; the process continued until Europa encountered the 2:1 resonance with Ganymede. Eventually the drift rates of conjunctions between all three moons were synchronized and locked in the Laplace resonance. ## Physical characteristics ### Size With a diameter of about 5,270 kilometres (3,270 mi) and a mass of 1.48×10<sup>20</sup> tonnes (1.48×10<sup>23</sup> kg; 3.26×10<sup>23</sup> lb), Ganymede is the largest and most massive moon in the Solar System. It is slightly more massive than the second most massive moon, Saturn's satellite Titan, and is more than twice as massive as the Earth's Moon. It is larger than the planet Mercury, which has a diameter of 4,880 kilometres (3,030 mi), but is only 45 percent of Mercury's mass. Ganymede is the ninth-largest object in the solar system, but the tenth-most massive. ### Composition The average density of Ganymede, 1.936 g/cm<sup>3</sup> (a bit greater than Callisto's), suggests a composition of about equal parts rocky material and mostly water ices. Some of the water is liquid, forming an underground ocean. The mass fraction of ices is between 46 and 50 percent, which is slightly lower than that in Callisto. Some additional volatile ices such as ammonia may also be present. The exact composition of Ganymede's rock is not known, but is probably close to the composition of L/LL type ordinary chondrites, which are characterized by less total iron, less metallic iron and more iron oxide than H chondrites. The weight ratio of iron to silicon ranges between 1.05 and 1.27 in Ganymede, whereas the solar ratio is around 1.8. ### Surface features Ganymede's surface has an albedo of about 43 percent. Water ice seems to be ubiquitous on its surface, with a mass fraction of 50–90 percent, significantly more than in Ganymede as a whole. 
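The diameter, mass and density figures in the "Size" and "Composition" subsections are mutually consistent, and they also reproduce the modest surface gravity mentioned in the lead. A quick check, with the gravitational constant as the only outside input:

```python
# Bulk density and surface gravity from the quoted diameter and mass.
import math

DIAMETER_KM = 5270.0
MASS_KG = 1.48e23
G = 6.674e-11                       # m^3 kg^-1 s^-2 (assumed standard value)

r = DIAMETER_KM / 2 * 1000.0        # radius in metres
volume = 4.0 / 3.0 * math.pi * r**3
density = MASS_KG / volume          # kg/m^3
gravity = G * MASS_KG / r**2        # m/s^2

print(f"Bulk density:    {density / 1000:.2f} g/cm^3")   # ~1.93, close to the quoted 1.936
print(f"Surface gravity: {gravity:.2f} m/s^2")            # ~1.43, below the Moon's ~1.62
```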
Near-infrared spectroscopy has revealed the presence of strong water ice absorption bands at wavelengths of 1.04, 1.25, 1.5, 2.0 and 3.0 μm. The grooved terrain is brighter and has a more icy composition than the dark terrain. The analysis of high-resolution, near-infrared and UV spectra obtained by the Galileo spacecraft and from Earth observations has revealed various non-water materials: carbon dioxide, sulfur dioxide and, possibly, cyanogen, hydrogen sulfate and various organic compounds. Galileo results have also shown magnesium sulfate (MgSO<sub>4</sub>) and, possibly, sodium sulfate (Na<sub>2</sub>SO<sub>4</sub>) on Ganymede's surface. These salts may originate from the subsurface ocean. The Ganymedian surface albedo is very asymmetric; the leading hemisphere is brighter than the trailing one. This is similar to Europa, but the reverse for Callisto. The trailing hemisphere of Ganymede appears to be enriched in sulfur dioxide. The distribution of carbon dioxide does not demonstrate any hemispheric asymmetry, but little or no carbon dioxide is observed near the poles. Impact craters on Ganymede (except one) do not show any enrichment in carbon dioxide, which also distinguishes it from Callisto. Ganymede's carbon dioxide gas was probably depleted in the past. Ganymede's surface is a mix of two types of terrain: very old, highly cratered, dark regions and somewhat younger (but still ancient), lighter regions marked with an extensive array of grooves and ridges. The dark terrain, which comprises about one-third of the surface, contains clays and organic materials that could indicate the composition of the impactors from which Jovian satellites accreted. The heating mechanism required for the formation of the grooved terrain on Ganymede is an unsolved problem in the planetary sciences. The modern view is that the grooved terrain is mainly tectonic in nature. Cryovolcanism is thought to have played only a minor role, if any. The forces that caused the strong stresses in the Ganymedian ice lithosphere necessary to initiate the tectonic activity may be connected to the tidal heating events in the past, possibly caused when the satellite passed through unstable orbital resonances. The tidal flexing of the ice may have heated the interior and strained the lithosphere, leading to the development of cracks and horst and graben faulting, which erased the old, dark terrain on 70 percent of the surface. The formation of the grooved terrain may also be connected with the early core formation and subsequent tidal heating of Ganymede's interior, which may have caused a slight expansion of Ganymede by one to six percent due to phase transitions in ice and thermal expansion. During subsequent evolution deep, hot water plumes may have risen from the core to the surface, leading to the tectonic deformation of the lithosphere. Radiogenic heating within the satellite is the most relevant current heat source, contributing, for instance, to ocean depth. Research models have found that if the orbital eccentricity were an order of magnitude greater than currently (as it may have been in the past), tidal heating would be a more substantial heat source than radiogenic heating. Cratering is seen on both types of terrain, but is especially extensive on the dark terrain: it appears to be saturated with impact craters and has evolved largely through impact events. The brighter, grooved terrain contains many fewer impact features, which have been only of a minor importance to its tectonic evolution. 
The density of cratering indicates an age of 4 billion years for the dark terrain, similar to the highlands of the Moon, and a somewhat younger age for the grooved terrain (but how much younger is uncertain). Ganymede may have experienced a period of heavy cratering 3.5 to 4 billion years ago similar to that of the Moon. If true, the vast majority of impacts happened in that epoch, whereas the cratering rate has been much smaller since. Craters both overlay and are crosscut by the groove systems, indicating that some of the grooves are quite ancient. Relatively young craters with rays of ejecta are also visible. Ganymedian craters are flatter than those on the Moon and Mercury. This is probably due to the relatively weak nature of Ganymede's icy crust, which can (or could) flow and thereby soften the relief. Ancient craters whose relief has disappeared leave only a "ghost" of a crater known as a palimpsest. One significant feature on Ganymede is a dark plain named Galileo Regio, which contains a series of concentric grooves, or furrows, likely created during a period of geologic activity. Ganymede also has polar caps, likely composed of water frost. The frost extends to 40° latitude. These polar caps were first seen by the Voyager spacecraft. Theories on the formation of the caps include the migration of water to higher latitudes and bombardment of the ice by plasma. Data from Galileo suggests the latter is correct. The presence of a magnetic field on Ganymede results in more intense charged particle bombardment of its surface in the unprotected polar regions; sputtering then leads to redistribution of water molecules, with frost migrating to locally colder areas within the polar terrain. A crater named Anat provides the reference point for measuring longitude on Ganymede. By definition, Anat is at 128° longitude. The 0° longitude directly faces Jupiter, and unless stated otherwise longitude increases toward the west. ### Internal structure Ganymede appears to be fully differentiated, with an internal structure consisting of an iron-sulfide–iron core, a silicate mantle and outer layers of water ice and liquid water. The precise thicknesses of the different layers in the interior of Ganymede depend on the assumed composition of silicates (fraction of olivine and pyroxene) and amount of sulfur in the core. Ganymede has the lowest moment of inertia factor, 0.31, among the solid Solar System bodies. This is a consequence of its substantial water content and fully differentiated interior. #### Subsurface oceans In the 1970s, NASA scientists first suspected that Ganymede has a thick ocean between two layers of ice, one on the surface and one beneath a liquid ocean and atop the rocky mantle. In the 1990s, NASA's Galileo mission flew by Ganymede, and found indications of such a subsurface ocean. An analysis published in 2014, taking into account the realistic thermodynamics for water and effects of salt, suggests that Ganymede might have a stack of several ocean layers separated by different phases of ice, with the lowest liquid layer adjacent to the rocky mantle. Water–rock contact may be an important factor in the origin of life. The analysis also notes that the extreme depths involved (\~800 km to the rocky "seafloor") mean that temperatures at the bottom of a convective (adiabatic) ocean can be up to 40 K higher than those at the ice–water interface. 
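The "up to 40 K" figure is roughly what an adiabatic temperature gradient, dT/dz = αgT/c_p, predicts over such a depth. The sketch below is only an order-of-magnitude illustration: the water properties used are loose assumptions for cold, salty, high-pressure water, not values from the text.

```python
# Adiabatic temperature rise over an ~800 km deep convecting ocean.
GRAVITY = 1.43          # m/s^2, Ganymede's approximate surface gravity
TEMPERATURE = 270.0     # K, near the ice-water interface (assumed)
HEAT_CAPACITY = 4000.0  # J kg^-1 K^-1 (assumed)
DEPTH_M = 800e3         # ~800 km to the rocky "seafloor", from the text

for alpha in (2e-4, 3e-4, 5e-4):    # thermal expansion coefficient, 1/K (assumed range)
    gradient = alpha * GRAVITY * TEMPERATURE / HEAT_CAPACITY   # K per metre
    print(f"alpha = {alpha:.0e}: bottom ~{gradient * DEPTH_M:.0f} K warmer than the interface")
```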
In March 2015, scientists reported that measurements with the Hubble Space Telescope of how the aurorae moved confirmed that Ganymede has a subsurface ocean. A large salt-water ocean affects Ganymede's magnetic field, and consequently, its aurora. The evidence suggests that Ganymede's oceans might be the largest in the entire Solar System. There is some speculation on the potential habitability of Ganymede's ocean. #### Core The existence of a liquid, iron–nickel-rich core provides a natural explanation for the intrinsic magnetic field of Ganymede detected by Galileo spacecraft. The convection in the liquid iron, which has high electrical conductivity, is the most reasonable model of magnetic field generation. The density of the core is 5.5–6 g/cm<sup>3</sup> and the silicate mantle is 3.4–3.6 g/cm<sup>3</sup>. The radius of this core may be up to 500 km. The temperature in the core of Ganymede is probably 1500–1700 K and pressure up to 10 GPa (99,000 atm). ### Atmosphere and ionosphere In 1972, a team of Indian, British and American astronomers working in Java (Indonesia) and Kavalur (India) claimed that they had detected a thin atmosphere during an occultation, when it and Jupiter passed in front of a star. They estimated that the surface pressure was around 0.1 Pa (1 microbar). However, in 1979, Voyager 1 observed an occultation of the star κ Centauri during its flyby of Jupiter, with differing results. The occultation measurements were conducted in the far-ultraviolet spectrum at wavelengths shorter than 200 nm, which were much more sensitive to the presence of gases than the 1972 measurements made in the visible spectrum. No atmosphere was revealed by the Voyager data. The upper limit on the surface particle number density was found to be 1.5×10<sup>9</sup> cm<sup>−3</sup>, which corresponds to a surface pressure of less than 2.5 μPa (25 picobar). The latter value is almost five orders of magnitude less than the 1972 estimate. Despite the Voyager data, evidence for a tenuous oxygen atmosphere (exosphere) on Ganymede, very similar to the one found on Europa, was found by the Hubble Space Telescope (HST) in 1995. HST actually observed airglow of atomic oxygen in the far-ultraviolet at the wavelengths 130.4 nm and 135.6 nm. Such an airglow is excited when molecular oxygen is dissociated by electron impacts, which is evidence of a significant neutral atmosphere composed predominantly of O<sub>2</sub> molecules. The surface number density probably lies in the (1.2–7)×10<sup>8</sup> cm<sup>−3</sup> range, corresponding to the surface pressure of 0.2–1.2 μPa. These values are in agreement with the Voyager's upper limit set in 1981. The oxygen is not evidence of life; it is thought to be produced when water ice on Ganymede's surface is split into hydrogen and oxygen by radiation, with the hydrogen then being more rapidly lost due to its low atomic mass. The airglow observed over Ganymede is not spatially homogeneous like that over Europa. HST observed two bright spots located in the northern and southern hemispheres, near ± 50° latitude, which is exactly the boundary between the open and closed field lines of the Ganymedian magnetosphere (see below). The bright spots are probably polar auroras, caused by plasma precipitation along the open field lines. The existence of a neutral atmosphere implies that an ionosphere should exist, because oxygen molecules are ionized by the impacts of the energetic electrons coming from the magnetosphere and by solar EUV radiation. 
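The number densities and pressures quoted above are tied together by the ideal-gas relation p = n·k_B·T. A short consistency sketch, assuming a representative surface temperature of about 120 K (a value not stated in the text):

```python
# Ideal-gas check of the exospheric pressure figures, p = n * k_B * T.
K_B = 1.381e-23    # J/K
T_SURFACE = 120.0  # K (assumed)

def pressure_upa(n_per_cm3):
    return n_per_cm3 * 1e6 * K_B * T_SURFACE * 1e6   # cm^-3 -> m^-3, Pa -> micropascal

for label, n in [("Voyager upper limit", 1.5e9),
                 ("HST estimate, low", 1.2e8),
                 ("HST estimate, high", 7.0e8)]:
    print(f"{label:20s} n = {n:.1e} cm^-3 -> p ~ {pressure_upa(n):.2f} uPa")
# Output: ~2.5, ~0.2 and ~1.2 uPa, matching the values quoted in the text.
```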
However, the nature of the Ganymedian ionosphere is as controversial as the nature of the atmosphere. Some Galileo measurements found an elevated electron density near Ganymede, suggesting an ionosphere, whereas others failed to detect anything. The electron density near the surface is estimated by different sources to lie in the range 400–2,500 cm<sup>−3</sup>. As of 2008, the parameters of the ionosphere of Ganymede are not well constrained. Additional evidence of the oxygen atmosphere comes from spectral detection of gases trapped in the ice at the surface of Ganymede. The detection of ozone (O<sub>3</sub>) bands was announced in 1996. In 1997 spectroscopic analysis revealed the dimer (or diatomic) absorption features of molecular oxygen. Such an absorption can arise only if the oxygen is in a dense phase. The best candidate is molecular oxygen trapped in ice. The depth of the dimer absorption bands depends on latitude and longitude, rather than on surface albedo—they tend to decrease with increasing latitude on Ganymede, whereas O<sub>3</sub> shows an opposite trend. Laboratory work has found that O<sub>2</sub> would not cluster or bubble but dissolve in ice at Ganymede's relatively warm surface temperature of 100 K (−173.15 °C). A search for sodium in the atmosphere, just after such a finding on Europa, turned up nothing in 1997. Sodium is at least 13 times less abundant around Ganymede than around Europa, possibly because of a relative deficiency at the surface or because the magnetosphere fends off energetic particles. Another minor constituent of the Ganymedian atmosphere is atomic hydrogen. Hydrogen atoms were observed as far as 3,000 km from Ganymede's surface. Their density on the surface is about 1.5×10<sup>4</sup> cm<sup>−3</sup>. In 2021 water vapour was detected in the atmosphere of Ganymede. ### Magnetosphere The Galileo craft made six close flybys of Ganymede from 1995 to 2000 (G1, G2, G7, G8, G28 and G29) and discovered that Ganymede has a permanent (intrinsic) magnetic moment independent of the Jovian magnetic field. The value of the moment is about 1.3 × 10<sup>13</sup> T·m<sup>3</sup>, which is three times larger than the magnetic moment of Mercury. The magnetic dipole is tilted with respect to the rotational axis of Ganymede by 176°, which means that it is directed against the Jovian magnetic moment. Its north pole lies below the orbital plane. The dipole magnetic field created by this permanent moment has a strength of 719 ± 2 nT at Ganymede's equator, which should be compared with the Jovian magnetic field at the distance of Ganymede—about 120 nT. The equatorial field of Ganymede is directed against the Jovian field, meaning reconnection is possible. The intrinsic field strength at the poles is two times that at the equator—1440 nT. The permanent magnetic moment carves a part of space around Ganymede, creating a tiny magnetosphere embedded inside that of Jupiter; it is the only moon in the Solar System known to possess the feature. Its diameter is 4–5 Ganymede radii. The Ganymedian magnetosphere has a region of closed field lines located below 30° latitude, where charged particles (electrons and ions) are trapped, creating a kind of radiation belt. The main ion species in the magnetosphere is singly ionized oxygen—O<sup>+</sup>—which fits well with Ganymede's tenuous oxygen atmosphere. In the polar cap regions, at latitudes higher than 30°, magnetic field lines are open, connecting Ganymede with Jupiter's ionosphere. 
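The quoted moment and surface-field strengths are mutually consistent under the standard dipole relation B_eq = m/R³ for a moment expressed in T·m³, with the polar field twice the equatorial value. A minimal check, using the radius implied by the diameter in the "Size" subsection:

```python
# Dipole surface field from the quoted magnetic moment.
MOMENT = 1.3e13          # T*m^3
RADIUS_M = 5270e3 / 2    # mean radius in metres, from the quoted diameter

b_equator = MOMENT / RADIUS_M**3
b_pole = 2 * b_equator

print(f"Equatorial field: {b_equator * 1e9:.0f} nT (quoted: 719 nT)")
print(f"Polar field:      {b_pole * 1e9:.0f} nT (quoted: 1440 nT)")
```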
In these areas, the energetic (tens and hundreds of kiloelectronvolt) electrons and ions have been detected, which may cause the auroras observed around the Ganymedian poles. In addition, heavy ions precipitate continuously on Ganymede's polar surface, sputtering and darkening the ice. The interaction between the Ganymedian magnetosphere and Jovian plasma is in many respects similar to that of the solar wind and Earth's magnetosphere. The plasma co-rotating with Jupiter impinges on the trailing side of the Ganymedian magnetosphere much like the solar wind impinges on the Earth's magnetosphere. The main difference is the speed of plasma flow—supersonic in the case of Earth and subsonic in the case of Ganymede. Because of the subsonic flow, there is no bow shock off the trailing hemisphere of Ganymede. In addition to the intrinsic magnetic moment, Ganymede has an induced dipole magnetic field. Its existence is connected with the variation of the Jovian magnetic field near Ganymede. The induced moment is directed radially to or from Jupiter following the direction of the varying part of the planetary magnetic field. The induced magnetic moment is an order of magnitude weaker than the intrinsic one. The field strength of the induced field at the magnetic equator is about 60 nT—half of that of the ambient Jovian field. The induced magnetic field of Ganymede is similar to those of Callisto and Europa, indicating that Ganymede also has a subsurface water ocean with a high electrical conductivity. Given that Ganymede is completely differentiated and has a metallic core, its intrinsic magnetic field is probably generated in a similar fashion to the Earth's: as a result of conducting material moving in the interior. The magnetic field detected around Ganymede is likely to be caused by compositional convection in the core, if the magnetic field is the product of dynamo action, or magnetoconvection. Despite the presence of an iron core, Ganymede's magnetosphere remains enigmatic, particularly given that similar bodies lack the feature. Some research has suggested that, given its relatively small size, the core ought to have sufficiently cooled to the point where fluid motions, and hence a magnetic field, would not be sustained. One explanation is that the same orbital resonances proposed to have disrupted the surface also allowed the magnetic field to persist: with Ganymede's eccentricity pumped and tidal heating of the mantle increased during such resonances, reducing heat flow from the core, leaving it fluid and convective. Another explanation is a remnant magnetization of silicate rocks in the mantle, which is possible if the satellite had a more significant dynamo-generated field in the past. ### Radiation environment The radiation level at the surface of Ganymede is considerably lower than at Europa, at about 50–80 mSv (5–8 rem) per day, an amount that would cause severe illness or death in human beings exposed for two months. ## Origin and evolution Ganymede probably formed by accretion in Jupiter's subnebula, a disk of gas and dust surrounding Jupiter after its formation. The accretion of Ganymede probably took about 10,000 years, much shorter than the 100,000 years estimated for Callisto. The Jovian subnebula may have been relatively "gas-starved" when the Galilean satellites formed; this would have allowed for the lengthy accretion times required for Callisto. In contrast Ganymede formed closer to Jupiter, where the subnebula was denser, which explains its shorter formation timescale. 
This relatively fast formation prevented the escape of accretional heat, which may have led to ice melt and differentiation: the separation of the rocks and ice. The rocks settled to the center, forming the core. In this respect, Ganymede is different from Callisto, which apparently failed to melt and differentiate early due to loss of the accretional heat during its slower formation. This hypothesis explains why the two Jovian moons look so dissimilar, despite their similar mass and composition. Alternative theories explain Ganymede's greater internal heating on the basis of tidal flexing or more intense pummeling by impactors during the Late Heavy Bombardment. In the latter case, modeling suggests that differentiation would become a runaway process at Ganymede but not Callisto. After formation, Ganymede's core largely retained the heat accumulated during accretion and differentiation, only slowly releasing it to the ice mantle. The mantle, in turn, transported it to the surface by convection. The decay of radioactive elements within rocks further heated the core, causing increased differentiation: an inner, iron–iron-sulfide core and a silicate mantle formed. With this, Ganymede became a fully differentiated body. By comparison, the radioactive heating of undifferentiated Callisto caused convection in its icy interior, which effectively cooled it and prevented large-scale melting of ice and rapid differentiation. The convective motions in Callisto have caused only a partial separation of rock and ice. Today, Ganymede continues to cool slowly. The heat being released from its core and silicate mantle enables the subsurface ocean to exist, whereas the slow cooling of the liquid Fe–FeS core causes convection and supports magnetic field generation. The current heat flux out of Ganymede is probably higher than that out of Callisto. ## Exploration Several spacecraft have performed close flybys of Ganymede: two Pioneer and two Voyager spacecraft made a single flyby each between 1973 and 1979; the Galileo spacecraft made six passes between 1996 and 2000; and the Juno spacecraft performed two flybys in 2019 and 2021. No spacecraft has yet orbited Ganymede, but the JUICE mission, which launched in April 2023, intends to do so. ### Completed flybys The first spacecraft to approach close to Ganymede was Pioneer 10, which performed a flyby in 1973 as it passed through the Jupiter system at high speed. Pioneer 11 made a similar flyby in 1974. Data sent back by the two spacecraft was used to determine the moon's physical characteristics and provided images of the surface with up to 400 km (250 mi) resolution. Pioneer 10's closest approach was 446,250 km, about 85 times Ganymede's diameter. Voyager 1 and Voyager 2 both studied Ganymede when passing through the Jupiter system in 1979. Data from those flybys were used to refine the size of Ganymede, revealing it was larger than Saturn's moon Titan, which was previously thought to have been bigger. Images from the Voyagers provided the first views of the moon's grooved surface terrain. The Pioneer and Voyager flybys were all at large distances and high speeds, as they flew on unbound trajectories through the Jupiter system. Better data can be obtained from a spacecraft which is orbiting Jupiter, as it can encounter Ganymede at a lower speed and adjust the orbit for a closer approach. In 1995, the Galileo spacecraft entered orbit around Jupiter and between 1996 and 2000 made six close flybys of Ganymede. 
These flybys were denoted G1, G2, G7, G8, G28 and G29. During the closest flyby (G2), Galileo passed just 264 km from the surface of Ganymede (five percent of the moon's diameter), which remains the closest approach by any spacecraft. During the G1 flyby in 1996, Galileo instruments detected Ganymede's magnetic field. Data from the Galileo flybys was used to discover the sub-surface ocean, which was announced in 2001. High spatial resolution spectra of Ganymede taken by Galileo were used to identify several non-ice compounds on the surface. The New Horizons spacecraft also observed Ganymede, but from a much larger distance as it passed through the Jupiter system in 2007 (en route to Pluto). The data were used to perform topographic and compositional mapping of Ganymede. Like Galileo, the Juno spacecraft orbited around Jupiter. On December 25, 2019, Juno performed a distant flyby of Ganymede during its 24th orbit of Jupiter, at a range of 97,680 to 109,439 kilometers (60,696 to 68,002 mi). This flyby provided images of the moon's polar regions. In June 2021, Juno performed a second flyby, at a closer distance of 1,038 kilometers (645 mi). This encounter was designed to provide a gravity assist to reduce Juno's orbital period from 53 days to 43 days. Additional images of the surface were collected. ### Future missions The Jupiter Icy Moons Explorer (JUICE) will be the first to enter orbit around Ganymede itself. JUICE was launched on 14 April 2023. It is intended to perform its first flyby of Ganymede in 2031, then enter orbit of the moon in 2032. When the spacecraft consumes its propellant, JUICE is planned to be deorbited and impact Ganymede in February 2034. In addition to JUICE, NASA's Europa Clipper, which is scheduled to launch in October 2024, will conduct four close flybys of Ganymede beginning in 2030. ### Cancelled proposals Several other missions have been proposed to fly by or orbit Ganymede, but were either not selected for funding or cancelled before launch. The Jupiter Icy Moons Orbiter would have studied Ganymede in greater detail. However, the mission was canceled in 2005. Another old proposal was called The Grandeur of Ganymede. A Ganymede orbiter based on the Juno probe was proposed in 2010 for the Planetary Science Decadal Survey. The mission was not supported, with the Decadal Survey preferring the Europa Clipper mission instead. The Europa Jupiter System Mission had a proposed launch date in 2020, and was a joint NASA and ESA proposal for exploration of many of Jupiter's moons including Ganymede. In February 2009 it was announced that ESA and NASA had given this mission priority ahead of the Titan Saturn System Mission. The mission was to consist of the NASA-led Jupiter Europa Orbiter, the ESA-led Jupiter Ganymede Orbiter, and possibly a JAXA-led Jupiter Magnetospheric Orbiter. The NASA and JAXA components were later cancelled, and ESA's appeared likely to be cancelled too, but in 2012 ESA announced it would go ahead alone. The European part of the mission became the Jupiter Icy Moon Explorer (JUICE). The Russian Space Research Institute proposed a Ganymede lander astrobiology mission called Laplace-P, possibly in partnership with JUICE. If selected, it would have been launched in 2023. The mission was cancelled due to a lack of funding in 2017. ## Gallery ## See also - Cold trap (astronomy) - Jupiter's moons in fiction - List of craters on Ganymede - List of geological features on Ganymede - List of natural satellites - Lunar and Planetary Institute
32,623,684
Phellinus ellipsoideus
1,143,636,052
Species of fungus in the family Hymenochaetaceae found in China
[ "Fungi described in 2008", "Fungi of China", "Medicinal fungi", "Phellinus", "Taxa named by Bao-Kai Cui", "Taxa named by Yu-Cheng Dai" ]
Phellinus ellipsoideus (formerly Fomitiporia ellipsoidea) is a species of polypore fungus in the family Hymenochaetaceae, a specimen of which produced the largest fungal fruit body ever recorded. Found in China, the fruit bodies produced by the species are brown, woody basidiocarps that grow on dead wood, where the fungus feeds as a saprotroph. The basidiocarps are perennial, allowing them to grow very large under favourable circumstances. They are resupinate, measuring 30 centimetres (12 in) or more in length, though typically extending less than a centimetre from the surface of the wood. P. ellipsoideus produces distinct ellipsoidal spores, after which it is named, and unusual setae. These two features allow it to be readily differentiated microscopically from other, similar species. Chemical compounds isolated from the species include several steroidal compounds. These may have pharmacological applications, but further research is needed. The species was named in 2008 by Bao-Kai Cui and Yu-Cheng Dai based on collections made in Fujian Province. It was placed in the genus Fomitiporia, but later analysis suggests that it is more closely related to Phellinus species. It was revealed in 2011 that a very large fruit body, measuring up to 1,085 cm (427 in) in length, had been found on Hainan Island. The specimen, which was 20 years old, was estimated to weigh between 400 and 500 kilograms (880 and 1,100 lb). This was markedly larger than the previously largest recorded fungal fruit body, a specimen of Rigidoporus ulmarius found in the United Kingdom that had a circumference of 425 cm (167 in). The findings were formally published in September 2011, but attracted international attention from the mainstream press prior to this. ## Taxonomy and phylogenetics The species was first described in 2008 by Bao-Kai Cui and Yu-Cheng Dai, both of the Beijing Forestry University. Five specimens of the then-unknown species were collected during field work in the Wanmulin Nature Reserve (), Jian'ou, Fujian Province. The pair named the species Fomitiporia ellipsoidea in an article in the journal Mycotaxon. The specific name ellipsoidea is from the Latin meaning "ellipsoid", and refers to the shape of the spores. Species of the order Hymenochaetales, to which this taxon belongs, make up 25% of the over 700 species of polypore found in China. Phylogenetic analysis of large subunit and internal transcribed spacer DNA sequence data, the results of which were published in 2012, concluded that the species then known as F. ellipsoidea was closely related to Phellinus gabonensis, P. caribaeo-quercicolus and the newly described P. castanopsidis. The four species share morphological characteristics, and form a monophyletic clade. This clade resolved more closely with the Phellinus type species P. igniarius than it did with the Fomitiporia type species F. langloisii, and so the authors proposed a transference of F. ellipsoidea to Phellinus, naming the new combination Phellinus ellipsoideus. While the taxonomic database Index Fungorum follows the 2012 study, MycoBank continues to list Fomitiporia ellipsoidea as the correct binomial. Some mycologists consider Fomitiporia to be a synonym of Phellinus anyway. ## Description Phellinus ellipsoideus produces resupinate fruit bodies that are hard and woody, whether fresh or dry. 
The original description characterized them as measuring up to 30 centimetres (12 in) "or more" in length, 20 cm (7.9 in) in width, and extending 8 mm (0.3 in) from the wood on which they grow at their thickest point. The outermost layer is typically yellow to yellowish-brown, measuring 2 mm (0.08 in) in thickness. The shiny surface of the hymenium, the spore-producing section of the fruit body, is covered in pores and ranges in colour from yellow-brown to rust-brown. There are between 5 and 8 pores per millimetre. The tubes are up to 8 mm (0.3 in) in depth, have the same colouration as the surface of the hymenium, and are distinctively layered. They are also hard and woody. The very thin yellow-brown layer of flesh measures less than 0.5 mm (0.02 in) in width. As with much of the rest of the fruit body, it is firm, solid, and reminiscent of wood. The fruit bodies lack any odour or taste. ### Microscopic features Phellinus ellipsoideus produces basidiospores that are ellipsoidal or broadly ellipsoidal in shape. The spore shape is one of the features that makes the species readily recognisable microscopically, and the spores measure from 4.5 to 6.1 by 3.5 to 5 micrometres (μm). The average spore length is 5.25 μm, while the average width is 4.14 μm. The spores have thick cell walls, and are hyaline. They are strongly cyanophilous, meaning that the cell walls will readily absorb methyl blue stain. In addition, they are weakly dextrinoid, meaning that they will stain slightly reddish-brown in Melzer's reagent or Lugol's solution. The spores are borne on barrel-shaped basidia, with four spores per basidium, measuring 8 to 12 by 6 to 7 μm. There are also basidioles, which are similar in shape to the basidia, but slightly smaller. In addition to the spore shape, the species is readily identified with the use of a microscope because of its setae. Setae are a kind of unusual cystidia unique to the family Hymenochaetaceae, and, in P. ellipsoideus, are found in the hymenium. In shape, the setae are ventricose, with distinctive hooks on their tips. In colour, they are yellow-brown, and they have thick cell walls. They measure 20 to 30 by 10 to 14 μm. Neither more standard cystidia nor cystidioles (underdeveloped cystidia) can be found in the species, but there are a number of rhomboid crystals throughout the hymenium and the flesh. Most of the tissue of a fungal fruit body is made up of hyphae, which can be of three forms: generative, skeletal and binding. In P. ellipsoideus, the tissue is dominated by skeletal hyphae, but also has generative hyphae; it lacks binding hyphae. For this reason, the hyphal structure of P. ellipsoideus is referred to as "dimitic". The hyphae are divided into separate cells by septae, and lack clamp connections. The skeletal hyphae do not react with Melzer's reagent or Lugol's solution, and are not cyanophilous. While the hyphae will darken when a solution of potassium hydroxide is applied (the KOH test), they remain otherwise unchanged. The main structure of the fruit body consists primarily of an agglutination (mass) of interwoven skeletal hyphae, which are golden- to rust-brown. The hyphae are unbranched, forming long tubes 2 to 3.6 μm in diameter, enveloping a lumen of variable thickness. There are also hyaline generative hyphae. These hyphae have thinner walls than the skeletal hyphae, and are also septate (possessing of septa), but are sometimes branched. They measure 2 to 3 μm in diameter. 
The flesh, again, is primarily made up of skeletal hyphae with some generative hyphae. The thick-walled skeletal hyphae are a yellow-brown to rust brown, and are slightly less agglutinate. The hyphae in the flesh are a little smaller; the skeletal hyphae measure 1.8 to 3.4 μm in diameter, while the generative hyphae measure 1.5 to 2.6 μm in diameter. ### Similar species A cogeneric species potentially similar to Phellinus ellipsoideus is P. caribaeo-quercicola. The latter species shares the hooked hymenial setae and ellipsoidal to broadly ellipsoidal spores. However, details of the fruit body differ, and the spores are hyaline to yellowish, and not dextrinoid. Further, the species is known only from tropical America, where it grows on the Cuban oak. P. castanopsidis, newly described in 2013, is not perennial, and has a pale greyish-brown pore surface. The spores are also slightly larger than those of P. ellipsoideus. Phellinus ellipsoideus differs from species of Fomitiporia in two key respects. Its spores are less dextrinoid than those of the genus and their shape is atypical. Other than this, it is typical of the genus, according to the original description. Five species of Fomitiporia, F. bannaensis, F. pseudopunctata, F. sonorae, F. sublaevigata and F. tenuis, share with P. ellipsoideus the resupinate fruit bodies and the setae in the hymenium. Despite this, all of them but P. ellipsoideus have straight hymenial setae, and all of them have spores that are spherical or almost spherical, which is much more typical of the genus. F. uncinata (formerly Phellinus uncinatus) has hooked hymenial setae, and the spores are, as with P. ellipsoideus, thick-walled and dextrinoid. The species can be differentiated by the fact the spores are spherical or nearly so, and somewhat larger than those of P. ellipsoideus, measuring 5.5 to 7 by 5 to 6.5 μm. The species is also known only from tropical America, where it grows on bamboo. ## Distribution and ecology Phellinus ellipsoideus has been recorded growing on the fallen wood of oaks of the subgenus Cyclobalanopsis, as well as the wood of other flowering plants. The species favours the trunks of trees, where it feeds as a saprotroph, causing white rot. P. ellipsoideus fruit bodies are perennial growers, allowing them to, in the correct circumstances, grow very large. The species is found in the tropical and subtropical areas of China; it has been recorded in Fujian Province and Hainan Province. It is not a common species, and fruit bodies are only occasionally encountered. ### Largest fruit body In 2010, Cui and Dai were performing field work in tropical woodland on Hainan Island, China, studying wood-rotting fungi. The pair uncovered a very large P. ellipsoideus fruit body on a fallen Quercus asymmetrica log, which turned out to be the largest fungal fruit body ever documented. The fruit body was found at an altitude of 958 metres (3,143 ft), in old-growth forest. They were initially unable to identify the specimen as P. ellipsoideus, because of its large size, but tests revealed its identity after samples were taken for analysis. After their initial encounter with the large fruit body, Cui and Dai returned to it on two subsequent occasions, so that they could study it further. Nicholas P. Money, executive editor of Fungal Biology, in which the findings were published, praised the pair for not removing the fruit body, thereby allowing it "to continue its business and to marvel visitors to Hainan Island". 
The discovery was formally published in Fungal Biology in September 2011, but gained attention in the mainstream press worldwide prior to this. The fruit body was 20 years old, and up to 1,085 cm (35.60 ft) long. It was between 82 and 88 cm (32 and 35 in) wide, and between 4.6 and 5.5 cm (1.8 and 2.2 in) thick. The total volume of the fruit body was somewhere between 409,000 and 525,000 cubic centimetres (25,000 and 32,000 in<sup>3</sup>). It was estimated to weigh between 400 and 500 kilograms (880 and 1,100 lb), based on three samples from different areas of the fruit body. The specimen had an average of 49 pores per square millimetre, roughly equivalent to 425 million pores. Money estimated that, based on spore output from other polypore species, the fruit body would be able to release a trillion spores a day. Prior to this discovery, the largest recorded fruit body of any fungus was a specimen of Rigidoporus ulmarius, found in Kew Gardens, United Kingdom. It measured 150 by 133 cm (59 by 52 in) in diameter, and had a circumference of 425 cm (167 in). While the largest individual fruit bodies belong to polypores, individual organisms belonging to certain Armillaria species can grow extremely large. In 2003, a large specimen of A. solidipes (synonymous with A. ostoyae) was recorded in the Blue Mountains, Oregon, covering an area of 965 hectares (2,380 acres). At the time, the organism was estimated to be 8650 years old. Prior to this, an A. gallica (synonymous with A. bulbosa) organism was the largest recorded, covering 15 hectares (37 acres), weighing approximately 9,700 kilograms (21,400 lb). However, whilst these organisms cover a large area, the individual fruit bodies (the mushrooms) are not remarkably large, typically with stems of up to 10 centimetres (3.9 in) in height and caps less than 15 centimetres (5.9 in) in diameter, weighing from 40 to 100 grams (1.4 to 3.5 oz) each. ## Medicinal uses and biochemistry The fruit bodies of both Phellinus and Fomitiporia species have seen use in traditional medicine for gastrointestinal cancer and heart disease. In 2011, research into the chemistry of P. ellipsoideus was published in the journal Mycosystema by Cui, along with Hai-Ying Bao and Bao-Kai Liu of the Jilin Agricultural University. The research discussed how several chemical compounds could be isolated from P. ellipsoideus with petroleum ether and (after defatting) chloroform. The nine compounds isolated from these extracts included the common ergosterol and its derivative ergosterol peroxide. Two of the compounds, ergosta-7,22,25-triene-3-one and benzo[1,2-b:5,4-b']difuran-3,5-dione-8-methyl formate, were new to science. All of these chemicals were steroidal; such compounds play important physiological roles in cell membranes. Steroidal compounds, like those isolated from P. ellipsoideus, can have pharmacological applications; for instance, some can act as anti-inflammatories (including ergosterol) or inhibit tumour growth. The 2011 study concluded that, as P. ellipsoideus contained a large number of diverse steroidal compounds, there may be comparatively high pharmacological activity in the fungus; however, more research would be needed to confirm this. Later publications echoed this research, claiming that the fungus has "potential medicinal functions". Research published in 2012 named fomitiporiaester A, a natural furan derivative isolated from methanolic extract of P. ellipsoideus fruit bodies. 
The chemical, methyl 3,5-dioxo-1,3,5,7-tetrahydrobenzo[1,2-c:4,5-c']difuran-4-carboxylate, displayed significant antitumour ability in a mouse model. ## Industrial uses Phellinus ellipsoideus is used to make MuSkin, or mushroom leather, a vegan alternative to leather. ## See also - Largest organisms - Largest fungal fruit bodies
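The dimensions, volume, weight and pore figures quoted for the record fruit body in the "Largest fruit body" section hang together arithmetically. The sketch below treats the fruit body as a uniform slab, which it is not, so the outputs are order-of-magnitude cross-checks rather than re-measurements; the 450 kg and the midpoint dimensions are simply the middles of the quoted ranges.

```python
# Slab-model cross-checks of the record Phellinus ellipsoideus fruit body.
LENGTH_CM = 1085.0
WIDTH_CM = (82 + 88) / 2            # midpoint of the quoted 82-88 cm range
THICKNESS_CM = (4.6 + 5.5) / 2      # midpoint of the quoted 4.6-5.5 cm range
PORES_PER_MM2 = 49
SPORES_PER_DAY = 1e12               # Money's estimate, from the text

area_cm2 = LENGTH_CM * WIDTH_CM
volume_cm3 = area_cm2 * THICKNESS_CM
total_pores = area_cm2 * 100 * PORES_PER_MM2    # 100 mm^2 per cm^2

print(f"Slab volume:     {volume_cm3:,.0f} cm^3 (quoted range: 409,000-525,000)")
print(f"Implied density: {450_000 / volume_cm3:.2f} g/cm^3 for a 450 kg body")
print(f"Total pores:     {total_pores / 1e6:,.0f} million (quoted: ~425 million)")
print(f"Spores per pore: ~{SPORES_PER_DAY / total_pores:,.0f} per day at a trillion spores/day")
```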
51,983
Canberra
1,173,279,114
Capital city of Australia
[ "1913 establishments in Australia", "Australian capital cities", "Canberra", "Capitals in Oceania", "Cities planned by Walter Burley Griffin", "Metropolitan areas of Australia", "Planned capitals", "Populated places established in 1913", "Populated places on the Murrumbidgee River" ]
Canberra (/ˈkænbərə/ KAN-bər-ə; Ngunawal: Ngambri) is the capital city of Australia. Founded following the federation of the colonies of Australia as the seat of government for the new nation, it is Australia's largest inland city and the eighth-largest Australian city overall. The city is located at the northern end of the Australian Capital Territory at the northern tip of the Australian Alps, the country's highest mountain range. As of June 2022, Canberra's estimated population was 456,692. The area chosen for the capital had been inhabited by Indigenous Australians for up to 21,000 years, with the principal group being the Ngunnawal people. European settlement commenced in the first half of the 19th century, as evidenced by surviving landmarks such as St John's Anglican Church and Blundells Cottage. On 1 January 1901, federation of the colonies of Australia was achieved. Following a long dispute over whether Sydney or Melbourne should be the national capital, a compromise was reached: the new capital would be built in New South Wales, so long as it was at least 100 mi (160 km) from Sydney. The capital city was founded and formally named as Canberra in 1913. A blueprint by American architects Walter Burley Griffin and Marion Mahony Griffin was selected after an international design contest, and construction commenced in 1913. Unusual among Australian cities, it is an entirely planned city. The Griffins' plan featured geometric motifs and was centred on axes aligned with significant topographical landmarks such as Black Mountain, Mount Ainslie, Capital Hill and City Hill. Canberra's mountainous location makes it the only mainland Australian city where snow-capped mountains can be seen in winter; although snow in the city itself is uncommon. As the seat of the Government of Australia, Canberra is home to many important institutions of the federal government, national monuments and museums. This includes Parliament House, Government House, the High Court and the headquarters of numerous government agencies. It is the location of many social and cultural institutions of national significance such as the Australian War Memorial, the Australian National University, the Royal Australian Mint, the Australian Institute of Sport, the National Gallery, the National Museum and the National Library. The city is home to many important institutions of the Australian Defence Force including the Royal Military College Duntroon and the Australian Defence Force Academy. It hosts all foreign embassies in Australia as well as regional headquarters of many international organisations, not-for-profit groups, lobbying groups and professional associations. Canberra has been ranked among the world's best cities to live and visit. Although the Commonwealth Government remains the largest single employer in Canberra, it is no longer the majority employer. Other major industries have developed in the city, including in health care, professional services, education and training, retail, accommodation and food, and construction. Compared to the national averages, the unemployment rate is lower and the average income higher; tertiary education levels are higher, while the population is younger. At the 2016 Census, 32% of Canberra's inhabitants were reported as having been born overseas. Canberra's design is influenced by the garden city movement and incorporates significant areas of natural vegetation. Its design can be viewed from its highest point at the Telstra Tower and the summit of Mount Ainslie. 
Other notable features include the National Arboretum, born out of the 2003 Canberra bushfires, and Lake Burley Griffin, named for the city's architects. Highlights in the annual calendar of cultural events include Floriade, the largest flower festival in the Southern Hemisphere, the Enlighten Festival, Skyfire, the National Multicultural Festival and Summernats. Canberra's main sporting venues are Canberra Stadium and Manuka Oval. The city is served with domestic and international flights at Canberra Airport, while interstate train and coach services depart from Canberra railway station and the Jolimont Centre respectively. City Interchange is the main hub of Canberra's bus and light rail transport network. ## Name The word "Canberra" is derived from the name of a local Ngunnawal clan who resided in the area and were referred to by the early British colonists as either the Canberry or Nganbra tribe. Joshua John Moore, the first European land-owner in the region, named his grant "Canberry" in 1823 after these people. "Canberry Creek" and "Canberry" first appeared on regional maps from 1830, while the derivative name "Canberra" started to appear from around 1857. Numerous local commentators, including the Ngunnawal elder Don Bell, have speculated upon possible meanings of "Canberra" over the years. These include "meeting place", "woman's breasts" and "the hollow between a woman's breasts". Alternative proposals for the name of the city during its planning included Austral, Australville, Aurora, Captain Cook, Caucus City, Cookaburra, Dampier, Eden, Eucalypta, Flinders, Gonebroke, Home, Hopetoun, Kangaremu, Myola, Meladneyperbane, New Era, Olympus, Paradise, Shakespeare, Sydmelperadbrisho, Swindleville, The National City, Union City, Unison, Wattleton, Wheatwoolgold, Yass-Canberra. ## History ### First Nations peoples The first peoples of the Canberra area are the Ngunnawal people, while the Ngarigo lived immediately to the south of the ACT (including some of the far-southern suburbs of Canberra), the Wandandian to the east, the Walgulu also to the south, Gandangara people to the north and Wiradjuri to the north-west. The first British settlers into the Canberra area described two clans of Ngunnawal people resident in the vicinity. The Canberry or Nganbra clan lived mostly around Sullivan's Creek and had ceremonial grounds at the base of Galambary (Black Mountain), while the Pialligo clan had land around what is now Canberra Airport. The people living here carefully managed and cultivated the land with fire, farmed yams and hunted for food. Archaeological evidence of settlement in the region includes inhabited rock shelters, rock paintings and engravings, burial places, camps and quarry sites as well as stone tools and arrangements. Artefacts suggest early human activity occurred in the area up to 21,000 years ago. Ngunnawal men continue to conduct ceremony on the banks of the Murrumbidgee River. They travel upstream as they receive their Totems and corresponding responsibilities for land management. 'Murrum' means 'Pathway' and 'Bidgee' means 'Boss'. The submerged limestone caves beneath Lake Burley Griffin contained Aboriginal rock art, some of the only sites in the region. Galambary (Black Mountain) is an important Aboriginal meeting and business site, predominantly for men’s business. According to the Ngunnawal and Ngambri people, Mt Ainslie is primarily a place of women’s business. 
Black Mountain and Mount Ainslie are referred to as women's breasts. Galambary was also used by Ngunnawal people as an initiation site, with the mountain itself said to represent the growth of a boy into a man. ### British exploration and colonisation In October 1820, Charles Throsby led the first British expedition to the area. Four other expeditions occurred between 1820 and 1823, with the first accurate map being produced by the explorer Mark John Currie in June 1823. By this stage the area had become known as the Limestone Plains. British settlement of the area probably dates from late 1823, when a sheep station was formed on what is now the Acton Peninsula by James Cowan, the head stockman employed by Joshua John Moore. Moore had received a land grant in the region in 1823 and formally applied to purchase the site on 16 December 1826. He named the property "Canberry". On 30 April 1827, Moore was told by letter that he could retain possession of 1,000 acres (405 ha) at Canberry. Other colonists soon followed Moore's example to take up land in the region. Around 1825 James Ainslie, working on behalf of the wealthy merchant Robert Campbell, arrived to establish a sheep station. He was guided to the region by a local Aboriginal girl who showed him the fine lands of her Pialligo clan. The area then became the property of Campbell and it was initially named Pialligo before Campbell changed it to the Scottish title of Duntroon. Campbell's family later built the imposing stone house that is now the officers' mess of the Royal Military College, Duntroon. The Campbells sponsored settlement by other farming families to work their land, such as the Southwells of "Weetangera". Other notable early colonists included Henry Donnison, who established the Yarralumla estate (now the site of the official residence of the Governor-General of Australia) in 1827, and John Palmer, who employed Duncan Macfarlane to form the Jerrabomberra property in 1828. A year later, John MacPherson established the Springbank estate, becoming the first British owner-occupier in the region. The Anglican church of St John the Baptist, in the suburb of Reid, was consecrated in 1845 and is now the oldest surviving public building in the city. St John's churchyard contains the earliest graves in the district. It has been described as a "sanctuary in the city", remaining a small English village-style church even as the capital grew around it. Canberra's first school, St John's School (now a museum), was situated next to the church and opened in the same year, 1845. It was built to educate local settlers' children, including the Blundell children, who lived in nearby Blundells Cottage. As the European presence increased, the Indigenous population dwindled, largely due to the destruction of their society, dislocation from their lands, introduced diseases such as influenza, smallpox and measles, and the effects of alcohol. ### Creation of the nation's capital The district's change from a rural area in New South Wales to the national capital started during debates over federation in the late 19th century. Following a long dispute over whether Sydney or Melbourne should be the national capital, a compromise was reached: the new capital would be built in New South Wales, so long as it was at least 100 mi (160 km) from Sydney, with Melbourne to be the temporary seat of government while the new capital was built. 
A survey was conducted across several sites in New South Wales, with Bombala, southern Monaro, Orange, Yass, Albury, Tamworth, Armidale, Tumut and Dalgety all discussed. Dalgety was chosen by the federal parliament, which passed the Seat of Government Act 1904 confirming Dalgety as the site of the nation's capital. However, the New South Wales government refused to cede the required territory as it did not accept the site. In 1906, the New South Wales Government finally agreed to cede sufficient land provided that it was in the Yass-Canberra region, as this site was closer to Sydney. Newspaper proprietor John Gale circulated a pamphlet titled 'Dalgety or Canberra: Which?' advocating Canberra to every member of the Commonwealth's seven state and federal parliaments. By many accounts, it was decisive in the selection of Canberra as the site in 1908, as was the survey work done by the government surveyor Charles Scrivener. The NSW government ceded the district to the federal government in 1911 and the Federal Capital Territory was established. An international design competition was launched by the Department of Home Affairs on 30 April 1911, closing on 31 January 1912. The competition was boycotted by the Royal Institute of British Architects, the Institution of Civil Engineers and their affiliated bodies throughout the British Empire because the Minister for Home Affairs King O'Malley insisted that the final decision was for him to make rather than for an expert in city planning. A total of 137 valid entries were received. O'Malley appointed a three-member board to advise him but they could not reach unanimity. On 24 May 1912, O'Malley came down on the side of the majority of the board, with the design by Walter Burley Griffin and Marion Mahony Griffin of Chicago, Illinois, United States, being declared the winner. Second was Eliel Saarinen of Finland and third was Alfred Agache of Brazil but resident in Paris, France. O'Malley then appointed a six-member board to advise him on the implementation of the winning design. On 25 November 1912, the board advised that it could not support Griffin's plan in its entirety and suggested an alternative plan of its own devising. This plan incorporated the best features of the three place-getting designs as well as of a fourth design by H. Caswell, R.C.G. Coulter and W. Scott-Griffiths of Sydney, the rights to which it had purchased. It was this composite plan that was endorsed by Parliament and given formal approval by O'Malley on 10 January 1913. In 1913, Griffin was appointed Federal Capital Director of Design and Construction, and construction began. On 23 February, King O'Malley drove the first peg in the construction of the future capital city. In 1912, the government invited suggestions from the public as to the name of the future city. Almost 750 names were suggested. At midday on 12 March 1913, Lady Denman, the wife of Governor-General Lord Denman, announced that the city would be named "Canberra" at a ceremony at Kurrajong Hill, which has since become Capital Hill and the site of the present Parliament House. Canberra Day is a public holiday observed in the ACT on the second Monday in March to celebrate the founding of Canberra. After the ceremony, bureaucratic disputes hindered Griffin's work; a Royal Commission in 1916 ruled his authority had been usurped by certain officials and his original plan was reinstated. 
Griffin's relationship with the Australian authorities was strained, and a lack of funding meant that, by the time he was fired in 1920, little work had been done. By this time, Griffin had revised his plan, overseen the earthworks of major avenues and established the Glenloch Cork Plantation. ### Development throughout 20th century The Commonwealth government purchased the pastoral property of Yarralumla in 1913 to provide an official residence for the Governor-General of Australia in the new capital. Renovations began in 1925 to enlarge and modernise the property. In 1927, the property was officially named Government House. On 9 May that year, the Commonwealth parliament moved to Canberra with the opening of the Provisional Parliament House. The Prime Minister Stanley Bruce had officially taken up residence in The Lodge a few days earlier. Planned development of the city slowed significantly during the depression of the 1930s and during World War II. Some projects planned for that time, including Roman Catholic and Anglican cathedrals, were never completed. (Nevertheless, in 1973 the Roman Catholic parish church of St. Christopher was remodelled into St. Christopher's Cathedral, Manuka, serving the Archdiocese of Canberra and Goulburn. It is the only cathedral in Canberra.) From 1920 to 1957, three bodies (successively the Federal Capital Advisory Committee, the Federal Capital Commission, and the National Capital Planning and Development Committee) continued to plan the further expansion of Canberra in the absence of Griffin. However, they were only advisory, and development decisions were made without consulting them, which increased inefficiency. The largest event in Canberra up to World War II was the 24th Meeting of ANZAAS in January 1939. The Canberra Times described it as "a signal event ... in the history of this, the world's youngest capital city". The city's accommodation was not nearly sufficient to house the 1,250 delegates, and a tent city had to be set up on the banks of the Molonglo River. One of the prominent speakers was H. G. Wells, who was a guest of the Governor-General Lord Gowrie for a week. This event coincided with a heatwave across south-eastern Australia during which the temperature in Canberra reached 108.5 degrees Fahrenheit (42.5 °C) on 11 January. On Friday, 13 January, the Black Friday bushfires caused 71 deaths in Victoria, and Wells accompanied the Governor-General on his tour of areas threatened by fires. Immediately after the end of the war, Canberra was criticised for resembling a village and its disorganised collection of buildings was deemed ugly. Canberra was often derisively described as "several suburbs in search of a city". Prime Minister Sir Robert Menzies regarded the state of the national capital as an embarrassment. Over time his attitude changed from one of contempt to that of championing its development. He fired two ministers charged with the development of the city over their poor performance. Menzies remained in office for over a decade and in that time the development of the capital sped up rapidly. The population grew by more than 50 per cent in every five-year period from 1955 to 1975. Several government departments, together with their public servants, were moved to Canberra from Melbourne following the war. Government housing projects were undertaken to accommodate the city's growing population. 
The National Capital Development Commission (NCDC) was formed in 1957 with executive powers and ended four decades of dispute over the shape and design of Lake Burley Griffin, the centrepiece of Griffin's design; construction of the lake was completed in 1964 after four years of work. The completion of the lake finally laid the platform for the development of Griffin's Parliamentary Triangle. Since the initial construction of the lake, various buildings of national importance have been constructed on its shores. The newly established Australian National University was expanded, and sculptures and monuments were built. A new National Library was constructed within the Parliamentary Triangle, followed by the High Court and the National Gallery. Suburbs in Canberra Central (often referred to as North Canberra and South Canberra) were further developed in the 1950s, and urban development in the districts of Woden Valley and Belconnen commenced in the mid and late 1960s respectively. Many of the new suburbs were named after Australian politicians such as Barton, Deakin, Reid, Braddon, Curtin, Chifley and Parkes. On 9 May 1988, a larger and permanent Parliament House was opened on Capital Hill as part of Australia's bicentenary celebrations. The Commonwealth Parliament moved there from the Provisional Parliament House, now known as Old Parliament House. ### Self-government In December 1988, the Australian Capital Territory was granted full self-government by the Commonwealth Parliament, a step proposed as early as 1965. Following the first election on 4 March 1989, a 17-member Legislative Assembly sat at temporary offices at 1 Constitution Avenue, Civic, on 11 May 1989. Permanent premises were opened on London Circuit in 1994. The Australian Labor Party formed the ACT's first government, led by the Chief Minister Rosemary Follett, who made history as Australia's first female head of government. On 18 January 2003, parts of Canberra were engulfed by bushfires that killed four people, injured 435 and destroyed more than 500 homes as well as the major research telescopes of the Australian National University's Mount Stromlo Observatory. Throughout 2013, several events celebrated the 100th anniversary of the naming of Canberra. On 11 March 2014, the last day of the centennial year, the Canberra Centenary Column was unveiled in City Hill. Other works included The Skywhale, a hot air balloon designed by the sculptor Patricia Piccinini, and StellrScope by visual media artist Eleanor Gates-Stuart. On 7 February 2021, The Skywhale was joined by Skywhalepapa to create a Skywhale family, an event marked by Skywhale-themed pastries and beer produced by local companies as well as an art pop song entitled "We are the Skywhales". In 2014, Canberra was named the world's best city to live in by the Organisation for Economic Co-operation and Development, and in 2017 it was named the third-best city in the world to visit by Lonely Planet. ## Geography Canberra covers an area of 814.2 km² (314.4 sq mi) and is located near the Brindabella Ranges (part of the Australian Alps), approximately 150 km (93 mi) inland from Australia's east coast. It has an elevation of approximately 580 m (1,900 ft) AHD; the highest point is Mount Majura at 888 m (2,913 ft). Other low mountains include Mount Taylor at 855 m (2,805 ft), Mount Ainslie at 843 m (2,766 ft), Mount Mugga Mugga at 812 m (2,664 ft) and Black Mountain at 812 m (2,664 ft). 
The native forest in the Canberra region consisted almost wholly of eucalypt species and provided a resource for fuel and domestic purposes. By the early 1960s, logging had depleted the eucalypt forests, and concern about water quality led to the forests being closed. Interest in forestry began in 1915 with trials of a number of species, including Pinus radiata, on the slopes of Mount Stromlo. Since then, plantations have been expanded, with the benefit of reducing erosion in the Cotter catchment, and the forests are also popular recreation areas. The urban environs of the city of Canberra straddle the Ginninderra plain, the Molonglo plain, the Limestone plain, and the Tuggeranong plain (Isabella's Plain). The Molonglo River, which flows across the Molonglo plain, has been dammed to form the national capital's iconic feature, Lake Burley Griffin. The Molonglo then flows into the Murrumbidgee north-west of Canberra, which in turn flows north-west toward the New South Wales town of Yass. The Queanbeyan River joins the Molonglo River at Oaks Estate just within the ACT. A number of creeks, including Jerrabomberra and Yarralumla Creeks, flow into the Molonglo and Murrumbidgee. Two other creeks, the Ginninderra and the Tuggeranong, have similarly been dammed to form Lakes Ginninderra and Tuggeranong. Until recently the Molonglo River had a history of sometimes calamitous floods; the area was a flood plain prior to the filling of Lake Burley Griffin. ### Climate Under the Köppen-Geiger classification, Canberra has an oceanic climate (Cfb). In January, the warmest month, the average high is approximately 29 °C (84 °F); in July, the coldest month, the average high drops to approximately 12 °C (54 °F). Frost is common in the winter months. Snow is rare in the CBD (central business district) because of its position on the leeward (eastern) side of the dividing range, but the surrounding areas receive annual snowfall through winter, and the snow-capped Brindabella Range can often be seen from the CBD. The last significant snowfall in the city centre was in 1968. Canberra is often affected by foehn winds, especially in winter and spring, as evidenced by its anomalously warm maxima relative to its altitude. The highest recorded maximum temperature was 44.0 °C (111.2 °F) on 4 January 2020. Winter 2011 was Canberra's warmest winter on record, approximately 2 °C (4 °F) above the average temperature. The lowest recorded minimum temperature was −10.0 °C (14.0 °F) on the morning of 11 July 1971. Light snow falls only once every few years and is usually not widespread, dissipating quickly. Canberra is protected from the west by the Brindabellas, which create a strong rain shadow over Canberra's valleys. Canberra averages 100.4 clear days annually. Annual rainfall is the third lowest of the capital cities (after Adelaide and Hobart) and is spread fairly evenly over the seasons, with late spring bringing the highest rainfall. Thunderstorms occur mostly between October and April, owing to summer heating and the influence of the mountains. The area is generally sheltered from westerly winds, though strong northwesterlies can develop. A cool, vigorous afternoon easterly change, colloquially referred to as a 'sea-breeze' or the 'Braidwood Butcher', is common during the summer months and often exceeds 40 km/h in the city. Canberra is also less humid than the nearby coastal areas. Canberra was severely affected by smoke haze during the 2019/2020 bushfires. On 1 January 2020, Canberra had the worst air quality of any major city in the world, with an AQI of 7700 (USAQI 949). 
### Urban structure Canberra is a planned city, and the inner-city area was originally designed by Walter Burley Griffin, a major 20th-century American architect. Within the central area of the city near Lake Burley Griffin, major roads follow a wheel-and-spoke pattern rather than a grid. Griffin's proposal had an abundance of geometric patterns, including concentric hexagonal and octagonal streets emanating from several radii. However, the outer areas of the city, built later, are not laid out geometrically. Lake Burley Griffin was deliberately designed so that the orientation of the components was related to various topographical landmarks in Canberra. The lake stretches from east to west and divides the city in two; a land axis perpendicular to the central basin stretches from Capital Hill (the eventual location of the new Parliament House, on a mound on the southern side) north-northeast across the central basin to the northern banks along Anzac Parade to the Australian War Memorial. This was designed so that, looking from Capital Hill, the War Memorial stood directly at the foot of Mount Ainslie. At the southwestern end of the land axis was Bimberi Peak, the highest mountain in the ACT, approximately 52 km (32 mi) south-west of Canberra. The straight edge of the circular segment that formed the central basin of Lake Burley Griffin was perpendicular to the land axis and designated the water axis, and it extended northwest towards Black Mountain. A line parallel to the water axis, on the northern side of the city, was designated the municipal axis. The municipal axis became the location of Constitution Avenue, which links City Hill in Civic Centre and both Market Centre and the Defence precinct on Russell Hill. Commonwealth Avenue and Kings Avenue were to run from Capital Hill on the southern side to City Hill and Market Centre on the north respectively, and they formed the western and eastern edges of the central basin. The area enclosed by the three avenues was known as the Parliamentary Triangle, and formed the centrepiece of Griffin's work. The Griffins assigned spiritual values to Mount Ainslie, Black Mountain, and Red Hill and originally planned to cover each of these in flowers. That way each hill would be covered with a single primary colour which represented its spiritual value. This part of their plan never came to fruition, as World War I slowed construction and planning disputes led to Griffin's dismissal by Prime Minister Billy Hughes after the war ended. The urban areas of Canberra are organised into a hierarchy of districts, town centres, group centres, local suburbs as well as other industrial areas and villages. There are seven residential districts, each of which is divided into smaller suburbs, and most of which have a town centre which is the focus of commercial and social activities. The districts were settled in the following chronological order: - Canberra Central, mostly settled in the 1920s and 1930s, with expansion up to the 1960s, 25 suburbs - Woden Valley, first settled in 1964, 12 suburbs - Belconnen, first settled in 1966, 27 suburbs (2 not yet developed) - Weston Creek, settled in 1969, 8 suburbs - Tuggeranong, settled in 1974, 18 suburbs - Gungahlin, settled in the early 1990s, 18 suburbs (3 not yet developed) - Molonglo Valley, development began in 2010, 13 suburbs planned. The Canberra Central district is substantially based on Walter Burley Griffin's designs. 
In 1967 the then National Capital Development Commission adopted the "Y Plan", which laid out future urban development in Canberra around a series of central shopping and commercial areas known as the 'town centres', linked by freeways; the layout roughly resembled the shape of the letter Y, with Tuggeranong at the base and Belconnen and Gungahlin at the ends of the arms. Development in Canberra has been closely regulated by government, both through planning processes and the use of crown lease terms that have tightly limited the use of parcels of land. Land in the ACT is held on 99-year crown leases from the national government, although most leases are now administered by the Territory government. There have been persistent calls for constraints on development to be liberalised, but also voices in support of planning consistent with the original 'bush capital' and 'urban forest' ideals that underpin Canberra's design. Many of Canberra's suburbs are named after former Prime Ministers, famous Australians or early settlers, or take their names from Aboriginal words. Street names typically follow a particular theme; for example, the streets of Duffy are named after Australian dams and reservoirs, the streets of Dunlop are named after Australian inventions, inventors and artists, and the streets of Page are named after biologists and naturalists. Most diplomatic missions are located in the suburbs of Yarralumla, Deakin and O'Malley. There are three light industrial areas: the suburbs of Fyshwick, Mitchell and Hume. ### Sustainability and the environment The average Canberran was responsible for 13.7 tonnes of greenhouse gas emissions in 2005. In 2012, the ACT Government legislated greenhouse gas targets to reduce its emissions by 40 per cent from 1990 levels by 2020 and 80 per cent by 2050, with no net emissions by 2060. In 2013 the government announced a target for 90% of electricity consumed in the ACT to be supplied from renewable sources by 2020, and in 2016 it raised this to an ambitious 100% by 2020. In 1996 Canberra became the first city in the world to set a vision of no waste, proposing 2010 as the target date. The strategy aimed to achieve a waste-free society by 2010 through the combined efforts of industry, government and the community. By early 2010, it was apparent that, though the initiative had reduced waste going to landfill, the original target of absolutely zero landfill waste by 2010 would not be met and would have to be delayed or revised. Plastic bags made of polyethylene polymer with a thickness of less than 35 μm were banned from retail distribution in the ACT from November 2011. The ban was introduced by the ACT Government in an effort to make Canberra more sustainable. Of all waste produced in the ACT, 75 per cent is recycled. Average household food waste in the ACT remains above the Australian average, costing an average of \$641 per household per annum. Canberra's annual Floriade festival features a large display of flowers every spring in Commonwealth Park. The organisers of the event have a strong environmental standpoint, promoting and using green energy, "green catering", sustainable paper, and the conservation of water. The event is also smoke-free. ## Government and politics ### Territory government There is no local council or city government for the city of Canberra. 
The Australian Capital Territory Legislative Assembly performs the roles of both a city council for the city and a territory government for the rest of the Australian Capital Territory. However, the vast majority of the population of the Territory resides in Canberra, and the city is therefore the primary focus of the ACT Government. The assembly consists of 25 members elected from five districts using proportional representation. The five districts are Brindabella, Ginninderra, Kurrajong, Murrumbidgee and Yerrabi, which each elect five members. The Chief Minister is elected by the Members of the Legislative Assembly (MLAs) and selects colleagues to serve as ministers alongside him or her in the Executive, known informally as the cabinet. Whereas the ACT has federally been dominated by Labor, the Liberals have been able to gain some footing in the ACT Legislative Assembly and were in government for a period of six and a half years between 1995 and 2001. Labor took back control of the Assembly in 2001. At the 2004 election, Chief Minister Jon Stanhope and the Labor Party won 9 of the 17 seats, allowing them to form the ACT's first majority government. Since 2008, the ACT has been governed by a coalition of Labor and the Greens. As of 2022, the Chief Minister was Andrew Barr from the Australian Labor Party. The Australian federal government retains some influence over the ACT government. In the administrative sphere, this is most frequently through the actions of the National Capital Authority, which is responsible for planning and development in areas of Canberra which are considered to be of national importance or which are central to Griffin's plan for the city, such as the Parliamentary Triangle, Lake Burley Griffin, major approach and processional roads, areas where the Commonwealth retains ownership of the land or undeveloped hills and ridge-lines (which form part of the Canberra Nature Park). The national government also retains a level of control over the Territory Assembly through the provisions of the Australian Capital Territory (Self-Government) Act 1988. This federal act defines the legislative power of the ACT assembly. ### Federal representation The ACT was given its first federal parliamentary representation in 1949 when it gained a seat in the House of Representatives, the Division of Australian Capital Territory. However, the ACT member could only vote on matters directly affecting the territory. In 1974, the ACT was allocated two Senate seats, and the House of Representatives seat was divided into two. A third was created in 1996, but was abolished in 1998 because of changes to the regional demographic distribution. At the 2019 election, a third seat was reintroduced as the Division of Bean. The House of Representatives seats have mostly been held by Labor and usually by comfortable margins. The Labor Party has polled at least seven percentage points more than the Liberal Party at every federal election since 1990 and their average lead since then has been 15 percentage points. The ALP and the Liberal Party held one Senate seat each until the 2022 election, when independent candidate David Pocock unseated the Liberal senator Zed Seselja. ### Judiciary and policing The Australian Federal Police (AFP) provides all of the constabulary services in the territory in a manner similar to state police forces, under a contractual agreement with the ACT Government. The AFP does so through its community policing arm, ACT Policing. 
People who have been charged with offences are tried either in the ACT Magistrates Court or, for more severe offences, the ACT Supreme Court. Until its closure in 2009, the Belconnen Remand Centre held ACT prisoners on remand, but sentenced prisoners were usually imprisoned in New South Wales. The Alexander Maconochie Centre was officially opened on 11 September 2008 by then Chief Minister Jon Stanhope. The total cost for construction was \$130 million. The ACT Civil and Administrative Tribunal deals with minor civil law actions and various other legal matters. Canberra has the lowest rate of crime of any capital city in Australia as of 2019. As of 2016, the most common crimes in the ACT were property-related: unlawful entry with intent and motor vehicle theft affected 2,304 and 966 people respectively (580 and 243 per 100,000 persons). Homicide and related offences (murder, attempted murder and manslaughter, but excluding driving causing death and conspiracy to murder) affect 1.0 per 100,000 persons, below the national average of 1.9 per 100,000. Rates of sexual assault (64.4 per 100,000 persons) are also below the national average (98.5 per 100,000). However, the 2017 crime statistics showed a rise in some types of crime, notably burglaries, thefts and assaults. ## Economy In February 2020, the unemployment rate in Canberra was 2.9%, lower than the national unemployment rate of 5.1%. As a result of low unemployment and substantial levels of public sector and commercial employment, Canberra has the highest average level of disposable income of any Australian capital city. The gross average weekly wage in Canberra is \$1,827 compared with the national average of \$1,658 (November 2019). The median house price in Canberra as of February 2020 was \$745,000, second only to Sydney among capital cities with more than 100,000 people, having surpassed Melbourne and Perth since 2005. The median weekly rent paid by Canberra residents is higher than rents in all other states and territories. As of January 2014 the median unit rent in Canberra was \$410 per week and median housing rent was \$460, making the city the third most expensive in the country. Factors contributing to this higher weekly rental market include higher average weekly incomes, restricted land supply, and inflationary clauses in the ACT Residential Tenancies Act. The city's main industry is public administration and safety, which accounted for 27.1% of Gross Territory Product in 2018–19 and employed 32.49% of Canberra's workforce. The headquarters of many Australian Public Service agencies are located in Canberra, and Canberra is also host to several Australian Defence Force establishments, most notably the Australian Defence Force headquarters and HMAS Harman, a naval communications centre that is being converted into a tri-service, multi-user depot. Other major sectors by employment include Health Care (10.54%), Professional Services (9.77%), Education and Training (9.64%), Retail (7.27%), Accommodation & Food (6.39%) and Construction (5.80%). The former RAAF Fairbairn, adjacent to Canberra Airport, was sold to the operators of the airport, but the base continues to be used for RAAF VIP flights. A growing number of software vendors have based themselves in Canberra, to capitalise on the concentration of government customers; these include Tower Software and RuleBurst. 
A consortium of private and government investors is making plans for a billion-dollar data hub, with the aim of making Canberra a leading centre of such activity in the Asia-Pacific region. A Canberra Cyber Security Innovation Node was established in 2019 to grow the ACT's cyber security sector and related space, defence and education industries. ## Demographics At the 2021 census, the population of Canberra was 453,558, up from 395,790 at the 2016 census and 355,596 at the 2011 census. Canberra has been the fastest-growing city in Australia in recent years, having grown 23.3% between 2011 and 2021. Canberrans are relatively young, highly mobile and well educated. The median age is 35 years, and only 12.7% of the population is aged over 65 years. Between 1996 and 2001, 61.9% of the population either moved to or from Canberra, which was the second highest mobility rate of any Australian capital city. As at May 2017, 43% of ACT residents aged 25–64 had a level of educational attainment equal to at least a bachelor's degree, significantly higher than the national average of 31%. According to statistics collected by the National Australia Bank and reported in The Canberra Times, Canberrans on average give significantly more money to charity than Australians in other states and territories, both in dollar terms and as a proportion of income. ### Ancestry and immigration At the 2016 census, the most commonly nominated ancestries included English, Australian and Irish. The 2016 census showed that 32% of Canberra's inhabitants were born overseas. Of inhabitants born outside Australia, the most prevalent countries of birth were England, China, India, New Zealand and the Philippines. In 2016, 1.6% of the population, or 6,476 people, identified as Indigenous Australians (Aboriginal Australians and Torres Strait Islanders). ### Language At the 2016 census, 72.7% of people spoke only English at home. The other languages most commonly spoken at home were Mandarin (3.1%), Vietnamese (1.1%), Cantonese (1%), Hindi (0.9%) and Spanish (0.8%). ### Religion On census night in 2016, approximately 50.0% of ACT residents described themselves as Christian (excluding not stated responses), the most common denominations being Catholic and Anglican; 36.2% described themselves as having no religion. ## Culture ### Education The two main tertiary institutions are the Australian National University (ANU) in Acton and the University of Canberra (UC) in Bruce, with over 10,500 and 8,000 full-time-equivalent students respectively. Established in 1946, the ANU has always had a strong research focus and is ranked among the leading universities in the world and the best in Australia by The Times Higher Education Supplement and the Shanghai Jiao Tong World University Rankings. There are two religious university campuses in Canberra: Signadou in the northern suburb of Watson is a campus of the Australian Catholic University; St Mark's Theological College in Barton is part of the secular Charles Sturt University. The ACT Government announced on 5 March 2020 that the CIT campus and an adjoining carpark in Reid would be leased to the University of New South Wales (UNSW) on a peppercorn lease, for it to develop as a campus for a new UNSW Canberra. UNSW released a master plan in 2021 for a 6,000-student campus to be realised over 15 years at a cost of \$1 billion. 
The Australian Defence College has two campuses: one at Weston, housing the Australian Command and Staff College (ACSC) and the Centre for Defence and Strategic Studies (CDSS), and the Australian Defence Force Academy (ADFA), which sits beside the Royal Military College, Duntroon, in the inner-northern suburb of Campbell. ADFA teaches military undergraduates and postgraduates and includes UNSW@ADFA, a campus of the University of New South Wales; Duntroon provides Australian Army officer training. Tertiary-level vocational education is also available through the Canberra Institute of Technology (CIT), with campuses in Bruce, Reid, Gungahlin, Tuggeranong and Fyshwick. The combined enrolment of the CIT campuses was over 28,000 students in 2019. Following the transfer of land in Reid for the new UNSW Canberra, a new CIT Woden is scheduled to be completed by 2025. In 2016 there were 132 schools in Canberra; 87 were operated by the government and 45 were private. During 2006, the ACT Government announced closures of up to 39 schools, to take effect from the end of the school year, and after a series of consultations unveiled its Towards 2020: Renewing Our Schools policy. As a result, some schools closed during the 2006–08 period, while others were merged; the creation of combined primary and secondary government schools was to proceed over a decade. The closure of schools provoked significant opposition. Most suburbs were planned to include a primary school and a nearby preschool; these were usually located near open areas where recreational and sporting activities were easily available. Canberra also has the highest percentage of non-government (private) school students in Australia, accounting for 40.6 per cent of ACT enrolments. ### Arts and entertainment Canberra is home to many national monuments and institutions such as the Australian War Memorial, the Australian Institute of Aboriginal and Torres Strait Islander Studies, the National Gallery of Australia, the National Portrait Gallery, the National Library, the National Archives, the Australian Academy of Science, the National Film & Sound Archive and the National Museum. Many Commonwealth government buildings in Canberra are open to the public, including Parliament House, the High Court and the Royal Australian Mint. Lake Burley Griffin is the site of the Captain James Cook Memorial and the National Carillon. Other sites of interest include the Australian–American Memorial, Commonwealth Park, Commonwealth Place, the Telstra Tower, the Australian National Botanic Gardens, the National Zoo and Aquarium, the National Dinosaur Museum, and Questacon – the National Science and Technology Centre. The Canberra Museum and Gallery in the city is a repository of local history and art, housing a permanent collection and visiting exhibitions. Several historic homes are open to the public: Lanyon and Tuggeranong Homesteads in the Tuggeranong Valley, Mugga-Mugga in Symonston, and Blundells Cottage in Parkes all display the lifestyle of the early European settlers. Calthorpes' House in Red Hill is a well-preserved example of a 1920s house from Canberra's very early days. Strathnairn Homestead is an historic building which also dates from the 1920s. Canberra has many venues for live music and theatre; two of the most notable are the Canberra Theatre and Playhouse, which hosts many major concerts and productions, and Llewellyn Hall (within the ANU School of Music), a world-class concert hall. The Street Theatre is a venue with less mainstream offerings. 
The Albert Hall was the city's first performing arts venue, opened in 1928. It was the original performance venue for theatre groups such as the Canberra Repertory Society. Stonefest was a large annual festival, for some years one of the biggest festivals in Canberra. It was downsized and rebranded as Stone Day in 2012. There are numerous bars and nightclubs which also offer live entertainment, particularly concentrated in the areas of Dickson, Kingston and the city. Most town centres have facilities for a community theatre and a cinema, and they all have a library. Popular cultural events include the National Folk Festival, the Royal Canberra Show, the Summernats car festival, the Enlighten festival, the National Multicultural Festival in February and the Celebrate Canberra festival held over 10 days in March in conjunction with Canberra Day. Canberra maintains sister-city relationships with both Nara, Japan and Beijing, China. Canberra has friendship-city relationships with both Dili, East Timor and Hangzhou, China. City-to-city relationships encourage communities and special interest groups both locally and abroad to engage in a wide range of exchange activities. The Canberra Nara Candle Festival, held annually in spring, is a community celebration of the Canberra–Nara sister city relationship. The festival is held in Canberra Nara Park on the shores of Lake Burley Griffin. ### Media As Australia's capital, Canberra is the most important centre for much of Australia's political reportage, and thus all the major media, including the Australian Broadcasting Corporation, the commercial television networks and the metropolitan newspapers, maintain local bureaus there. News organisations are represented in the "press gallery", a group of journalists who report on the national parliament. The National Press Club of Australia in Barton has regular television broadcasts of its lunches at which a prominent guest, typically a politician or other public figure, delivers a speech followed by a question-and-answer session. Canberra has a daily newspaper, The Canberra Times, which was established in 1926. There are also several free weekly publications, including the news magazines CityNews and Canberra Weekly as well as the entertainment guide BMA Magazine. BMA Magazine first went to print in 1992; the inaugural edition featured coverage of the Nirvana Nevermind tour. There are a number of AM and FM stations broadcasting in Canberra. The main commercial operators are the Capital Radio Network (2CA and 2CC) and Austereo/ARN (104.7 and Mix 106.3). There are also several community-operated stations. A DAB+ digital radio trial is also in operation; it simulcasts some of the AM/FM stations and also provides several digital-only stations. Five free-to-air television stations serve Canberra: - ABC Canberra (ABC) - SBS New South Wales (SBS) - Southern Cross 10 Southern NSW & ACT (CTC) – Network 10 affiliate - Seven Network Southern NSW & ACT (CBN) – Seven Network owned and operated station - WIN Television Southern NSW & ACT (WIN) – Nine Network affiliate Each station broadcasts a primary channel and several multichannels. Of the three main commercial networks: - WIN airs a half-hour local WIN News each weeknight at 6pm, produced from a newsroom in the city and broadcast from studios in Wollongong. - Southern Cross 10 airs short local news updates throughout the day, produced and broadcast from its Hobart studios. 
It previously aired a regional edition of Nine News from Sydney each weeknight at 6pm, featuring opt-outs for Canberra and the ACT when it was a Nine affiliate. - Seven airs short local news and weather updates throughout the day, produced and broadcast from its Canberra studios. Prior to 1989, Canberra was served by just the ABC, SBS and Capital Television (CTC), which later became Ten Capital in 1994, then Southern Cross Ten in 2002, then Channel 9/Southern Cross Nine in 2016 and finally Channel 10 in 2021; Prime Television (now Prime7) and WIN Television arrived in 1989 as part of the government's regional aggregation program. Pay television services are available from Foxtel (via satellite) and the telecommunications company TransACT (via cable). ### Sport In addition to local sporting leagues, Canberra has a number of sporting teams that compete in national and international competitions. The best-known teams are the Canberra Raiders and the Brumbies, who play rugby league and rugby union respectively; both have been champions of their leagues. Both teams play their home games at Canberra Stadium, which is the city's largest stadium and was used to hold group matches in soccer for the 2000 Summer Olympics and in rugby union for the 2003 Rugby World Cup. The city also has a successful basketball team, the Canberra Capitals, which has won seven out of the last eleven national women's basketball titles. Canberra United FC represents the city in the A-League Women (formerly the W-League), the national women's association football league, and was champion in the 2011–12 season. The Canberra Vikings represent the city in the National Rugby Championship and finished second in the 2015 season. There are also teams that participate in national competitions in netball, field hockey, ice hockey, cricket and baseball. The historic Prime Minister's XI cricket match is played at Manuka Oval annually. Other significant annual sporting events include the Canberra Marathon and the City of Canberra Half Ironman Triathlon. Canberra has been bidding for an Australian Football League club since 1981, when Australian rules football in the Australian Capital Territory was more popular. While the league has knocked back numerous proposals, according to the AFL, Canberra belongs to the Greater Western Sydney Giants, who play three home games at Manuka Oval each season. The Australian Institute of Sport (AIS) is located in the Canberra suburb of Bruce. The AIS is a specialised educational and training institution providing coaching for elite junior and senior athletes in a number of sports. The AIS has been operating since 1981 and has achieved significant success in producing elite athletes, both local and international. The majority of Australia's team members and medallists at the 2000 Summer Olympics in Sydney were AIS graduates. Canberra has numerous sporting ovals, golf courses, skate parks, and swimming pools that are open to the public. Tennis courts include those at the National Sports Club, Lyneham, former home of the Canberra Women's Tennis Classic. A Canberra-wide series of bicycle paths is available to cyclists for recreational and sporting purposes. Canberra Nature Park has a large range of walking paths as well as horse and mountain bike trails. Water sports such as sailing, rowing, dragon boating and water skiing are practised on Canberra's lakes. 
The Rally of Canberra is an annual motor sport event, and from 2000 to 2002, Canberra hosted the Canberra 400 event for V8 Supercars on the temporary Canberra Street Circuit, which was located inside the Parliamentary Triangle. A popular form of exercise for people working near or in the Parliamentary Triangle is to do the "bridge to bridge walk/run" of about 5 km around Lake Burley Griffin, crossing the Commonwealth Avenue Bridge and Kings Avenue Bridge, using the paths beside the lake. The walk takes about an hour, making it ideal for a lunchtime excursion; it is also popular on weekends. Such was the popularity during the COVID-19 isolation in 2020 that the ACT Government initiated a 'Clockwise is COVID-wise' rule for walkers and runners. ## Infrastructure ### Health Canberra has two large public hospitals, the approximately 600-bed Canberra Hospital (formerly the Woden Valley Hospital) in Garran and the 174-bed Calvary Public Hospital in Bruce. Both are teaching institutions. The largest private hospital is the Calvary John James Hospital in Deakin. Calvary Private Hospital in Bruce and Healthscope's National Capital Private Hospital in Garran are also major healthcare providers. The Royal Canberra Hospital was located on Acton Peninsula on Lake Burley Griffin; it was closed in 1991 and was demolished in 1997 in a controversial and fatal implosion to facilitate construction of the National Museum of Australia. The city has 10 aged care facilities. Canberra's hospitals receive emergency cases from throughout southern New South Wales, and ACT Ambulance Service is one of four operational agencies of the ACT Emergency Services Authority. NETS provides a dedicated ambulance service for inter-hospital transport of sick newborns within the ACT and into surrounding New South Wales. ### Transport The automobile is by far the dominant form of transport in Canberra. The city is laid out so that arterial roads connecting inhabited clusters run through undeveloped areas of open land or forest, which results in a low population density; this also means that idle land is available for the development of future transport corridors if necessary without the need to build tunnels or acquire developed residential land. In contrast, other capital cities in Australia have substantially less green space. Canberra's districts are generally connected by parkways: limited-access dual carriageway roads with speed limits generally set at a maximum of 100 km/h (62 mph). An example is the Tuggeranong Parkway, which links Canberra's CBD and Tuggeranong and bypasses Weston Creek. In most districts, discrete residential suburbs are bounded by main arterial roads, with only a few connecting residential roads, to deter non-local traffic from cutting through areas of housing. In an effort to improve road safety, traffic cameras were first introduced to Canberra by the Kate Carnell Government in 1999. The traffic cameras installed in Canberra include fixed red-light and speed cameras and point-to-point speed cameras; together they bring in revenue of approximately \$11 million per year in fines. ACTION, the government-operated bus service, provides public transport throughout the city. CDC Canberra provides bus services between Canberra and nearby areas of New South Wales (Murrumbateman and Yass) and, as Qcity Transit, to Queanbeyan. A light rail line commenced service on 20 April 2019, linking the CBD with the northern district of Gungahlin. 
A planned Stage 2A of Canberra's light rail network will run from Alinga Street station to Commonwealth Park, adding three new stops at City West, City South and Commonwealth Park. In February 2021 ACT Minister for Transport and City Services Chris Steel said he expected construction on Stage 2A to commence in the 2021–22 financial year and "tracks to be laid" by the next Territory election in 2024. At the 2016 census, 7.1% of the journeys to work involved public transport, while 4.5% walked to work. There are two local taxi companies. Aerial Capital Group enjoyed monopoly status until the arrival of Cabxpress in 2007. In October 2015 the ACT Government passed legislation to regulate ride sharing, allowing ride-share services, including Uber, to operate legally in Canberra. The ACT was the first jurisdiction in Australia to enact legislation to regulate the service. Since then many other ride sharing and taxi services have started in the ACT, namely Ola, Glide Taxi and GoCatch. An interstate NSW TrainLink railway service connects Canberra to Sydney. Canberra railway station is in the inner south suburb of Kingston. Between 1920 and 1922 the train line crossed the Molonglo River and ran as far north as the city centre, but the line was closed following major flooding and was never rebuilt, and plans for a line to Yass were abandoned. A construction railway was built in 1923 between the Yarralumla brickworks and the provisional Parliament House; it was later extended to Civic, but the whole line was closed in May 1927. Train services to Melbourne are provided by way of a NSW TrainLink bus service which connects with a rail service between Sydney and Melbourne in Yass, about a one-hour drive from Canberra. Plans to establish a high-speed rail service between Melbourne, Canberra and Sydney have not been implemented, as the various proposals have been deemed economically unviable. The original plans for Canberra included proposals for railed transport within the city; however, none eventuated. The phase 2 report of the most recent proposal, the High Speed Rail Study, was published by the Department of Infrastructure and Transport on 11 April 2013. A railway connecting Canberra to Jervis Bay was also planned but never constructed. Canberra is about three hours by road from Sydney on the Federal Highway (National Highway 23), which connects with the Hume Highway (National Highway 31) near Goulburn, and seven hours by road from Melbourne on the Barton Highway (National Highway 25), which joins the Hume Highway at Yass. It is a two-hour drive on the Monaro Highway (National Highway 23) to the ski fields of the Snowy Mountains and the Kosciuszko National Park. Batemans Bay, a popular holiday spot on the New South Wales coast, is also two hours away via the Kings Highway. Canberra Airport provides direct domestic services to Adelaide, Brisbane, Cairns, Darwin, Gold Coast, Hobart, Melbourne, Perth, Sunshine Coast and Sydney, with connections to other domestic centres. There are also direct flights to small regional towns: Ballina, Dubbo, Newcastle and Port Macquarie in New South Wales. Canberra Airport is, as of September 2013, designated by the Australian Government Department of Infrastructure and Regional Development as a restricted use designated international airport. International flights have previously been operated by both Singapore Airlines and Qatar Airways. Fiji Airways has announced direct flights to Nadi commencing in July 2023. 
Until 2003 the civilian airport shared runways with RAAF Base Fairbairn. In June of that year, the Air Force base was decommissioned, and from that time the airport was fully under civilian control. Canberra has one of the highest rates of active travel of all major Australian cities, with 7.1 per cent of commuters walking or cycling to work in 2011. An ACT Government survey conducted in late 2010 found that Canberrans walk an average of 26 minutes each day. According to The Canberra Times in March 2014, Canberra's cyclists are involved in an average of four reported collisions every week. The newspaper also reported that Canberra is home to 87,000 cyclists, translating to the highest cycling participation rate in Australia, and, reflecting that higher participation, bike injury rates in 2012 were twice the national average. Since late 2020, two scooter-sharing systems have been operational in Canberra: orange scooters from Neuron Mobility and purple scooters from Beam Mobility, both Singapore-based companies that operate in many Australian cities. These services cover much of Canberra Central and Central Belconnen, with plans to expand coverage to more areas of the city in 2022. ### Utilities The government-owned Icon Water manages Canberra's water and sewerage infrastructure. ActewAGL is a joint venture between ACTEW and AGL, and is the retail provider of Canberra's utility services, including water, natural gas and electricity, as well as some telecommunications services via its subsidiary TransACT. Canberra's water is stored in four reservoirs: the Corin, Bendora and Cotter dams on the Cotter River, and the Googong Dam on the Queanbeyan River. Although the Googong Dam is located in New South Wales, it is managed by the ACT government. Icon Water owns Canberra's two wastewater treatment plants, located at Fyshwick and on the lower reaches of the Molonglo River. Electricity for Canberra mainly comes from the national power grid through substations at Holt and Fyshwick (via Queanbeyan). Power was first supplied from the Kingston Powerhouse near the Molonglo River, a thermal plant built in 1913, but this was finally closed in 1957. The ACT has four solar farms, which were opened between 2014 and 2017: Royalla (rated output of 20 megawatts, 2014), Mount Majura (2.3 MW, 2016), Mugga Lane (13 MW, 2017) and Williamsdale (11 MW, 2017). In addition, numerous houses in Canberra have photovoltaic panels or solar hot water systems. In 2015–16, rooftop solar systems supported by the ACT government's feed-in tariff had a capacity of 26.3 megawatts, producing 34,910 MWh. In the same period, retailer-supported schemes had a capacity of 25.2 megawatts and exported 28,815 MWh to the grid (power consumed locally was not recorded). There are no wind-power generators in Canberra, but several have been built or are being built or planned in nearby New South Wales, such as the 140.7 megawatt Capital Wind Farm. The ACT government announced in 2013 that it was raising the target for electricity consumed in the ACT to be supplied from renewable sources to 90% by 2020, increasing the corresponding supply target from 210 to 550 megawatts. It announced in February 2015 that three wind farms in Victoria and South Australia would supply 200 megawatts of capacity; these were expected to be operational by 2017. Contracts for the purchase of an additional 200 megawatts of power from two wind farms in South Australia and New South Wales were announced in December 2015 and March 2016. 
The ACT government announced in 2014 that up to 23 megawatts of feed-in-tariff entitlements would be made available for the establishment of a facility in the ACT or surrounding region for burning household and business waste to produce electricity by 2020. The ACT has the highest rate of internet access at home (94 per cent of households in 2014–15). ## Twin towns and sister cities Canberra has three sister cities: - Beijing, China - Nara, Japan - Wellington, New Zealand In addition, Canberra has the following friendship cities: - Hangzhou, China: The ACT Government signed a Memorandum of Understanding with the Hangzhou Municipal People's Government on 29 October 1998. The Agreement was designed to promote business opportunities and cultural exchanges between the two cities. - Dili, East Timor: The Canberra Dili Friendship Agreement was signed in 2004, aiming to build friendship and mutual respect and promote educational, cultural, economic, humanitarian and sporting links between Canberra and Dili. ## See also - 1971 Canberra flood - 2003 Canberra bushfires - List of planned cities - List of tallest buildings in Canberra - Lists of capitals
760,423
Seattle Center Monorail
1,170,756,248
Monorail line in Seattle, Washington, US
[ "1962 establishments in Washington (state)", "Alweg people movers", "Articles containing video clips", "Century 21 Exposition", "Landmarks in Seattle", "Monorails in the United States", "Railway lines opened in 1962", "Rapid transit in Washington (state)", "Seattle Center", "Transportation buildings and structures in Seattle", "Transportation in Seattle", "World's fair architecture in Seattle" ]
The Seattle Center Monorail is an elevated straddle-beam monorail line in Seattle, Washington, United States. The 0.9-mile (1.4 km) monorail runs along 5th Avenue between Seattle Center and Westlake Center in Downtown Seattle, making no intermediate stops. The monorail is a major tourist attraction but also operates as a regular public transit service with trains every ten minutes running for up to 16 hours per day. It was constructed in eight months at a cost of \$4.2 million for the 1962 Century 21 Exposition, a world's fair hosted at Seattle Center. The monorail underwent major renovations in 1988 after the southern terminal was moved from its location over Pine Street to inside the Westlake Center shopping mall. The system retains its original fleet of two Alweg trains from the world's fair; each carries up to 450 people. It is owned by the city government, which designated the tracks and trains as a historic landmark in 2003. A private contractor has operated the system since 1994, when it replaced King County Metro, the county's public transit system. The monorail carries approximately two million people annually and earns a profit split between the contractor and the city government. The monorail usually operates with one train per track, and the entire trip takes approximately two minutes. Several major accidents have occurred during the system's half-century in service, including a train-to-train collision in 2005 on a gauntlet track near the Westlake Center terminal. Several government agencies and private companies have proposed expansions to the monorail system since its inception in the 1960s. The most prominent was the Seattle Monorail Project, founded by a 1997 ballot initiative to build a citywide network that would expand coverage beyond the planned Link light rail system. The project ran into financial difficulties, including cost estimates rising to \$11 billion, before being cancelled by a city vote in 2005. ## Route and stations The 0.9-mile (1.4 km) monorail begins at a terminal at Seattle Center, a civic complex and park northwest of Downtown Seattle. The Seattle Center terminal is located at the Next 50 Plaza near the center of the complex, adjacent to the Space Needle, Chihuly Garden and Glass, and Memorial Stadium. It is elevated above the south end of the plaza and consists of three platforms arranged in the Spanish solution: two side platforms for alighting and a center platform for boarding. The monorail trains' maintenance facility is below the platforms at ground level in the Seattle Center station. From the terminal, the tracks travel east and begin a wide turn to the south while passing through the Museum of Pop Culture, which was designed around the existing tracks. The monorail tracks cross over Broad Street and travel along the west side of 5th Avenue North for two blocks, passing the KOMO Plaza news broadcasting center. The tracks then begin a gradual southeastern turn over a small office building and auto repair shop toward 5th Avenue, which begins on the south side of Denny Way and Tilikum Place. The one-way street travels southeast through Belltown with southbound-only traffic, split into two sets of through lanes by the monorail's supporting columns. The monorail passes by several city landmarks, including the Amazon Spheres and Westin Seattle towers, eventually reaching McGraw Square, where 5th Avenue makes a slight turn to the south. 
Before reaching the southern terminal at the Westlake Center shopping mall on Pine Street, the monorail's tracks narrow into a set of gauntlet tracks that are 4 to 5 feet (1.2 to 1.5 m) apart, preventing two trains from using the station at the same time. The Westlake Center terminal is on the third floor of the mall and has a direct elevator to street level and the Westlake tunnel station served by Link light rail trains on the 1 Line. The South Lake Union Streetcar also terminates at nearby McGraw Square, and several major bus routes run near the Westlake Center terminal.

## Service and fares

The monorail takes approximately two minutes to travel between the Seattle Center and Westlake Center terminals, which are located 0.9 miles (1.4 km) apart. Trains depart from each terminal approximately every 10 minutes, with a single train running continuously. The service has two seasonal schedules, with trains in the autumn and winter (September to May) operating for 13–14 hours per day from Monday to Saturday, ending at 11:00 p.m. on Fridays and Saturdays, and 12 hours on Sundays, ending at 9:00 p.m. The summer schedule is in use from May to September and has weekday trains operating for 16 hours and weekend trains for 15 hours, with service ending at 11:00 p.m. every day. Monorail service is typically reduced on national holidays and closed entirely on Thanksgiving Day and Christmas. During special events at Seattle Center, operating hours are extended and train frequencies are increased to every five minutes by using both trains in the fleet.

Fares for the monorail are paid at turnstiles at either terminal using an ORCA card, a smartphone app, or paper tickets bought from a vending machine with cash, credit/debit cards, or mobile payments. One-way fares are \$3.50 for adults, \$1.75 for youths aged 6–18, and \$1.75 for people qualifying for the reduced rate, including senior citizens 65 years and older, disabled individuals, persons with Medicare cards, and active duty members of the U.S. military carrying their identification cards. Round-trip fares are twice the price of a one-way fare; monthly passes are also offered at adult and reduced rates. Children aged four and under are able to ride free. In October 2019, the monorail began accepting ORCA cards, the regional transit payment system, after five years of negotiations and a study over fare integration; since May 2023, youth ORCA cards are charged a \$0 fare on the monorail as part of a statewide program to provide free transit for riders aged 18 years or younger. Free fares have also been provided to attendees of all public events at Climate Pledge Arena through a mobile app since January 2023, after an existing program for the Seattle Kraken and Seattle Storm was expanded.

## Operations

The Seattle Center Monorail is operated by Seattle Monorail Services (SMS), a private contractor founded in 1994 that is currently owned by former Port of Seattle commissioner Tom Albro. Before 1994, the monorail was jointly operated by Seattle Center and King County Metro, the county's public transit agency. The monorail receives no operating funds from public sources, with costs covered by fares and federal grants for capital projects; the service is unusual among U.S. public transport systems because it makes an operating profit. The contract between SMS and the city government is renewed every ten years and includes an even split of profits between the two parties.
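As a minimal illustration of the fare structure described above, the sketch below encodes the quoted cash fares. The category names and the `fare` helper are invented for the example; this is not the monorail's actual ticketing software.

```python
# Minimal sketch of the fare rules described above; the categories and
# helper function are illustrative only, not the monorail's ticketing system.

ONE_WAY_FARES = {
    "adult": 3.50,
    "youth": 1.75,     # ages 6-18 (youth ORCA cards have been charged $0 since May 2023)
    "reduced": 1.75,   # seniors 65+, disabled riders, Medicare card holders,
                       # and active duty U.S. military with identification
}

def fare(category: str, round_trip: bool = False, age: int | None = None) -> float:
    """Cash fare for a single rider; children aged four and under ride free."""
    if age is not None and age <= 4:
        return 0.0
    one_way = ONE_WAY_FARES[category]
    return one_way * 2 if round_trip else one_way   # a round trip is twice one-way

print(fare("adult", round_trip=True))   # 7.0
print(fare("reduced"))                  # 1.75
```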
In 2018, the Seattle Center Monorail carried approximately 2.022 million passengers, averaging 4,780 passengers on weekdays and 7,536 passengers on weekends. Following declines due to the COVID-19 pandemic, ridership rebounded in 2022 and 2023 with the opening of Climate Pledge Arena, where event tickets include free transit fares. The service generated \$4.3 million in fare revenue and received approximately \$883,000 in capital funds from local and federal governments in 2018. During the Century 21 Exposition from March to September 1962, the monorail carried over 90 percent of World's Fair visitors and had a total ridership of 7.4 million. ### Rolling stock and guideway The straddle-beam monorail is entirely elevated and uses a series of 68 hollow support columns up to 30 feet (9.1 m) above street level. The two parallel tracks are carried on prestressed concrete beams that are approximately 70 ft (21 m) long, 5 ft (1.5 m) tall, and 3 ft (0.91 m) wide. Several sections use split or one-armed columns that carry one track because of a lack of space on curves; the guideway passes over one building at the intersection of Denny Way and 5th Avenue as part of a long curve in the tracks. The system's maintenance and operations base is underneath the platforms at the Seattle Center terminal. The system has two aluminum trains, named the "Blue Train" (originally Spirit of Seattle) and "Red Train" (originally Spirit of Century 21) for their original paint schemes, which are each assigned to a single track and travel bidirectionally. They were constructed in 1962 by Alwac International in West Germany and have remained in operation on the line since then, undergoing a major renovation in 2009 and 2010. Each train is 122 ft (37 m) long, 10 ft 3 in (3.12 m) wide, and 14 ft (4.3 m) tall, with articulating joints between sections. They each have 124 seats and a capacity of 450 passengers with standing room, with an estimated maximum throughput of 10,800 passengers per hour. The trains have built-in emergency ramps to transfer passengers between trains if stopped between stations. Each train rides on a set of 64 pneumatic rubber tires arranged into eight bogies: 16 are load-bearing tires arranged in pairs on top of the beam and have a diameter of 39.5 in (100 cm); the remaining 48 tires are used to guide the train on the side of the beam and have a diameter of 26 in (66 cm). The system was designed for automated driving, but operators control the trains using a joystick and LCD monitors that display technical information. The trains typically coast without power for the latter half of their journey and switch to dynamic brakes when approaching a station. The system uses a third rail for electrification, with 700 volts DC that feed eight electric motors. Originally, the trains could reach speeds of up to 60 to 70 miles per hour (97 to 113 km/h), but this has since been reduced to 45 mph (72 km/h) for normal operations. During severe winter weather, the trains deposit de-icing chemicals and salt on the tracks to allow for normal speeds. ## History ### Early proposals and planning Several small-scale proposals for monorail systems in the Seattle area were published in the early 20th century, but they were never realized. William H. Boyes, a New York City inventor, was photographed with a replica of his monorail in 1910, with plans to build a line from Seattle to Tacoma. 
A year later, another Boyes proposal earned an operating franchise from the city government of Edmonds, Washington, but never proceeded beyond the early stages of construction. Another plan from the Universal Elevated Railway Company in 1918 envisioned an elevated monorail system that would run along Westlake Avenue in Seattle (near the modern-day monorail terminal), replacing the private streetcar network. After the streetcars were acquired by the city government in 1919, its lobbying for a monorail system ceased. Other plans for monorail systems were submitted to the Seattle city government in 1930 and 1955, the latter as part of the Everett–Seattle–Tacoma Tollway (modern Interstate 5). The Seattle city government, supported by civic boosters and the state legislature, began planning for its second World's Fair in 1955 to celebrate the 50th anniversary of the 1909 Alaska–Yukon–Pacific Exposition. A monorail was suggested in 1957 to connect the proposed fairgrounds in Lower Queen Anne to auxiliary parking lots in Interbay and attractions on Elliott Bay. The Seattle Transit Commission ordered a study into a monorail between Downtown Seattle and the proposed fairgrounds in April 1958, after hearing proposals from private operators who also offered New Orleans and Houston their own systems. Among the proposals was a "carveyor" from the Goodyear Tire and Rubber Company with small pods connecting downtown to the fairgrounds and a 5-mile (8.0 km) loop between Interbay, the fairgrounds, and downtown. The monorail proposal was later scaled back to a 1.2-mile (1.9 km) route on 5th Avenue connecting downtown hotels to the fairgrounds that would cost \$5.39 million to construct (equivalent to \$ in dollars). ### Bidding and proposals The Seattle Transit System opened up bids for monorail design and construction in December 1958, receiving proposals from the Lockheed Corporation, St. Louis Car Company, General Monorail of San Francisco, and the German firm Alwac International, which had begun installing the Disneyland Monorail in California. The Northrop Corporation presented its own proposal in February, using an unconventional gyroscope and generator that would not require a third rail or overhead catenary. In April 1959, the Seattle Transit Commission chose Lockheed to build the \$5 million monorail system, which would travel along 5th Avenue from Pine Street to the fairgrounds and open in 1961. Lockheed's design featured a straddle-beam monorail with three streamlined trains that resembled jetliners. The monorail was seen as a centerpiece to the planned Century 21 Exposition and as a catalyst for future development of a citywide rapid transit system, but would use no local transit funding. The operating costs were expected to be paid through fare recovery, while other options were considered for capital funding, including Lockheed buying back the system after the world's fair. Lockheed entered into final negotiations with the city and exposition organizers in late 1959, but the transit commission lost interest in running the system after the world's fair was shortened to six months instead of the original eighteen. The system's uncertain financing, not including engineering costs incurred by Lockheed, remained a major concern for the city government as negotiations continued into January 1960. 
Alwac International, which had previously estimated it would cost \$3.5 million (\$ in dollars) to install their Alweg monorail system, submitted a proposal in February 1960 to finance and build the project themselves at no cost to the city or exposition organizers. The firm would collect monorail fares and revenue from terminal concessions, and a surcharge on fair tickets, and transfer the system to the city government if the full \$3.5 million cost was repaid; in the event that the system did not recoup the investment, it would have been dismantled and removed. Lockheed responded by presenting a modified bid to the transit commission in March with a \$1 million buyback option, but they were dropped in favor of a new round of bidding by Alwac and the French engineering firm SAFEGE. The Century 21 Steering Committee, serving as the exposition's main organizers, took over negotiations from the transit commission and signed a preliminary construction contract with Alwac on May 20, 1960. The monorail would run along 5th Avenue from the fairgrounds to the intersection of Pine Street and Westlake Avenue, which would be converted into a permanent pedestrian mall. Alwac representatives signed the design contract on December 22, 1960, with a revised cost of \$4.2 million (\$ in dollars) to accommodate larger trains and stations. The final construction and operations contract was signed on May 13, 1961. Century 21 announced plans in April 1961 to build a small-scale people mover around the fairgrounds that would use a suspended monorail, but they were dropped five months later after the bidding firms were unable to obtain financing. ### Construction and preparations In March 1961, the city's Board of Public Works approved the construction and street use permits for the monorail project, which Century 21, Alwac, and local contractor Howard S. Wright Construction Company would undertake. Wright was also named a financing partner for the monorail, contributing \$375,000 (\$ in dollars), and went on to build the Space Needle and Seattle Center Coliseum. The construction permit included a requirement to remove the monorail within six months of the exposition's end, but Alwac had announced their intention to sell the Alweg system to the city government if they desired. Alweg representatives unveiled the finalized design plans for the monorail later that month, while the two railcars were under construction at the Linke-Hofmann-Busch factory in West Germany. Century 21 broke ground on the monorail in a ceremony at the Westlake Mall on April 6, 1961, which was declared "Monorail Day" and featured the Seattle Symphony Orchestra, a speech from Senator Warren G. Magnuson, and free monorail tickets for the 500 people in attendance. The wooden forms for the first of 80 monorail columns were laid in early May, and concrete pouring for the first column began on May 23 between Virginia and Lenora streets. A crane lifted the Virginia–Lenora columns, each weighing 54 short tons (49,000 kg), onto a prepared concrete footing on June 15. Concrete pouring at the Westlake Mall terminal began in late June, with plans to build the station platforms 25 feet (7.6 m) over Pine Street. The monorail's 60-short-ton (54,000 kg) precast concrete beams were assembled in Tacoma and trucked up to Seattle, with special permission from the Washington State Highway Commission, and the first was installed on September 21 between Virginia and Stewart streets before advancing northwards. 
Column construction and girder installation took approximately eight months, with at least three lanes of traffic on 5th Avenue remaining open during most periods. The steel girders at the Westlake Mall terminal were installed in October, followed by work on the Seattle Center terminal. By December 1961, most of the work on the tracks and 54 percent of work on the stations was complete, using 14,700 short tons (13,300,000 kg) of concrete and 970 short tons (880,000 kg) of steel. The last of the 138 guideway beams was hoisted and installed on January 9, 1962, near Denny Way to complete 5,200 feet (1,600 m) of track. In February 1962, the Seattle Transit Commission approved a contract with Century 21 to allow its employees to operate the monorail trains. Monorail personnel, including drivers and ticket booth attendants, wore blue-and-white poplin uniforms designed for the exposition. The first monorail train, later named the "Blue Train", was shipped in four sections from Bremen, West Germany, to Newark, New Jersey, and transported by train to Seattle. It arrived on February 19, 1962, and was lifted onto the trackway later that day. The monorail completed its first test run on March 3 and continued with several tests at reduced speeds. Jim West, a former cable car operator on the Yesler Way line who later drove the city's streetcars, trolleybuses and motor buses, drove the first test run. Several test runs were made into special occasions, including a trip that was televised live by KING-TV and a preview ride for 175 dignitaries after a ribbon cutting at the Westlake terminal on March 12. The second train, later named the "Red Train", arrived on March 27 and was installed on its track at the Seattle Center terminal. It made its first test run on April 10 and entered passenger service to replace the Blue Train temporarily before the beginning of the fair. ### World's Fair The monorail and Space Needle opened for a public preview on March 24, 1962, a month before the formal start of the Century 21 Exposition. The inaugural monorail trip from the Westlake terminal carried 130 passengers who received commemorative medals, including the first riders, who had lined up several hours early. An estimated 9,600 people rode the Blue Train on the monorail's first day, as did 24,000 over the preview weekend; service on the first day was suspended an hour earlier than scheduled because of a mechanical issue. Government officials and civic leaders officially christened the monorail on April 19. 179,000 passengers had boarded the trains during preview rides. The Century 21 Exposition formally opened on April 21. Monorail fares during the fair were set at 50 cents one-way and 75 cents round-trip for adults and 35 cents one-way and 50 cents round-trip for children. Trains operated from 8:45 a.m. to 12:15 a.m. during the fair, taking 96 seconds to complete each trip. It carried 7.4 million passengers, about 90 percent of fair attendees, from April 21 to October 21. Astronaut John Glenn rode the monorail on May 10, shortly after his return from orbit on Friendship 7; the red train was temporarily renamed "Friendship 21" in his honor and also carried Governor Albert D. Rosellini, Senator Warren G. Magnuson, and NASA rocket scientist Wernher von Braun. After the fair, the monorail operated with a reduced schedule, from 11:00 a.m. to 11:00 p.m. It was limited to one train over the winter months but averaged 1,200 daily passengers. 
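For a sense of scale, the published totals quoted above can be turned into a rough daily comparison between fair-period and post-fair service; the day count is approximate and the arithmetic is purely illustrative.

```python
# Rough comparison of fair-period and post-fair ridership, using the
# published totals quoted above. The day count is approximate.

fair_ridership = 7_400_000          # total rides, April 21 - October 21, 1962
fair_days = 184                     # approximate length of the exposition

average_fair_day = fair_ridership / fair_days
post_fair_day = 1_200               # reported winter average after the fair

print(round(average_fair_day))                  # ~40,200 rides per day
print(round(average_fair_day / post_fair_day))  # ~34 times the post-fair average
```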
Fare box revenue generated from March 24 to September 17 fully covered the system's \$4.2 million construction costs. Alwac retained temporary ownership of the monorail system after the fair contracted to end on April 21, 1963. The city government was tasked with deciding whether the monorail should be demolished or sold to a public or private operator. Alwac was granted an extension of its existing street use permit to operate trains until October. Alwac agreed to transfer the entire system, including the terminals and offices, to Century 21 Center, Inc., the operator of the fairgrounds, on June 3, 1963. The transfer came at no cost to Century 21 and allowed the monorail to remain in operation and included an extension of agreements with the city government and Seattle Transit System. ### Ownership transfer and early years Century 21 Center, Inc. ran into financial difficulties in late 1964, with \$2 million in outstanding debt (\$ in dollars), and began negotiating a takeover of all fairground operations by the city government, which already owned the Seattle Center property. As part of cost-saving measures, in October 1964 monorail ticket booths were eliminated and replaced with onboard attendants to take fares. Century 21 Center offered to sell the monorail to the city government for \$600,000 (\$ in dollars) as part of resolving its debts to the city and entering liquidation. Lacking an operating franchise, the corporation's liquidation trustees declined to take the title of the monorail system in December, and elected not to pay \$200,000 for demolition. Negotiations continued for several months until the city government agreed in April to terminate its contracts with Century 21 and take over the fairground facilities. The monorail was transferred to the city government in May at a cost of \$775,150 (\$ in dollars), of which \$414,128 (\$ in dollars) was in the form of debt forgiveness. Seattle Center reopened for the summer season on June 1, 1965, with monorail fares lowered the following day to 25 cents for adults on a one-way trip to attract more patrons. The monorail's operating hours were extended to midnight on weekdays and Saturdays, and ridership in the first week of June doubled compared to the prior year. A group of property owners along the monorail route sued the city government in 1965 over the loss of views and other livability concerns stemming from the construction of the line. The city settled the lawsuit in 1968 at a cost of \$776,249 (\$ in dollars) for light and air easements on 82 parcels of property. By the end of the 1960s, the monorail was averaging 10,000 passengers on weekdays and 14,000 on weekends during the peak summer season. The Seattle Transit System remained the contracted operator of the monorail until January 1, 1973, when the Municipality of Metropolitan Seattle (Metro Transit) absorbed it to form a countywide transit system. The Seattle city government retained ownership of the monorail and awarded an operating contract to Metro Transit using funding from the Seattle Center department. Under Metro Transit, the monorail vehicles were renumbered 6201 and 6202 and given a new paint scheme in 1978, including the repainting of the red train to the green train. The arrival of a traveling exhibition with artifacts from the tomb of Pharaoh Tutankhamun at Seattle Center spurred the repainting. The exhibition caused a surge in monorail ridership, which reached 2.8 million in 1978. 
### Renovations and preservation The southern terminus at Westlake Mall was originally a large station that straddled Pine Street along a section of Westlake Avenue that had been converted into a public plaza. The terminal had a sloped moving walkway between street level and the three elevated platforms covered by a "scalloped" roof. The plaza at Westlake Mall was sought as the location of an expanded downtown park, leading to a major renovation of the monorail terminal that began in January 1968 and completed in April 1968. Reduced monorail service continued while the terminal was shrunk with the removal of the outer platforms deemed unnecessary for post-fair demand and the replacement of the roof with a simpler design. An emergency repair to the Westlake terminal was made in 1974 at a cost of \$100,000 to replace metal shields that caught debris dropped by passengers on the platform. A larger renovation was completed in 1988 to accommodate the downtown park, later named Westlake Park, and the adjacent Westlake Center shopping mall and office complex. The old terminal had been viewed as a "blight" on the area, which the city government sought for redevelopment as the center of Downtown Seattle's retail core beginning in the late 1960s. The city considered several proposals for a shopping mall on the block on the north side of Pine Street in the 1970s, including hotels, movie theaters, a potential home for the Seattle Art Museum, and a new monorail terminal, but they were never realized. After several years of litigation led by preservation activists, a new proposal from The Rouse Company and a local developer was approved for construction in late 1985. The new proposal included demolition of the monorail terminal to make way for a public park, while trains would terminate at a new station integrated into the shopping mall. The relocation of the station was initially rejected in 1985 after engineers had discovered that the monorail tracks would require significant reconstruction to make the necessary turn into the station. The city government proposed moving the columns onto the sidewalk on 5th Avenue instead and creating a gauntlet track, which would prevent the two trains from using the Westlake terminal at the same time. City councilmember George Benson suggested using a retractable ramp to access the outer track. A temporary station would be used during mall construction to allow the monorail to continue operations. The monorail relocation project was estimated to cost \$19 million (\$ in dollars) with heavy reliance on a federal grant that was initially denied by the Urban Mass Transportation Administration. The city considered several options, including running a single train, selling the system to Tacoma or demolishing the monorail entirely. In March 1986, it chose to keep the system and spend \$2.7 million (\$ in dollars) on the initial planning for the station overhaul and other renovations. The federal government awarded a \$5.6 million grant (\$ in dollars) for the relocation project in late July, two months after construction began on a temporary terminal at 5th Avenue and Stewart Street. The old terminal at Westlake Mall closed permanently on September 1, 1986, and was demolished over the following two months. The temporary terminal and its 140-foot (43 m) platform opened on September 17, 1986, allowing monorail service to resume after a two-week suspension. It was built one block to the north at Stewart Street, next to the western track, and only served the blue train. 
The city council finalized a \$7 million spending package (\$ in dollars) in March 1987 to construct the permanent terminal, which would begin after work on Pine Street for the Downtown Seattle Transit Tunnel advanced beyond the excavation stage. The monorail project included improvements to the electrical systems and an expansion of the Seattle Center terminal, and work on the two trains. An extensive interior refurbishment was cut after the monorail project trended \$1.7 million above budget (\$ in dollars), and was later reduced to new paneling and floorboards. The Westlake Center shopping mall was opened to the public on October 20, 1988, with the new monorail terminal on the third floor used temporarily for one day before it closed for additional construction. Several days before the scheduled opening, engineers discovered the west track was two inches (50 mm) too close to the platform and mall building, preventing its use. The discovery was made when a retractable boarding ramp at the terminal scratched the blue train during a test run; a hinge pin that failed to fold properly was identified as the cause for the misalignment. The ramp was fixed in November, but other technical glitches and extended safety testing delayed the opening of the new terminal station for four months. The new Westlake Center monorail terminal opened on February 25, 1989, alongside the return of the red train to service. In 1994, a private company replaced Metro Transit (later King County Metro) and Seattle Center as the monorail's operator, signing a ten-year contract with the city. Metro had previously provided drivers and maintained the trains, while Seattle Center employed ticket-takers and janitorial staff. Near the northern end of the line, the Experience Music Project building (now the Museum of Pop Culture) was constructed over the monorail tracks from 1998 to 2000. The building was designed so that the tracks would pass through a valley at the center of the structure, with windows from the exhibit spaces facing the guideway. The monorail tracks and vehicles were declared a historic landmark by the Seattle Landmarks Preservation Board in April 2003 amid plans to demolish or replace the line as part of a citywide monorail expansion. In July, the city council passed the landmark ordinance to provide protections to the two Alweg trains, but excluded the guideway to support its reuse for the expansion project. The monorail began a long-term closure on March 16, 2020 due to decreased demand amid the COVID-19 pandemic in the Seattle area. It reopened on May 28 with limited service and suspension of cash ticket sales, but was closed again over the weekend because of protest activity in Downtown Seattle. ### Station expansions The monorail was integrated into the regional fare system in October 2019 with the acceptance of mobile tickets and later the ORCA card. As part of preparations for the opening of Climate Pledge Arena in 2021 at the renovated KeyArena for a National Hockey League team (later named the Seattle Kraken), Seattle Monorail Services announced a renovation of the monorail terminals in February 2020 to handle larger crowds. The Westlake Center terminal was to be expanded to accommodate 6,000 people per hour with new fare gates and ticket vending machines for ORCA cards and tickets. The NHL team would also fund free transit passes for attendees before and after games to reduce the number of car trips to the arena. 
A proposed second phase of the expansion program would have included a covered walkway and second entrance at the Westlake Center terminal with access from the Pine Street plaza and the transit tunnel station, but it was later abandoned. NHL Seattle, the Kraken's ownership group, also announced that it would purchase a 50 percent stake in Seattle Monorail Services. Construction on the remodeled stations began in April 2021 with the demolition of the station interiors, which required a full suspension of monorail service for several weeks. Another month-long closure began in September to finish construction of the expanded Westlake terminal ahead of the first arena events in late October. The monorail reopened on October 11, 2021, with work completed on the renovated Westlake Center terminal, which is planned to handle up to 3,000 passengers per hour during events. The project was primarily funded by \$6.6 million in private spending and a \$5.5 million grant from the Federal Transit Administration. The Seattle Center terminal is planned to be renovated at a later date. On June 29, 2023, a set of 16 monorail columns on 5th Avenue between Olive Way and Vine Street were painted with portraits of Major League Baseball (MLB) players and local sports fans. The murals by artist Brady Black were commissioned by tourism agency Visit Seattle to celebrate the 2023 MLB All-Star Game, which Seattle is set to host in early July. Black and several volunteers painted the portraits onto mural cloth and transferred them to vinyl to be installed by crane on the columns. ## Expansion proposals The monorail has been the subject of several expansion proposals, with the primary goal of expanding it into a citywide rapid transit system. In 1961, businessman Ben B. Ehrlichman proposed that the then-unfinished monorail be extended north to Alderwood Manor or Mountlake Terrace and south towards Seattle–Tacoma International Airport, Kent, and Renton. The initial system would have cost \$60 million (\$ in dollars), while a second line serving the Eastside region would be built separately using a new floating bridge. Former Seattle Transit System manager Marmion D. Mills proposed his own monorail system in 1963 that would connect Seattle to Mountlake Terrace, Kent and the airport. Mills argued that a conventional subway system would be too expensive for Seattle and that the other alternative would be an expanded freeway network. The Forward Thrust program included a ballot measure that would build a conventional rapid transit system serving King County with federal funding, but voters rejected it in 1968 and 1970. The designers of the rapid transit proposal considered extending the monorail across a regional network, but found it would not have the capacity or flexibility provided by conventional trains. In 1976, ABAM Engineers drew up a regional monorail plan for the Puget Sound Council of Governments, the regional planning authority. The firm, which designed the Walt Disney World Monorail System in Florida and several automated people mover systems for U.S. airports, envisioned an 83-mile (134 km) network with 41 stations and 700 monorail vehicles that would cost \$500 million to build (\$ in dollars). The PSCOG did not submit the proposal for further consideration. The city government announced its own plan in 1970 to extend the monorail to a parking garage on Mercer Street near the site of a proposed stadium, but it was shelved after a different site was chosen for the stadium. 
The Seattle city government commissioned a new study in 1979 to examine improvements to the monorail system, including a closed loop around the Seattle Center campus and an infill station in the Denny Regrade neighborhood. A full conversion into an automated people mover with smaller vehicles was also studied as part of the improvement program. The 1970s energy crisis and subsequent availability of federal funding for transit projects sparked a revived interest in the monorail, but the Urban Mass Transit Administration rejected the Seattle proposals. ### ETC and Seattle Monorail Project The Regional Transit Authority (later Sound Transit) was formed in 1993 to create a regional light rail plan that was ratified by voters in November 1996. Taxi driver Dick Falkenbury conceived a separate proposal in 1996 to build a citywide monorail system and submitted a ballot initiative after a signature-gathering campaign. Falkenbury's proposal envisioned an "X"-shaped system with service from Downtown Seattle to Ballard, Lake City, the Rainier Valley, and West Seattle, which would cost \$850 million to construct (\$ in dollars). 53 percent of voters approved the monorail plan, named Initiative 41, in a general election on November 4, 1997, creating the Elevated Transportation Company (ETC) to seek financing. The city government appointed a board for the ETC (later renamed the Seattle Monorail Project) and funded early planning work, but did not agree to fund a \$4 million feasibility study in 2000. The original monorail initiative was repealed and replaced by a new plan approved by voters in November 2000, which included \$6 million for a study. The first corridor, the 14-mile (23 km) "Green Line" from West Seattle to Ballard, was estimated to cost \$1.75 billion; a motor vehicle excise tax would fund it. The tax was adopted through a ballot measure that voters narrowly approved in the November 2002 election, creating the Seattle Popular Monorail Authority to manage the program. The monorail project initially attracted two bids led by Hitachi and Bombardier, but both pulled out in April 2004 over cost concerns and the availability of local contractors. The project was stymied by tax revenue that was lower than expected and design changes to keep construction costs within the proposed budget and open by 2009—a two-year delay from the original plan. A recall measure on the November 2004 ballot aimed to prevent monorail construction, but voters rejected it, allowing the expansion project to continue. The monorail operator reached a tentative agreement with Cascadia Monorail to build the system in June 2005 but had not published the full financial analysis required by the city government before construction was permitted to begin. A revised cost estimate of \$11 billion, including debt payments until 2050, was unveiled later that month and withdrawn by the Seattle Monorail Project after public criticism from elected officials. The monorail project, including a \$4.9 billion financing plan for a 10-mile (16 km) line, was abandoned after a fifth ballot initiative in November 2005, when 64 percent of voters rejected it. The Seattle Monorail Project was formally dissolved in January 2008, having spent \$124.7 million on planning and property acquisition. The "Green Line" corridor from West Seattle to Ballard was later included as a light rail project in the Sound Transit 3 ballot measure, which was passed by voters in 2016. 
The light rail line, scheduled to open in the 2030s, incorporated some elements from the monorail plan into its early project feasibility studies. ## Accidents and incidents On October 20, 1962, the penultimate day of the Century 21 Exposition, the red train struck a bumper stop at the Westlake terminal—the first accident on the monorail system. None of the 400 passengers were injured, but the train's window and nose were damaged, requiring a patch and two hours of repairs before returning to service. The red train was damaged in a similar manner on August 14, 1963, striking the Westlake terminal's bumper while on a test run after the first set of brakes failed. The first major accident involving the monorail occurred on July 25, 1971, when a brake failure on the red train caused it to strike a girder at the end of the track in the Seattle Center terminal. The train struck the girder at 15 to 20 mph (24 to 32 km/h), injuring 26 of 40 passengers. The red train was lifted off the track and moved to a Seattle Transit System maintenance facility in August for a complete rebuild of the front car at a cost of \$100,000 (equivalent to \$ in dollars). This was completed in June 1973 with the help of translated blueprints from Alwac. One maintenance worker was killed during the repairs after falling into a pit under the vehicle. A similar incident on the blue train occurred on May 21, 1979, injuring 15 people at the Seattle Center terminal. The monorail's brake system was not found to be at fault, but the disabling of the onboard speed control system was criticized by city officials. The monorail struck a bumper at the temporary downtown terminal on August 27, 1987, causing no injuries but breaking the glass window, which fell onto a parked car below. The incident was later blamed on driver error. On May 31, 2004, a fire broke out on the blue train as it passed through the Experience Music Project with 150 people aboard; eight suffered minor injuries. Passengers were evacuated using ladders deployed by the Seattle Fire Department to the red train, which traveled back to the Seattle Center terminal. The fire was determined to have been caused by a snapped drive shaft that damaged a collector shoe, which began to short circuit. The electric current melted through the shoe's aluminum housing and arced, causing sparks that ignited the undercarriage's grease and oil, creating a fire that entered the interior and ignited the seat cushions. The red train re-entered service on December 16, while the blue train returned on May 2, 2005, after extensive repairs. The two monorail trains clipped one another on the curve above 5th Avenue and Olive Way near the Westlake Center terminal on November 26, 2005, at around 7:10 p.m. The southbound blue train's driver caused the collision when they failed to yield while entering a gauntlet track north of Westlake created by the 1988 renovation. The two trains carried 84 passengers who were evacuated using firetruck ladders, including two people hospitalized with minor injuries. Within a week, the trains were separated and towed via crane to the Seattle Center terminal to undergo extensive repairs that cost \$4.64 million (\$ in dollars), funded through an insurance payout and contributions from the federal government and the private monorail operator. Instead of using a traditional contractor, the Seattle Opera props department constructed a new set of nine aluminum doors—eight for the red train and one for the blue train—at their Renton warehouse. 
The monorail was expected to resume service on July 18, 2006, but problems found during last-minute testing delayed the resumption of service to August 11.

On July 31, 2023, a 14-year-old boy from Phoenix, Arizona, was fatally struck by the monorail near the intersection of 5th Avenue and Denny Way around 9:00 p.m. According to the Seattle Police Department, security footage showed he had been tagging an adjacent building from a roof when he was struck, which caused him to fall.

## Popular culture

Along with the Space Needle, the Seattle Center Monorail is considered an iconic landmark of the city of Seattle and is among the most popular tourist attractions in the state. It was featured in the 1963 musical film It Happened at the World's Fair, which starred Elvis Presley and was filmed during the Century 21 Exposition. The monorail and Space Needle were depicted on the cover of Life magazine and on commemorative stamps and coins issued during the world's fair in 1962. The Monorail Espresso coffeehouse was named in honor of the monorail and originally began under the Westlake terminal in 1980 as the first downtown coffee cart.

## See also

- Transportation in Seattle
- List of monorail systems
2,146,652
Spyro: Year of the Dragon
1,167,558,294
2000 video game
[ "2000 video games", "3D platform games", "Fantasy video games", "Insomniac Games games", "PlayStation (console) games", "PlayStation Network games", "Single-player video games", "Sony Interactive Entertainment games", "Spyro the Dragon video games", "Universal Interactive games", "Video game sequels", "Video games developed in the United States", "Video games scored by Stewart Copeland" ]
Spyro: Year of the Dragon is a 2000 platform game developed by Insomniac Games and published by Sony Computer Entertainment for the PlayStation. Year of the Dragon is the third game in the Spyro series. The game follows the adventures of the purple dragon Spyro. After an evil sorceress steals magical dragon eggs from the land of the dragons, Spyro travels to the "Forgotten Realms" to retrieve them. Players travel across thirty different worlds gathering gems and eggs, defeating enemies, and playing minigames. Year of the Dragon introduced new characters and minigames to the series, as well as offering improved graphics and music. Year of the Dragon received positive reviews from critics, who noted the game successfully built on the formula of its predecessors. The game sold more than 3 million copies worldwide. Year of the Dragon was the last Spyro title released for the first PlayStation, and the last developed by Insomniac Games; their next game would be Ratchet & Clank. Year of the Dragon was followed by the multiplatform title Spyro: Enter the Dragonfly, and was later remade as part of the Spyro Reignited Trilogy in 2018. ## Gameplay Year of the Dragon is a platforming video game primarily played from a third person perspective. The main objective is to recover stolen dragon eggs which are scattered across 37 levels. These eggs are hidden, or are given as rewards for completing certain tasks and levels. The worlds of Spyro are linked together by "homeworlds" or "hubs", large worlds which contain gateways to many other levels. To proceed to the next hub, the character must complete five worlds, gather a certain number of eggs, and defeat a boss. Players do not need to gather every egg to complete the main portion of the game or gain access to new levels; in fact, certain eggs can only be found by returning to the world at a later time. Gems are scattered across the worlds, hidden in crates and jars. These gems are used to bribe a bear named Moneybags to release captured characters and activate things which help Spyro progress through levels. Gems, along with the number of eggs collected, count to the total completion percentage of the game. The player controls the dragon Spyro for much of the game. Spyro's health is measured by his companion Sparx, a dragonfly who changes color and then disappears after taking progressively more damage. If the player does not have Sparx, then the next hit would cause the player to lose a life and restart at the last saved checkpoint. Consuming small wildlife known as "fodder" regenerates Sparx. Spyro has several abilities, including breathing fire, swimming and diving, gliding, and headbutting, which he can use to explore and combat a variety of enemies, most of which are rhinoceros-like creatures called Rhynocs. Some foes are only vulnerable to certain moves. Spyro can run through "Powerup Gates", which give him special abilities for a limited period. Year of the Dragon introduces new playable characters other than Spyro, known as critters, which are unlocked by paying off Moneybags as the player proceeds through the game. Subsequently, the player plays as the critter in specially marked sections of levels. Each critter has their own special moves and abilities. Sheila the Kangaroo, for example, can double jump, while Sgt. Byrd is armed with rocket launchers and can fly indefinitely. Besides the primary quest to find dragon eggs, Year of the Dragon features an extensive set of minigames, which are split off from the levels into smaller zones. 
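Stepping back to the health system described earlier in this section, the following toy model sketches how Sparx might be represented as a simple state machine: the dragonfly steps through colour states as Spyro takes damage, disappears, and is restored by eating fodder. The colour order, class layout, and default life count are assumptions made for the example, not code or data from the game.

```python
# Toy model of the Sparx health mechanic described above. The colour
# sequence and structure are illustrative assumptions, not the game's code.

SPARX_STATES = ["gold", "blue", "green", None]   # None = Sparx has disappeared

class SpyroHealth:
    def __init__(self, lives: int = 3):
        self.state = 0            # index into SPARX_STATES
        self.lives = lives

    def take_hit(self) -> str:
        if SPARX_STATES[self.state] is None:
            # Without Sparx, the next hit costs a life and the player
            # restarts at the last saved checkpoint.
            self.lives -= 1
            self.state = 0
            return "lost a life, restarting at the last checkpoint"
        self.state += 1
        colour = SPARX_STATES[self.state]
        return f"Sparx is now {colour}" if colour else "Sparx has disappeared"

    def eat_fodder(self) -> None:
        # Consuming small wildlife ("fodder") restores Sparx by one step.
        self.state = max(0, self.state - 1)

health = SpyroHealth()
print(health.take_hit())   # Sparx is now blue
health.eat_fodder()        # back to gold
```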
Some of the minigames were featured in Spyro 2: Ripto's Rage! and were subsequently expanded for Year of the Dragon, while others are entirely new to the series. These minigames are played by Spyro or the other playable characters. ## Plot The game opens in the land of the dragons, where Spyro and his kin are celebrating the "Year of the Dragon", an event that occurs every twelve years when new dragon eggs are brought to the realm. During the celebration, the Sorceress' apprentice, Bianca, invades the Dragon Realms with an army of rhino-based creatures called Rhynocs, stealing all of the Dragon eggs. The Sorceress spreads the eggs throughout several worlds, split up into four home realms: Sunrise Spring, Midday Garden, Evening Lake, and Midnight Mountain. Spyro, Sparx, and Spyro's friend Hunter are sent down a hole to find the thieves and recover the dragon eggs. Spyro emerges in the Forgotten Realms (lands once inhabited by the dragons), where magic has gradually been disappearing. These worlds are under the iron-fisted reign of the Sorceress and her Rhynoc army. Spyro meets with Sheila the Kangaroo, Sergeant Byrd the Penguin, Bentley the Yeti, and Agent 9 the Monkey, all who help him on his quest. Spyro travels through each world, acquiring aid from the local inhabitants and rescuing the dragon eggs. It is revealed that the Sorceress banished the dragons, not realizing they were the source of magic, and wants to use the baby dragons' wings to concoct a spell that can grant her immortality. Once Bianca learns this, she turns against the Sorceress and helps Spyro defeat her. After the credits, the player can continue to find dragon eggs and gems to unlock the true ending, defeating the Sorceress once more for the final dragon egg. Spyro returns all of the baby dragons to the Dragon Realms. Along the journey to help Spyro recover the eggs, Hunter forms a crush on Bianca, and they begin a relationship, with Spyro and Sparx looking on in dismay. ## Development Development of Spyro: Year of the Dragon spanned about ten and a half months, from November 1999 to September 2000; the development team was influenced by a host of other games, including Doom and Crash Bandicoot. Among the new features touted before the game's release was "Auto Challenge Tuning", which Insomniac CEO Ted Price described as "invented to even out the gameplay difficulty curve for players of different abilities". The levels were made much larger than those in Spyro 2, so that more areas for minigames could be added; to prevent player confusion on where to go next, these areas were designed to load separately from the main hubs. Price stated that the addition of critters was a way to make the game more enjoyable and varied, instead of just adding more moves for Spyro. The game was named Year of the Dragon because it released during the year of the Dragon in the Chinese zodiac. In previews, publications such as IGN and GameSpot noted that the graphics had been improved, and that there were many new characters and locations. The new minigames were previewed, and IGN pointed out that they offered enough complexity to back up the simple gameplay. In an interview with GameSpot, Ted Price stated that the emphasis for the title was on the new critters, but that Spyro would not be left behind in the story. ### Music The music for Year of the Dragon was composed by Stewart Copeland, former drummer for the rock band The Police, alongside Ryan Beveridge, who helped produce the score. 
During the band's hiatus, Copeland composed several movie soundtracks, and composed the scores for the previous Spyro titles; Price stated that Copeland's offering for the third installment was his best work to date. In an interview, Copeland stated that his creative process for writing the music for the Spyro series always began by playing through the levels, trying to get a feel for each world's "atmosphere". Copeland noted the challenge of writing for games was to create music that would both be interesting to listen to and complemented the gameplay; his approach was to incorporate more complicated harmonies and basslines so that the music could seem fresh for players, even after repeated listening. He complimented the compact disc format of the PlayStation and its support for high quality audio; there were no technical constraints that stopped him from producing the sound he wanted. Copeland recorded entire orchestral scores for extra flourish when the visuals called for an expansive sound, but used more percussive and beat-driven melodies for "high-energy" moments in the game. ## Release Year of the Dragon released in the United States on October 10, 2000. While the upcoming PlayStation 2 was being aggressively promoted, Sony continued to support its three marquee PlayStation 1 games, including Year of the Dragon, with a \$10 million advertising campaign. In an effort to reduce software piracy, Year of the Dragon implemented crack protection in addition to the copy protection previous games had contained. This helped prevent hackers from cracking the game until two months after release, rather than the week it had taken for Spyro 2. Since as much as half of a game's lifetime sales occurred during the early release window, Insomniac considered their effort a success. The game sold more than two million units in the United States. It received a "Platinum" sales award from the Entertainment and Leisure Software Publishers Association (ELSPA), indicating sales of at least 300,000 units in the UK. By June 2007, the game sold more than 3.2 million units worldwide. ## Reception Year of the Dragon received "universal acclaim" according to review aggregator Metacritic. Critics including NextGen, AllGame, and PSXExtreme called it the best entry in the series thus far, with AllGame's Ben Simpson considering it one of the PlayStation's best platformers. GameSpot's Brad Shoemaker and IGN's David Smith noted that the game only brought minor refinements to the series, but that if players liked the previous games, they would enjoy Year of the Dragon as much or more. Andrew Reiner, writing for Game Informer, said the gameplay managed to be accessible for audiences of all ages, while still offering a challenge. Some reviewers critiqued that the camera could be annoying at times, particularly when it was unable to keep up with Spyro. GameSpot noted that while Year of the Dragon made no significant changes to the formula of its predecessors, the combination of new playable characters, more detailed graphics, and the wide variety of minigames made the game worth buying. IGN praised the game's appeal to all ages and the polished levels, as well as the successful multi-character focus. GamePro noted that the ability of the game to automatically drop the difficulty if players get stuck was an excellent feature. 
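Insomniac has not published how "Auto Challenge Tuning", mentioned in the development section and echoed in GamePro's review above, actually works. The sketch below is only a guess at the general idea (easing a challenge after repeated failed attempts), and every threshold, parameter, and class name in it is invented for illustration.

```python
# Hedged sketch of the general idea behind "Auto Challenge Tuning":
# ease a challenge after repeated failures so the difficulty curve stays
# smooth for players of different abilities. Insomniac has not published
# the real algorithm; every threshold and name here is invented.

class ChallengeTuner:
    def __init__(self, difficulty: float = 1.0, floor: float = 0.5,
                 failures_per_step: int = 3, step: float = 0.1):
        self.difficulty = difficulty          # 1.0 = challenge as designed
        self.floor = floor                    # never ease below this level
        self.failures_per_step = failures_per_step
        self.step = step
        self.consecutive_failures = 0

    def record_attempt(self, succeeded: bool) -> float:
        """Update the difficulty after each attempt and return the new value."""
        if succeeded:
            self.consecutive_failures = 0
            return self.difficulty
        self.consecutive_failures += 1
        if self.consecutive_failures % self.failures_per_step == 0:
            # Every few failures in a row, ease the challenge slightly,
            # e.g. slower hazards or a more generous timer.
            self.difficulty = max(self.floor, self.difficulty - self.step)
        return self.difficulty

tuner = ChallengeTuner()
for _ in range(6):
    tuner.record_attempt(succeeded=False)
print(round(tuner.difficulty, 2))   # 0.8 after six straight failed attempts
```

Whether the game also ramps the challenge back up after successes is not described in the sources summarised here, so the sketch simply leaves the difficulty unchanged on a successful attempt.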
NextGen's Kevin Rice provided one of the most positive reviews in which he stated the top-notch level design, intuitive controls and excellent graphics made the title the best Spyro game to date, and arguably the best PlayStation game overall. Publications like PSXExtreme thought the music helped bring atmosphere to the varied worlds, and AllGame enthused that "Insomniac should be commended for realizing the importance of music in games; it seems to enhance the whole experience." Other points of praise were the voice acting and character development. Joseph Parazen of GameRevolution found the sound to be well done but nothing extraordinary, arguing that the background music and sound effects were both fairly generic, while the voice acting was better than usual. He also called the game's premise its only real flaw, as it was too unoriginal, but added that "the story that unfolds as you actually play the game is flawlessly interwoven and quite entertaining". Other publications cautioned that elements of the game might feel too much like those of its predecessors. During the 4th Annual Interactive Achievement Awards, the Academy of Interactive Arts & Sciences nominated Spyro: Year of the Dragon for the "Art Direction", "Console Action/Adventure", "Console Game of the Year" and "Game of the Year" awards, all of which ultimately went to Final Fantasy IX, The Legend of Zelda: Majora's Mask, SSX and Diablo II, respectively. Year of the Dragon was developer Insomniac Games' last Spyro title. In an interview, Ted Price said that the company stopped producing the games because they could not do anything new with the character, and that after five years of development on a single series, the team wanted to do something different. They began prototyping what would become their next title, Ratchet & Clank, in the first half of 2000, while Year of the Dragon was still in production. The next entry in the series, Spyro: Enter the Dragonfly, would be released in 2002. Year of the Dragon was later packaged along with the first two Spyro games in the Spyro Reignited Trilogy.
1,218,280
Tom Derrick
1,169,445,708
Recipient of the Victoria Cross
[ "1914 births", "1945 deaths", "Australian Army officers", "Australian Army personnel of World War II", "Australian World War II recipients of the Victoria Cross", "Australian military personnel killed in World War II", "Australian people of Irish descent", "Australian recipients of the Distinguished Conduct Medal", "Burials at Labuan War Cemetery", "Military personnel from Adelaide" ]
Thomas Currie "Diver" Derrick, (20 March 1914 – 24 May 1945) was an Australian soldier and a recipient of the Victoria Cross, the highest decoration for gallantry "in the face of the enemy" awarded to members of the British and Commonwealth armed forces. In November 1943, during the Second World War, Derrick was awarded the Victoria Cross for his assault on a heavily defended Japanese position at Sattelberg, New Guinea. During the engagement, he scaled a cliff face while under heavy fire and silenced seven machine gun posts, before leading his platoon in a charge that destroyed a further three. Born in the Adelaide suburb of Medindie, South Australia, Derrick left school at the age of fourteen and found work in a bakery. As the Great Depression grew worse he lost his job and moved to Berri, working on a fruit farm before marrying in 1939. In July 1941, Derrick enlisted in the Second Australian Imperial Force, joining the 2/48th Battalion. He was posted to the Middle East, where he took part in the siege of Tobruk, was recommended for the Military Medal and promoted to corporal. Later, at El Alamein, Derrick was awarded the Distinguished Conduct Medal for knocking out three German machine gun posts, destroying two tanks, and capturing one hundred prisoners. Derrick returned to Australia with his battalion in February 1943, before transferring to the South West Pacific Theatre where he fought in the battle to capture Lae. Back in Australia the following February he was posted to an officer cadet training unit, being commissioned lieutenant in November 1944. In April 1945 his battalion was sent to the Pacific island of Morotai, an assembly point for the Allied invasion of the Philippines. Engaged in action the following month on the heavily defended hill Freda on Tarakan Island, Derrick was hit by five bullets from a Japanese machine gun. He died from his wounds on 24 May 1945. ## Early life Derrick was born on 20 March 1914 at the McBride Maternity Hospital in the Adelaide suburb of Medindie, South Australia, to David Derrick, a labourer from Ireland, and his Australian wife, Ada (née Whitcombe). The Derricks were poor, and Tom often walked barefoot to attend Sturt Street Public School and later Le Fevre Peninsula School. In 1928, aged fourteen, Derrick left school and found work in a bakery. By this time, he had developed a keen interest in sports, particularly cricket, Australian Rules Football, boxing and swimming; his diving in the Port River earned him the nickname of "Diver". With the advent of the Great Depression, Derrick scraped a living from odd jobs—such as fixing bicycles and selling newspapers—to supplement his job as a baker. When in 1931, the Depression worsened, Derrick lost his bakery job and, with friends, headed by bicycle for the regional town of Berri, approximately 225 kilometres (140 mi) away, in search of work. Jobs in Berri were hard to come by and Derrick and two friends spent the next few months living in a tent on the banks of the Murray River. When the annual Royal Adelaide Show opened that year, Derrick went to the boxing pavilion to accept a challenge of staying upright for three rounds with the ex-lightweight champion of Australia. Although he was knocked down in the second round, he immediately got back to his feet and won the bet; albeit at the cost of a black eye, and a few bruised ribs. Eventually, towards the end of 1931, Derrick found work picking fruit at a vineyard in Winkie, a short distance outside Berri. 
He later moved on to a full-time job at a nearby fruit farm, remaining there for the next nine years. On 24 June 1939, Derrick married Clarance Violet "Beryl" Leslie—his "one true love" whom he had met at a dance in Adelaide seven years earlier—at St Laurence's Catholic Church, North Adelaide. ## Second World War Derrick did not join up when war broke out in September 1939 but, like many Australians, enlisted after the fall of France in June 1940. He joined the Second Australian Imperial Force on 5 July 1940, and was posted to the 2/48th Battalion, 26th Brigade, as a private. Derrick first joined his unit at the Wayville Showgrounds, before basic training at Woodside. Derrick thrived on military life, but found discipline difficult to accept. In October, the 2/48th Battalion paraded through the streets of Adelaide to Mitcham railway station before its embarkation for the Middle East. The battalion's voyage overseas was postponed until 17 November, when the unit boarded the SS Stratheden. The ship made a stop at Perth, where Derrick was confined on board for going absent without leave to sightsee. He was soon in more trouble, and was charged and fined for punching another soldier who taunted him over this incident. ### North Africa On arrival in Palestine, the 2/48th Battalion encamped at El Kantara and began training in desert warfare. For relaxation, the battalion set up athletic events, and Derrick became well known for often winning cross-country races—and for organising a book on the outcomes. In March 1941, the unit went by train and truck to Alexandria, Egypt, then along the North African coast to Cyrenaica, in Libya, to join the 9th Australian Division. After the 2/48th Battalion completed its training with the 9th Division at Cyrenaica, they were moved further along the coast to Gazala. Then, just as they began to dig in, the battalion was abruptly withdrawn to Tobruk in response to the German Afrika Korps' advance. They entered Tobruk on 9 April 1941, and spent the following eight months besieged by Axis forces. While there, Derrick acquired an Italian Breda machine gun and regularly led fighting patrols against both German and Italian troops. Although Derrick's bravery was noted during the siege, he wrote in his diary about his constant fear of dying. On the night of 30 April, the Axis forces assaulted Tobruk's outer defences and managed to capture substantial ground. In response, the 2/48th Battalion was ordered to counter-attack the following evening. During the ensuing engagement, Derrick fought as a section member in the far left flank of the attack. After suffering heavy casualties in what Derrick described as "a bobby dazzler of a fire fight", the battalion was forced to withdraw. Praised for his leadership and bravery during the assault, Derrick was immediately promoted to corporal, and recommended for the Military Medal, but the award was never made. In late May, Derrick discovered a German posing as a British tank officer and reported him to company headquarters; the man was immediately arrested as a spy. Following a period of heavy fighting in June, the 2/48th Battalion was placed in reserve for a few days the following month. Promoted to platoon sergeant in September, Derrick—along with the rest of his battalion—was withdrawn from Tobruk and returned to Palestine aboard HMS Kingston on 22 October. Disembarking at Tel Aviv, they were given three days' leave in the city, before returning for training. 
Following a period of rest and light garrison duties in Syria, the 2/48th Battalion was rushed to El Alamein, Egypt, to reinforce the British Eighth Army. During the First Battle of El Alamein on 10 July 1942, Derrick took part in the 26th Australian Brigade's attack on Tel el Eisa. In the initial assault, Derrick, against a barrage of German grenades, led an attack against three machine gun posts and succeeded in destroying the positions before capturing over one hundred prisoners. During the Axis counter-attack that evening, the Australian line was overrun by tanks. As the German infantry following the tanks advanced, Derrick's company led a charge against the men. During the engagement, Derrick managed to destroy two German tanks using sticky bombs. Commended for his "outstanding leadership and courage", Derrick was awarded the Distinguished Conduct Medal for his part in the fighting at Tel el Eisa. The award was announced in a supplement to the London Gazette on 18 February 1943. Promoted to sergeant on 28 July, Derrick led a six-man reconnaissance on 3 October, successfully pinpointing several German machine gun positions and strongholds; this information was to be vital for the upcoming Second Battle of El Alamein. The El Alamein offensive was launched on 23 October, the 9th Australian Division taking part. At one point during the engagement, Derrick jumped up onto an Allied gun carrier heading towards the Germans. Armed with a Thompson submachine gun and under intense heavy fire, Derrick attacked and knocked out three machine gun posts while standing in the carrier. He then had the driver reverse up to each post so he could ensure each position was silenced. By the following morning, Derrick's platoon occupied all three posts. The members of the 2/48th Battalion who witnessed Derrick's action were sure he would be awarded the Victoria Cross, though no recommendation was made. For part of 31 October, Derrick assumed command of his company after all of the unit's officers had been killed or wounded in fierce fighting. On 21 November 1942, Derrick was briefly admitted to the 2/3rd Australian Field Ambulance with slight shrapnel wounds to his right hand and buttock. Twelve days later, the 2/48th Battalion left El Alamein and returned to Gaza in Palestine, where, later that month, Derrick attended a corps patrolling course. In January 1943, the 2/48th Battalion sailed home to Australia, aboard the SS Nieuw Amsterdam along with the rest of the 9th Division. ### South West Pacific Disembarking at Port Melbourne in late February 1943, Derrick was granted a period of leave and travelled by train to Adelaide where he spent time with Beryl. He rejoined his battalion—now encamped in the outskirts of Adelaide—before they went by train to the Atherton Tableland for training in jungle warfare. Brought up to full strength by the end of April, the 2/48th Battalion completed its training following landing-craft exercises near Cairns. On 23 July, Derrick was attached to the 21st Brigade Headquarters but admitted to hospital for old injuries to his right eye later the same day. After hospital, Derrick returned briefly to brigade headquarters before rejoining the 2/48th Battalion on 27 August. For much of August, the 2/48th Battalion had been in training for the Allied attack on Lae, in Papua New Guinea. The unit's objective was to land on a strip of land designated as "Red Beach", and then fight their way approximately 30 kilometres (19 mi) west towards Lae. 
Following a bombardment by American destroyers, Derrick's wave landed on the beach with minimal casualties on 4 September. Ten days later, the 2/48th Battalion's C Company—led by Derrick's platoon—captured Malahang airstrip, before Lae fell to the Allies on 16 September. Derrick was scornful of the Japanese defence of Lae, and wrote in his diary that "our greatest problem was trying to catch up" with the retreating Japanese force. #### Victoria Cross Following Lae, the 9th Division was tasked to seize Finschhafen, clear the Huon Peninsula and gain control of the Vitiaz Strait. By 2 October, one of the division's brigades had gained a foothold on Finschhafen, but soon encountered fierce Japanese resistance. In response to a Japanese counter-attack, the 26th Brigade was transferred to reinforce the Australian position on 20 October and, when the division switched to the offensive in November, the brigade was ordered to capture Sattelberg. Sattelberg was a densely wooded hill rising 1,000 metres (1,100 yd) and dominating the Finschhafen region; it was in an assault on this position that Derrick was to earn the Victoria Cross. The Australian attack on Sattelberg began in mid-November, the Japanese slowly giving ground and withdrawing back up the precipitous slopes. Each side suffered heavy casualties, and on 20 November, Derrick—who had been acting as company sergeant major for the previous month—was given command of B Company's 11 platoon after the unit had "lost all but one of their leaders". By 22 November, the 2/23rd and 2/48th Battalions had reached the southern slopes of Sattelberg, holding a position approximately 600 metres (660 yd) from the summit. A landslide had blocked the only road, so the final assault was made by infantry alone, without supporting tanks. On 24 November, the 2/48th Battalion's B Company was ordered to outflank a strong Japanese position sited on a cliff face, before attacking a feature 140 metres (150 yd) from the Sattelberg township. The nature of the terrain meant that the only possible route was up a slope covered with kunai grass directly beneath the cliffs. Over a period of two hours, the Australians made several attempts to clamber up the slopes to reach their objective, but each time they were repulsed by intense machine gun fire and grenade attacks. As dusk fell, it appeared impossible to reach the objective or even hold the ground already gained, and the company was ordered to withdraw. In response, Derrick replied to his company commander: "Bugger the CO [commanding officer]. Just give me twenty more minutes and we'll have this place. Tell him I'm pinned down and can't get out." Moving forward with his platoon, Derrick attacked a Japanese post that had been holding up the advance. He destroyed the position with grenades and ordered his second section around to the right flank. The section soon came under heavy machine gun and grenade fire from six Japanese posts. Clambering up the cliff face under heavy fire, Derrick held on with one hand while lobbing grenades into the weapon pits with the other, like "a man ... shooting for [a] goal at basketball". Climbing further up the cliff and in full view of the Japanese, Derrick continued to attack the posts with grenades before following up with accurate rifle fire. Within twenty minutes, he had reached the peak and cleared seven posts, while the demoralised Japanese defenders fled from their positions to the buildings of Sattelberg. 
Derrick then returned to his platoon, where he gathered his first and third sections in preparation for an assault on the three remaining machine gun posts in the area. Attacking the posts, Derrick personally rushed forward on four separate occasions and threw his grenades at a range of about 7 metres (7.7 yd), before all three were silenced. Derrick's platoon held their position that night, before the 2/48th Battalion moved in to take Sattelberg unopposed the following morning. The battalion commander insisted that Derrick personally hoist the Australian flag over the town; it was raised at 10:00 on 25 November 1943. The final assault on Sattelberg became known within the 2/48th Battalion as 'Derrick's Show'. Although he was already a celebrity within the 9th Division, the action brought him to wide public attention. On 23 March 1944, the announcement and accompanying citation for Derrick's Victoria Cross appeared in a supplement to the London Gazette. It read: > Government House, Canberra. 23rd March 1944. > > The KING has been graciously pleased to approve the award of the VICTORIA CROSS to:- > > Sergeant Thomas Currie Derrick, D.C.M., Australian Military Forces. > > For most conspicuous courage, outstanding leadership and devotion to duty during the final assault on Sattelberg in November, 1943. > > On 24th November, 1943, a company of an Australian Infantry Battalion was ordered to outflank a strong enemy position sited on a precipitous cliff-face and then to attack a feature 150 yards from the township of Sattelberg. Sergeant Derrick was in command of his platoon of the company. Due to the nature of the country, the only possible approach to the town lay through an open kunai patch situated directly beneath the top of the cliffs. Over a period of two hours many attempts were made by our troops to clamber up the slopes to their objective, but on each occasion the enemy prevented success with intense machine-gun fire and grenades. > > Shortly before last light it appeared that it would be impossible to reach the objective or even to hold the ground already occupied and the company was ordered to retire. On receipt of this order, Sergeant Derrick, displaying dogged tenacity, requested one last attempt to reach the objective. His request was granted. > > Moving ahead of his forward section he personally destroyed, with grenades, an enemy post which had been holding up this section. He then ordered his second section around on the right flank. This section came under heavy fire from light machine-guns and grenades from, six enemy posts. Without regard for personal safety he clambered forward well ahead of the leading men of the section and hurled grenade after grenade, so completely demoralising the enemy that they fled leaving weapons and grenades. By this action alone the company was able to gain its first foothold on the precipitous ground. > > Not content with the work already done, he returned to the first section, and together with the third section of his platoon advanced to deal with the three remaining posts in the area. On four separate occasions he dashed forward and threw grenades at a range of six to eight yards until these positions were finally silenced. > > In all, Sergeant Derrick had reduced ten enemy posts. From the vital ground he had captured the remainder of the Battalion moved on to capture Sattelberg the following morning. 
> > Undoubtedly Sergeant Derrick's fine leadership and refusal to admit defeat, in the face of a seemingly impossible situation, resulted in the capture of Sattelberg. His outstanding gallantry, thoroughness and devotion to duty were an inspiration not only to his platoon and company but to the whole Battalion. #### Later war service The 2/48th Battalion remained at Sattelberg until late December 1943, when it returned to the coast to regroup. On Christmas Eve, Derrick noted in his diary that the next day would be his "4th Xmas overseas" and "I don't care where I spend the next one I only hope I'm still on deck [alive]". On 7 February 1944, the battalion sailed from Finschhafen for Australia, disembarking at Brisbane. Granted home leave, Derrick made his way to South Australia for a short period with Beryl. In April, he was admitted to hospital suffering from malaria before returning to his battalion the following month. During this time, he was charged with being absent without leave and subsequently forfeited a day's pay. On 20 August 1944, Derrick was posted to an officer cadet training unit in Victoria. He requested that he be allowed to rejoin the 2/48th Battalion at the end of the course, contrary to normal Army policy, which prevented officers commissioned from the ranks from returning to their previous units. An exemption was granted to Derrick only after much lobbying. While at this unit, Derrick shared a tent with Reg Saunders, who later became the Army's first Indigenous Australian officer. Commissioned as a lieutenant on 26 November 1944, Derrick was granted twenty-four days leave. When he returned to the 2/48th Battalion as a reinforcement officer, his appointment as a platoon commander in his old company was met with "great jubilation". During this period, the battalion had been posted to Ravenshoe on the Atherton Tablelands for "an extensive training period", before being transported from Cairns to Morotai during April 1945. It was around this time that Derrick converted from his Church of England religious denomination and Salvationist beliefs to Catholicism—his wife's religion—though he was not overtly religious. On 1 May 1945, Derrick took part in the landing at Tarakan, an island off the coast of Borneo. Under the cover of a naval and aerial bombardment, he led his men ashore in the initial waves of the landing, where they were posted at the boundary of the 2/48th Battalion and 2/24th Battalion's area of responsibility. The Japanese force on the island mounted a determined resistance, and Derrick was later quoted in the Sunday Sun as saying he had "never struck anything so tough as the Japanese on Tarakan". Slowly pushing inland, the 2/48th Battalion's main task from 19 May was to capture a heavily defended hill code-named Freda. Derrick's platoon unsuccessfully probed Japanese positions on that day and the next, at a cost of two men killed and others wounded. He later recorded in his diary that these setbacks were a "bad show". On 21 May, Derrick and Lieutenant Colonel Bob Ainslie, the 2/48th Battalion's commander, debated the optimum size of the unit which should be used to capture the Freda position. Derrick successfully argued that a company was best, given the restrictions posed by the terrain. He was in high spirits that night, possibly in an attempt to lift his platoon's morale. On 22 May, Derrick's was one of two platoons that attacked a well-defended knoll and captured the position.
Derrick played a key role in this action, and coordinated both platoons during the final assault that afternoon. After capturing the knoll, the two platoons—reinforced by two sections of the 2/4th Commando Squadron—dug in to await an expected Japanese counter-attack. At about 03:30 on 23 May, a Japanese light machine gun fired into the Australian position. Derrick sat upright to see if his men were all right, and was hit by five bullets from the gun's second burst, which struck him from his left hip to the right of his chest. His runner, "Curly" Colby, dragged him behind cover, but Derrick could not be immediately evacuated as Japanese troops attacked at about 04:00. Derrick was in great pain, and told Colby that he had "had it". Despite his wounds, he continued to issue orders for several hours. When day broke, it was discovered that Derrick's platoon were directly overlooked by a Japanese bunker—though this would not have been visible during the assault late the previous evening. When stretcher bearers reached the position at dawn, Derrick insisted that the other wounded be attended to first. Derrick was carried off Freda later that morning, where he was met by the 26th Brigade's commander, Brigadier David Whitehead. The two men briefly conversed before Derrick excused himself, fearing that he had not much time left and wishing to see the padre. Stepping back, Whitehead saluted and sent for Father Arch Bryson. At the hospital, surgeons found that bullets had torn away much of Derrick's liver; he died on 24 May 1945 during a second operation on his wounds. He was buried in the 2/48th Battalion's cemetery on Tarakan that afternoon, and later re-interred at the Labuan War Cemetery, plot 24, row A, grave 9. ## Legacy Tom Derrick was widely mourned. His widow, Beryl, became prostrate with grief on hearing of his death; many members of the Army were affected, with one soldier lamenting that it felt as if "the whole war stopped". By the time Derrick's death was officially announced on 30 May, most Australians on Tarakan had heard the news and rumours had spread claiming that he had been speared or shot at short range by a sub-machine gun. The Japanese force on Tarakan learned of Derrick's death and tried to exploit it for propaganda purposes. They printed a leaflet which began "We lament over the death of Lieutenant General Terick CinC of Allied Force in Tarakan" and later included the question "what do you think of the death in action of your Commander in Chief ...?" This leaflet reached few Australian soldiers, and had little impact on them. "Tokyo Rose" also broadcast taunts over "Terick's" death. Derrick's reputation continued to grow after his death, and many Australian soldiers recalled any association, however slight, they had with him. To many Australians, he embodied the 'ANZAC spirit', and he remains perhaps the best-known Australian soldier of the Second World War. Historian Michael McKernan later remarked that, for his war service, Derrick had arguably deserved "a VC and two bars ... at El Alamein, at Sattelberg and now at Tarakan". In a 2004 television interview, then Chief of the Australian Defence Force, General Peter Cosgrove, was asked "Who was the best soldier of all time?" After a short pause, he replied: "Diver Derrick". This sentiment was endorsed by General Sir Francis Hassett.
Hassett—who, as a lieutenant colonel, had served at Finschhafen with II Corps headquarters—stated: > From what I learnt; not only was Derrick a magnificent soldier, but also a splendid leader who, immediately he saw a tactical problem, fixed it with either personal bravery or leadership imbued with determination and common sense. Derrick is also remembered for his personal qualities. He was sensitive and reflective. Despite a limited education, he was a "forceful and logical debater, with a thirst for knowledge". Derrick kept a diary, composed poetry, collected butterflies and frequently wrote to his wife while on active service. Historian Peter Stanley has compared Derrick's leadership abilities with those of Edward 'Weary' Dunlop, Ralph Honner and Roden Cutler. On 7 May 1947, Beryl Derrick attended an investiture ceremony at Government House, Adelaide, where she was presented with her late husband's Victoria Cross and Distinguished Conduct Medal by the Governor of South Australia, Lieutenant General Sir Charles Norrie. Derrick's Victoria Cross and other medals are now displayed at the Australian War Memorial, Canberra, along with a portrait by Sir Ivor Hele. A street in the neighbouring suburb of Campbell and a rest stop in the Remembrance Driveway between Sydney and Canberra were also named in his honour. In 1995, a public park on Carlisle St, Glanville, was named the Derrick Memorial Reserve in his honour, and his VC citation is displayed on a plaque there. In June 2008, a newly built bridge over the Port River on the Port River Expressway was named the Tom 'Diver' Derrick Bridge following a public campaign.
27,428,105
Maya (M.I.A. album)
1,166,905,067
2010 studio album by M.I.A.
[ "2010 albums", "Albums produced by Diplo", "Albums produced by John Hill (record producer)", "Albums produced by M.I.A. (rapper)", "Albums produced by Switch (songwriter)", "Avant-pop albums", "Interscope Geffen A&M Records albums", "Interscope Records albums", "M.I.A. (rapper) albums", "XL Recordings albums" ]
Maya (stylised as ΛΛ Λ Y Λ) is the third studio album by British rapper M.I.A., released on 7 July 2010 on her own label, N.E.E.T. Recordings, through XL Recordings and Interscope Records. Songwriting and production for the album were primarily handled by M.I.A., Blaqstarr and Rusko. M.I.A.'s long-time associates Diplo, Switch and her brother Sugu Arulpragasam also worked on the album, which was mainly composed and recorded at M.I.A.'s house in Los Angeles. The album's tracks centre on the theme of information politics and are intended to evoke what M.I.A. called a "digital ruckus"; with the album, elements of industrial music were incorporated into M.I.A.'s sound for the first time. A deluxe edition was released simultaneously, featuring four bonus tracks. Critics' opinions of the album were generally favourable although divided, with its musical style and lyrical content each attracting praise and criticism. In its first week of release, the album entered the UK Albums Chart at number 21, becoming her highest-charting album in the UK. It also became her highest-charting album in the US, reaching number nine on the Billboard 200, and debuted in the top 10 in Finland, Norway, Greece and Canada. M.I.A. promoted the album by releasing a series of tracks online, including "XXXO", "It Takes a Muscle" and "Born Free", the latter of which was accompanied by a short film-music video, which generated controversy due to its graphic imagery. She also performed at music festivals in the US and Europe to coincide with the album's release. During her promotion of the album, she became embroiled in a dispute with Lynn Hirschberg of The New York Times. ## Composition and recording English-Tamil musician M.I.A. (Mathangi "Maya" Arulpragasam) released her second album Kala in 2007, which achieved widespread critical acclaim, and was certified gold in the United States and silver in the United Kingdom. Six months after giving birth to her son Ikhyd in February 2009, she began composing and recording her third studio album in a home studio section of the Los Angeles house she had bought with her partner Ben Bronfman. She used instruments such as the portable dynamic-phrase synthesizer Korg Kaossilator to compose. She took the beat machine and began recording atop Mayan pyramids in Mexico. Much of the work on the album was undertaken at her house in Los Angeles, in what she called a "commune environment", before it was completed in a rented studio in Hawaii. She collaborated with writer-producer Blaqstarr because, in her opinion, "he simply makes good music". M.I.A.'s collaboration with Derek E. Miller of Sleigh Bells on the track "Meds and Feds" prompted her subsequent signing of the band to her label N.E.E.T., and according to Miller, this experience gave him the confidence to record the band's debut album Treats. Her creative partnership with the relatively unknown Rusko grew from a sense of frustration at what she saw as her now more mainstream associates suggesting sub-standard tracks due to their busy schedules. Diplo worked on the track "Tell Me Why", but at a studio in Santa Monica, California, rather than at the house. He claimed in an interview that, following the break-up of his personal relationship with M.I.A. some years earlier, he was not allowed to visit the house because "her boyfriend really hates me". Tracks for the album were whittled down from recording sessions lasting up to 30 hours.
Producer Rusko, who played guitar and piano on the album, described the pair getting "carried away" in the studio, appreciating the "mad distorted and hectic" sound they were able to create. Rusko said "She's got a kid, a little one year old baby, and we recorded his heart beat. We'd just think of crazy ideas". Rusko has described M.I.A. as the best artist he has ever worked with, saying that she had "been the most creative and I really had a good time making music with her". ## Music and lyrics M.I.A. called the new project "schizophrenic", and spoke of the Internet inspiration that could be found in the songs and the artwork. She also said that the album centred on her "not being able to leave [Los Angeles] for 18 months" and feeling "disconnected". She summed up the album's main theme as information politics. During the recording of the album, she spoke of the combined effects that news corporations and Google have on news and data collection, while stressing the need for alternative news sources that she felt her son's generation would need to ascertain truth. Maya was made to be "so uncomfortably weird and wrong that people begin to exercise their critical-thinking muscles". M.I.A. said "You can Google 'Sri Lanka' and it doesn't come up that all these people have been murdered or bombed, it's 'Come to Sri Lanka on vacation, there are beautiful beaches' ... you're not gonna get the truth till you hit like, page 56, and it's my and your responsibility to pass on the information that it's not easy anymore". Following these comments, M.I.A. received death threats directed at her and her son, which she also cited as an influence on the songs on the album. She summed up the album as a mixture of "babies, death, destruction and powerlessness". The singer revealed that going into recording the album, she had still not accepted that she was a musician, saying, "I'm still in denial, listening to too much Destiny's Child". With Maya, she stated "I was happy being the retarded cousin of rap... Now I'm the retarded cousin of singing." M.I.A. opted to sing, as opposed to rap, on several tracks on the album, telling Rolling Stone in early 2010 that she wished to produce something different from her previous album, which had "more emphasis on production". In a January 2010 interview with NME she spoke of being inspired by the film Food, Inc. and described the album as being about "exploring our faults and flaws" and being proud of them. The closing track, "Space", which was reportedly recorded using an iPhone app, is a ballad which Mikael Wood, writing in Billboard, described as "dreamy" and "sound[ing] like a Sega Genesis practicing its pillow talk". In contrast, Greg Kot of the Chicago Tribune described "Lovalot" as sounding "like it was recorded in a dank alley, the singer's voice reverberating amid percussion that sounds like doors creaking and rats scurrying across garbage cans". "XXXO" draws its inspiration from M.I.A.'s "cheesy pop side", and is based on the theme of the creation of a sex symbol. "Teqkilla" is the only track to address her relationship with Bronfman, through a reference to Seagram, the company owned by his family. "It Takes a Muscle" is a cover version of a track originally recorded in 1982 by Dutch group Spectral Display, and is performed in a reggae style. The opening track "The Message", featuring a male lead vocalist, parodies the words of the traditional song "Dem Bones" to link Google to "the government". 
Kitty Empire wrote in The Observer that these conspiratorial government connections to Google and the thoughts of Dzhennet Abdurakhmanova, the Russian teenager who bombed Moscow's tube system in revenge for the death of her husband, were inner-world issues pondered in "Lovalot" with "a mixture of nonsense rhyme, militant posturing and pop-cultural free-flow; her London glottal stop mischievously turns 'I love a lot' into 'I love Allah' ". Ann Powers in the Los Angeles Times said that "M.I.A. turns a call to action into a scared girl's nervous tic. Synths click out a jittery, jagged background. The song doesn't justify anything, but it reminds us that there is a person behind every lit fuse". Powers also commented on how "Born Free" mixed the boasting style often found in hip hop music with lines depicting the lives of those enduring poverty and persecution. "Illygirl", a track found only on the deluxe edition of the album, is written from the point of view of an abused but tough teenager, whom critic Robert Christgau said could be the "kid-sister-in-metaphor" of the swaggering persona adopted by M.I.A. on the track "Steppin Up". Samples used on the album were taken from artists as diverse as the electronic duo Suicide and gospel choir the Alabama Sacred Harp Singers. "Internet Connection", one of four bonus tracks on the deluxe edition of the album, was recorded in collaboration with a group of Filipino Verizon workers. M.I.A. described the sound and imagery of the album as capturing a "digital ruckus", adding that "so many of us have become typists and voyeurs. We need a digital moshpit like we've never seen, harder than how people were doing it in the punk era. We need that energy, but digitally". M.I.A. herself picked out "Steppin Up", "Space" and "Teqkilla" as her favourite tracks on the album. She said that she contemplated using only the sound of drills as the backing for "Steppin Up", but concluded that this was "too experimental" an approach. According to Jim Farber of New York Daily News, Maya is an avant-pop album that takes influence from "the most maddeningly catchy bits of electro-clash, hip-hop, Bollywood, dub and dance music". Farber also noted the significant industrial rock influence on the album, likening it to "the late-'80s work of Ministry". Julianne Escobedo Shepherd of The Fader commented on the increasingly industrial feel of the tracks made available prior to the album's release, a style which had not previously been incorporated into her music. On a similar note, Michael Saba of Paste believed the album was "a collection of sparse, industrial-influenced tracks that sound more like post-apocalyptic Nine Inch Nails than Arulpragasam’s trademark realpolitik rap". ## Release and artwork The album was originally set to be released on 29 June 2010, but in May M.I.A.'s record label announced a new release date of 13 July. In late April, the artist posted a twitpic of the track listing for the new album. She also commented that at the time she was "open to suggestions" regarding the album's title. Two weeks later, a blog posting on her record label's official website revealed that the album would be entitled /\/\/\Y/\, which spells out M.I.A.'s own forename, Maya, in leetspeak. The title follows on from previous albums named after her father (2005's Arular) and mother (2007's Kala). Some reviewers used the stylised title while others did not. M.I.A.'s official Myspace page uses both titles.
The album was released in conventional physical and digital formats and as an iTunes LP. The album's cover features the singer's face almost completely hidden by YouTube player bars. MTV's Kyle Anderson described the cover, which was previewed in June 2010, as "a typically busy, trippy, disorienting piece of art" and speculated that it might be "a statement about 21st century privacy". Additional art direction for the album was provided by Aaron Parsons. M.I.A. used her mother's Tamil phonebook to find a wedding photographer to provide images for the album. Photographers for the album were Ravi Thiagaraja, M.I.A. and Jamie Martinez. Elements of the artwork had previously been used in one of a series of billboard images, all designed by musicians, which were projected onto landmarks in London by a guerrilla project called BillBored during the 2010 British general election. The deluxe edition of the album features a lenticular slipcase. Music website Prefix listed it as one of the 10 worst album covers of 2010, likening it to a "child's first computer-class-assignment". When questioned about the difficulty of finding her album title on search engines such as Google, she noted that she chose to use forward slashes and backward slashes because they were easy to type and because she liked the way the album title looked on music players such as iTunes. She also suggested that it was a deliberate attempt to avoid detection by internet search engines. The Guardian's Sian Rowe commented that M.I.A.'s deliberate "shrinking away from a mainstream audience" by the use of difficult, unsearchable symbols was part of a growing new underground scene perhaps trying to create a "generation gap", where only "the youngest and the most enthusiastic" would seek out such band names by reading the right online sources. ## Promotion On 12 January 2010, M.I.A. posted a video clip on Twitter, which featured a new song, but revealed no information about it other than the heading "Theres space for ol dat I see" (sic). The following day her publicist confirmed that the track was entitled "Space Odyssey" and had been produced in collaboration with Rusko to protest a travel piece about Sri Lanka printed in The New York Times. The track made it onto the final album under the revised title "Space". The same month, she shot a short film for the song "Born Free". At the end of April the track was released as a promotional single, along with the accompanying short film. The film, directed by Romain Gavras, depicts a military unit rounding up red-headed young men who are then shot or forced to run across a minefield. The film, which also features nudity and scenes of drug use, caused widespread controversy and was either removed or labelled with an age restriction on YouTube. In the weeks following the release of the film, M.I.A. was the most blogged about artist on the Internet, according to MP3 blog aggregator The Hype Machine. M.I.A. found the controversy "ridiculous", saying that videos of real-life executions had not generated as much controversy as her video. In the run-up to the album's release, "XXXO", which Entertainment Weekly described as the "first official single" from the forthcoming album, "Steppin Up", "Teqkilla" and "It Takes a Muscle" were released online. On 6 July 2010 she made the entire album available via her Myspace page. On 20 September, "Story To Be Told" received a video, on its own website, featuring the song's lyrics in CAPTCHA formatting.
In December, "It Takes a Muscle" was released as a two-track promotional single. The new album was publicised during Jay-Z's performance at the Coachella Valley Music and Arts Festival in April, when a blimp flew across the venue announcing that M.I.A.'s new album would be released on 29 June 2010. M.I.A. promoted the album with a series of appearances at music festivals, including the Hard festival in New York and The Big Chill in Herefordshire. Her performance at the latter was cut short due to a stage invasion by fans. She also performed at the Flow Festival in Finland, where she was joined onstage by Derek E. Miller playing guitar during her performance of "Meds and Feds", and the Lokerse Feesten in Lokeren, Flanders, Belgium, where her performance drew a crowd of 13,500, the biggest of the 10-day music festival. In September she announced a tour that would last until the end of the year. M.I.A. also promoted the album with an appearance on the "Late Show with David Letterman", during which she performed "Born Free" with Martin Rev of Suicide playing keyboards, backed by a group of dancers styled to look like M.I.A. In November 2010 she appeared on the British television show Later... with Jools Holland, performing "Born Free" and "It Takes a Muscle", the latter with members of The Specials. While promoting the album, M.I.A. became involved in a dispute with Lynn Hirschberg of The New York Times, who interviewed her in March 2010 and whose resulting article portrayed the singer as pretentious and attention seeking. In response, M.I.A. posted Hirschberg's telephone number on her Twitter page and later uploaded her own audio recording of the interview, highlighting the discrepancies between what she said and what was reported. The piece was criticised for its yellow journalism by some, however M.I.A. received varying degrees of support and criticism for the ensuing fallout from the media. Benjamin Boles wrote in Now that, while Hirschberg's piece came across as a "vicious ... character assassination", M.I.A's subsequent actions were "childish" and made her "the laughing stock of the internet". The paper later printed a correction on the story, acknowledging that some quotes had been taken out of context. The incident prompted Boots Riley of the band Street Sweeper Social Club to comment on how artists had access to media that allowed writers to be held accountable and that M.I.A.'s move was "brilliant". ## Critical reception Maya received moderately positive reviews from critics. At Metacritic, which assigns a normalised rating out of 100 based on reviews from mainstream critics, the album received an average score of 68 based on 41 reviews, which indicates "generally favorable reviews". Reviews of the album began to appear a month before its release after the album leaked in low quality onto the internet. Simon Vozick-Levinson of Entertainment Weekly called the album "surely the year's most divisive major-label release". Charles Aaron, writing in Spin, gave the album four and a half out of five stars, his review deeming the song "Lovalot" her "riskiest gambit yet". Matthew Bennett of Clash gave a similar score, calling it a "towering work". Mojo writer Roy Wilkinson called it a "startling fusillade of to-the-moon pop music". Writing for the BBC Online, Matthew Bennett characterised the album as "loud, proud, and taking no prisoners" and also praised the album's lighter tracks, such as "Teqkilla", which he called "enjoyably demented but utterly catchy". 
Rolling Stone writer Rob Sheffield said the album was M.I.A.'s "most aggressive, confrontational and passionate yet", praising her "voracious ear for alarms, sirens, explosions, turning every jolt into a breakbeat" and her consequent lyrics as "expansive". Los Angeles Times writer Ann Powers commended the album as "an attempt by an artist who's defined herself through opposition to engage with the system that she has entered, for better or worse, and to still remain recognizable to herself", characterising Maya's foregrounded ideas as "a struggle worthy of a revolutionary". In his consumer guide for MSN Music, critic Robert Christgau gave the album an A rating and complimented its "beats and the spunky, shape-shifting, stubbornly political, nouveau riche bundle of nerves who holds them together". Other critics were not as complimentary towards the album. Charlotte Heathcote of British newspaper the Daily Express said that, while M.I.A. could "still lay claim to being one of our most imaginative, uncompromising artists", there were "only glimmers of brilliance" on the album. Chicago Tribune writer Greg Kot gave the album two and a half out of four stars and expressed a mixed response towards M.I.A.'s "[embracing] pop more fervently than ever". Entertainment Weekly's Leah Greenblatt was critical of the album, stating that it sounded "murky and almost punishingly discordant, as if the album has been submerged underwater and then set upon by an arsenal of exceptionally peeved power tools". She went on to state that nothing on the album sounded "truly vital", or as revolutionary as M.I.A. wanted the public to believe. Stephen Troussé, writing in Uncut, described the album as "anticlimactic" and "self-satisfied" and said that it suffered from "diminished horizons". Mehan Jayasuriya of PopMatters noted M.I.A.'s "self-aggrandizing" as a weakness, adding that Maya lacks "the focus and confidence of M.I.A.'s previous albums". Jesse Cataldo of Slant Magazine noted that the album "has the feel of a vanity project" and wrote "It may be an above-average album, but its aesthetic matches her persona only at its shallowest levels, in the thinness of its ideas and the often-forceful ugliness of its message". Chris Richards of The Washington Post called it "a disorienting mix of industrial clatter and digital slush" and noted "there isn't much to sing along to". ### Accolades In December 2010, NME named "XXXO" and "Born Free" the number two and number 11 best tracks of the year respectively. Maya appeared in a number of magazines' lists of the best albums of the year. The album was placed at number five on the "2010 Pitchfork Readers Poll" list of the "Most Underrated Album" of the year. Spin placed Maya at number eight in its list of the best releases of 2010, and Rolling Stone listed it at number 19 in its countdown. ## Commercial performance Maya debuted at number 21 on the UK Albums Chart with first-week sales of 7,138 copies, 18 places higher than the peak position achieved by Kala, immediately making it M.I.A.'s highest-charting album in the UK. The following week it dropped out of the top 40. It also charted in a number of other European countries, reaching the top 10 in Finland, Greece and Norway. In the United States, it debuted at number nine on the Billboard 200, nine places higher than the peak position achieved by Kala, although it sold only 28,000 copies in its first week of release, compared with the 29,000 which the earlier album sold in the same period.
Maya fell to number 34 in its second week on the chart, selling 11,000 copies. As of September 2013, the album had sold 99,000 copies in the US. The album also topped Billboard's Dance/Electronic Albums chart and reached the top five on two of the magazine's other charts. Maya also entered the top 10 on the Canadian Albums Chart. The single "XXXO" reached the top 40 in Spain and the UK, and "Teqkilla" reached number 93 on the Canadian Hot 100 on digital downloads alone. ## Track listing ## Personnel Credits adapted from the liner notes of the deluxe edition of Maya. - Maya Arulpragasam – mixing (tracks 1, 5, 12, 15); production (tracks 4–6, 9, 10, 12–15); art direction, creative direction, executive producer, photography - Ben H. Allen – mixing (tracks 3, 11) - Sugu Arulpragasam – production (track 1) - Blaqstarr – production (tracks 3, 8, 13, 14, 16); mixing (tracks 13, 14, 16) - Diplo – production (tracks 7, 11) - Robert Gardner – mix assistance (tracks 3, 11) - John Hill – production (tracks 4, 5) - Jaime Martínez – photography - Derek E. Miller – mixing, production (track 10) - Aaron Parsons – art direction - Neal Pogue – mixing (track 2) - Rusko – production (tracks 2–4, 6, 12) - Shane P. Stoneback – mixing (track 10) - Switch – mixing (track 2); production (tracks 4, 5, 9); vocal production (track 15) - Ravi Thiagaraja – photography ## Charts ### Weekly charts ### Monthly charts ### Year-end charts ## Release history
43,239,543
AI Mark IV radar
1,167,287,054
Operational model of the world's first air-to-air radar system
[ "Aircraft radars", "Military equipment introduced from 1940 to 1944", "Military radars of the United Kingdom", "World War II British electronics", "World War II radars" ]
Radar, Airborne Interception, Mark IV (AI Mk. IV), produced in the United States as the SCR-540, was the world's first operational air-to-air radar system. Early Mk. III units appeared in July 1940 on converted Bristol Blenheim light bombers, while the definitive Mk. IV reached widespread availability on the Bristol Beaufighter heavy fighter by early 1941. On the Beaufighter, the Mk. IV arguably played a role in ending the Blitz, the Luftwaffe's night bombing campaign of late 1940 and early 1941. Early development was prompted by a 1936 memo from Henry Tizard on the topic of night fighting. The memo was sent to Robert Watt, director of the radar research efforts, who agreed to allow physicist Edward George "Taffy" Bowen to form a team to study the problem of air interception. The team had a test bed system flying later that year, but progress was delayed for four years by emergency relocations, three abandoned production designs, and Bowen's increasingly adversarial relationship with Watt's replacement, Albert Percival Rowe. Ultimately, Bowen was forced from the team just as the system was finally maturing. The Mk. IV series operated at a frequency of about 193 megahertz (MHz) with a wavelength of 1.5 metres, and offered detection ranges against large aircraft up to 20,000 feet (6.1 km). It had numerous operational limitations, including a maximum range limited to roughly the aircraft's height above the ground and a minimum range that was barely close enough to allow the pilot to see the target. Considerable skill was required of the radar operator to interpret the displays of its two cathode ray tubes (CRTs) for the pilot. It was only with the increasing proficiency of the crews, along with the installation of new ground-based radar systems dedicated to the interception task, that interception rates began to increase. These roughly doubled every month through the spring of 1941, during the height of the Blitz. The Mk. IV was used in the front lines for only a short period. The introduction of the cavity magnetron in 1940 led to rapid progress in microwave-frequency radars, which offered far greater accuracy and were effective at low altitudes. The prototype Mk. VII began to replace the Mk. IV at the end of 1941, and the AI Mk. VIII largely relegated the Mk. IV to second-line duties by 1943. The Mk. IV's receiver, originally a television receiver, was used as the basis of the ASV Mk. II radar, Chain Home Low, AMES Type 7, and many other radar systems throughout the war. ## Development ### Genesis By late 1935, Robert Watt's development of what was then known as Range and Direction Finding (RDF) at Bawdsey Manor in Suffolk on the east coast of England had succeeded in building a system able to detect large aircraft at ranges over 40 miles (64 km). On 9 October, Watt wrote a memo calling for the construction of a chain of radar stations running down the east coast of England and Scotland, spaced about 20 miles (32 km) apart, providing early warning for the entire British Isles. This became known as Chain Home (CH), and soon the radars themselves became known by the same name. Development continued, and by the end of 1935 the range had improved to over 80 miles (130 km), reducing the number of stations required. During 1936 the experimental system at Bawdsey was tested against a variety of simulated attacks, along with extensive development of interception theory carried out at RAF Biggin Hill.
One observer was Hugh Dowding, initially as the director of research for the RAF, and later as the commander of RAF Fighter Command. Dowding noted that the CH stations provided so much information that operators had problems relaying it to the pilots, and the pilots had problems understanding it. He addressed this through the creation of what is today known as the Dowding system. The Dowding system relied on a private telephone network forwarding information from the CH stations, Royal Observer Corps (ROC), and pip-squeak radio direction finding to a central room where the reports were plotted on a large map. This information was then telephoned to the four regional Group headquarters, who re-created the map covering their area of operations. Details from these maps would then be sent to each Group's Sectors, covering one or two main airbases, and from there to the pilots via radio. This process took time, during which the target aircraft moved. As the CH systems were only accurate to about 1 km at best, subsequent reports were scattered and could not place a target more accurately than about 5 miles (8.0 km). This was fine for daytime interceptions; the pilots would have normally spotted their targets within this range. ### Night bombing Henry Tizard, whose Committee for the Scientific Survey of Air Defence spearheaded development of the CH system, grew concerned that CH would be too effective. He expected that the Luftwaffe would suffer so many losses that they would be forced to call off daylight attacks, and would turn to a night bombing effort. The Germans had done the same in World War I when the London Air Defence Area successfully blocked daytime raids, and British attempts to intercept their bombers at night had proved comically ineffective. Tizard's concerns would prove prophetic; Bowen called it "one of the best examples of technological forecasting made in the twentieth century". Tizard was aware that tests showed an observer would only be able to see an aircraft at night at a range of about 1,000 feet (300 m), perhaps 2,000 feet (610 m) under the very best moonlit conditions; the Dowding system could not place a fighter with anything like that accuracy. Adding to the problem would be the loss of information from the ROC, who would not be able to spot the aircraft except under the very best conditions. If the interception was to be handled by radar, it would have to be arranged in the short time between initial detection and the aircraft passing beyond the CH sites on the shoreline. Tizard put his thoughts in a 27 April 1936 letter to Hugh Dowding, who was at that time the Air Member for Research and Development. He also sent a copy to Watt, who forwarded it to the researchers who were moving to their new research station at Bawdsey Manor. In a meeting at the Crown and Castle pub, Bowen pressed Watt for permission to form a group to study the possibility of placing a radar on the aircraft itself. This would mean the CH stations would only need to get the fighter into the general area of the bomber; the fighter would then be able to use its own radar for the rest of the interception. Watt was eventually convinced that the staffing needed to support development of both CH and a new system was available, and the Airborne Group was spun off from the CH effort in August 1936. ### Early efforts Bowen started the Airborne Interception radar (AI) efforts by discussing the issue with two engineers at nearby RAF Martlesham Heath, Fred Roland and N.E. Rowe.
He also made a number of visits to Fighter Command headquarters at RAF Bentley Priory and discussed night fighting techniques with anyone who proved interested. The first criteria for an airborne radar, operable by either the pilot or an observer, included: - weight not to exceed 200 pounds (91 kg), - installed space of 8 cubic feet (0.23 m<sup>3</sup>) or less, - maximum power use of 500 W (watts), and - antennas of 1 foot (30 cm) length or less. Bowen led a new team to build what was then known as RDF2, the original systems becoming RDF1. They began looking for a suitable receiver system, and immediately had a stroke of good luck; EMI had recently constructed a prototype receiver for the experimental BBC television broadcasts on 6.7 m wavelength (45 MHz). The receiver used seven or eight vacuum tubes (valves) on a chassis only 3 inches (7.6 cm) in height and about 18 inches (46 cm) long. Combined with a CRT display, the entire system weighed only 20 pounds (9.1 kg). Bowen later described it as "far and away better than anything which [had] been achieved in Britain up to that time." Only one receiver was available, which was moved between aircraft for testing. A transmitter of the required power was not available in portable form. Bowen decided to gain some familiarity with the equipment by building a ground-based transmitter. Placing the transmitter in Bawdsey's Red Tower and the receiver in the White Tower, they found they were able to detect aircraft as far as 40 to 50 miles (64–80 km) away. ### RDF 1.5 With the basic concept proven, the team then looked for a suitable aircraft to carry the receiver. Martlesham provided a Handley Page Heyford bomber, a reversal of duties from the original Daventry Experiment that led to the development of CH in which a Heyford was the target. One reason for the selection of this design was that its Rolls-Royce Kestrel engines had a well-shielded ignition system which gave off minimal electrical noise. Mounting the receiver in the Heyford was not a trivial task; the standard half-wave dipole antenna needed to be about 3.5 metres (11 ft) long to detect wavelengths of 6.7 m. The solution was eventually found by stringing a cable between the Heyford's fixed landing gear struts. A series of dry cell batteries lining the aircraft floor powered the receiver, providing high voltage for the CRT through an ignition coil taken from a Ford car. When the system took to the air for the first time in the autumn of 1936, it immediately detected aircraft flying in the circuit at Martlesham, 8 to 10 miles (13–16 km) away, in spite of the crudity of the installation. Further tests were just as successful, with the range pushed out to 12 miles (19 km). It was around this time that Watt arranged for a major test of the CH system at Bawdsey with many aircraft involved. Dowding had been promoted to Air Officer Commanding Fighter Command, and was on hand to watch. Things did not go well; for unknown reasons the radar did not pick up the approaching aircraft until they were far too close to arrange interception. Dowding was watching the screens intently for any sign of the bombers, failing to find one when he heard them pass overhead. Bowen averted total disaster by quickly arranging a demonstration of his system in the Red Tower, which picked out the aircraft as they re-formed 50 miles (80 km) away. The system, then known as RDF 1.5, would require a large number of ground-based transmitters to work in an operational setting. 
Moreover, good reception was only achieved when the target, interceptor, and transmitter were roughly in a line. Due to these limitations, the basic concept was considered unworkable as an operational system, and all effort moved to designs with both the transmitter and receiver in the interceptor aircraft. Bowen would later lament this decision in his book Radar Days, where he noted his feelings about failing to follow up on the RDF 1.5 system: > With hindsight, it is now clear that this was a grave mistake. ... In the first place, it would have given them an interim device on which test interceptions could have been carried out at night, two whole years before the outbreak of war. This would have provided pilots and observers with training in the techniques of night interception, something they did not actually get until war was declared. Another attempt to revive the RDF 1.5 concept, today known more generally as bistatic radar, was made in March 1940 when a modified set was mounted in Bristol Blenheim serial L6622. This set was tuned to the transmissions of the new Chain Home Low transmitters, dozens of which were being set up along the UK coastline. These experiments did not prove successful, with a detection range on the order of 4 miles (6.4 km), and the concept was abandoned for good. ### Giant acorns, shorter wavelengths, and ASV The team received a number of Western Electric Type 316A large acorn vacuum tubes in early 1937. These were suitable for building transmitter units of about 20 W continuous power for wavelengths of 1 to 10 m (300 to 30 MHz). Percy Hibberd built a prototype transmitter with pulses of a few hundred watts and fitted it to the Heyford in March 1937. In testing, the transmitter proved only barely suitable in the air-to-air role, with short detection ranges due to its relatively low power. But to everyone's surprise, it was able to easily pick out the wharves and cranes at the Harwich docks a few miles south of Bawdsey. Shipping appeared as well, but the team could not explore this properly as the Heyford was forbidden to fly over water. After this success, Bowen was granted two Avro Anson patrol aircraft, K6260 and K8758, along with five pilots stationed at Martlesham to test this ship-detection role. Early tests demonstrated a problem with noise from the ignition system interfering with the receiver, but this was soon resolved by fitters at the Royal Aircraft Establishment (RAE). Meanwhile, Hibberd had successfully built a new push–pull amplifier using two of the same tubes but working in the 1.25-metre band, an upper-VHF band (around 240 MHz); below 1.25 m the sensitivity dropped off sharply. Gerald Touch, originally from the Clarendon Laboratory, converted the EMI receiver to this wavelength by using the existing set as the intermediate frequency (IF) stage of a superheterodyne circuit. The original 45 MHz frequency would remain the IF setting for many following radar systems. On its first test on 17 August, Anson K6260 with Touch and Keith Wood aboard immediately detected shipping in the English Channel at a range of 2 to 3 miles (3.2–4.8 km). The team later increased the wavelength slightly to 1.5 m to improve sensitivity of the receiver, and this 200 MHz setting would be common to many radar systems of this era. After hearing of the success, Watt called the team and asked if they would be available for testing in September, when a combined fleet of Royal Navy ships and RAF Coastal Command aircraft would be carrying out military exercises in the Channel.
On the afternoon of 3 September the aircraft successfully detected the battleship HMS Rodney, the aircraft carrier HMS Courageous and the light cruiser HMS Southampton, receiving very strong returns. The next day they took off at dawn and, in almost complete overcast, found Courageous and Southampton at a distance of 5 to 6 miles (8.0–9.7 km). As they approached and the ships eventually became visible, they could see the Courageous launching aircraft to intercept them. The promise of the system was not lost on observers; Albert Percival Rowe of the Tizard Committee commented that "This, had they known, was the writing on the wall for the German Submarine Service." Airborne radar for detecting ships at sea came to be known as Air-to-Surface-Vessel (ASV) radar. Its successes led to continued demands for additional tests. Growing interest and increased efforts in ASV contributed to delays in airborne intercept sets; the team spent a considerable time in 1937 and 1938 working on the ASV problem.

### ASV emerges

In May 1938 A.P. Rowe took over Bawdsey Manor from Watt, who had been appointed Director of Communications Development at the Air Ministry. The remainder of 1938 was taken up with practical problems in the development of ASV. One change was the use of the new Western Electric 4304 tubes in place of the earlier 316As. These allowed a further increase in power to pulses of around 2 kW, which provided detection of ships at 12 to 15 miles (19–24 km). Their test target was the Cork Lightship, a small boat anchored about 4 miles (6.4 km) from the White Tower. This performance against such a small vessel was enough to prompt the Army to begin work on what would become the Coast Defence (CD) radars. The Army cell had first been set up on 16 October 1936 to develop the Gun Laying radar systems.

Another change was due to every part of the equipment having different power requirements. The tubes for the transmitter used 6 V to heat their filaments, but 4 V was needed for the receiver tubes and 2 V for the filament of the CRT. The CRT also needed 800 V for its electron gun, while the transmitter tubes needed 1,000 V for their modulators (drivers). At first, the team used motor-generator sets placed in the Anson and Battle fuselages, or batteries connected in various ways as in the earliest sets in the Heyfords. Bowen decided the solution was to build a power supply that would produce all of these DC voltages from a single 240 V 50 Hz supply using transformers and rectifiers. This would allow them to power the radar systems using mains power while the aircraft were on the ground. British aero engines were normally equipped with a power take-off shaft that led to the rear of the engine. In twin-engine aircraft like the Anson, one of these would be used for a generator that powered the aircraft instruments at 24 V DC, while the other was left unconnected and available for use. Following a suggestion from Watt to avoid Air Ministry channels, in October Bowen flew one of the Battles to the Metropolitan-Vickers (Metrovick) plant in Sheffield, where he pulled the DC generator off the engine, dropped it on the table, and asked for an AC alternator of similar size and shape. Arnold Tustin, Metrovick's lead engineer, was called in to consider the problem, and after a few minutes he returned to say that he could supply an 80 V unit at 1200 to 2400 Hz and 800 W, even better than the 500 W requested.
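The single-supply scheme described above can be summarised in a short sketch; the rail voltages are the ones quoted in the text, while the turns ratios are purely illustrative assumptions and rectifier and smoothing losses are ignored:

```python
# Rough sketch of the single-supply idea: every rail named in the text is derived from
# one AC input (240 V mains on the ground, or the engine-driven alternator in flight)
# through its own transformer winding, with rectification for the DC rails.
RAILS_V = {
    "transmitter filaments": 6.0,
    "receiver filaments": 4.0,
    "CRT filament": 2.0,
    "CRT electron gun": 800.0,
    "transmitter modulators": 1000.0,
}

def turns_ratio(v_out: float, v_in: float = 240.0) -> float:
    """Ideal secondary:primary turns ratio needed to produce v_out from v_in."""
    return v_out / v_in

for rail, volts in RAILS_V.items():
    print(f"{rail:>22}: {volts:6.0f} V (turns ratio ~{turns_ratio(volts):.3f})")
```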
Bowen had an order for 18 pre-production units placed as soon as possible, and the first units started arriving at the end of October. A second order for 400 more quickly followed. Eventually about 133,800 of these alternators would be produced during the war.

### Working design

Properly testing AI required an aircraft with the speed to intercept a modern bomber. In October 1938 the team was provided with two Fairey Battle light bombers, which had performance and size more suited to the night fighter role. Battles K9207 and K9208, and the crew to fly them, were sent to Martlesham; K9208 was selected to carry the radar, while K9207 was used as a target and support aircraft.

By 1939, it was clear that war was looming, and the team began to turn their primary attention from ASV back to AI. A new set, built by combining the transmitter unit from the latest ASV units with the EMI receiver, first flew in a Battle in May 1939. The system demonstrated a maximum range that was barely adequate, around 2 to 3 miles (3.2–4.8 km), but the overly long minimum range proved to be a far greater problem. The minimum range of any radar system is set by its pulse width, the length of time that the transmitter is turned on before it turns off so the receiver can listen for reflections from targets. If the echo from the target is received while the transmitter is still sending, the echo will be swamped by the much more powerful transmitted pulse and its backscatter from nearby objects. For instance, a radar with a pulse width of 1 μs would not be able to see returns from a target less than 150 m away, because the radar signal, travelling at the speed of light, would cover the round-trip distance of 300 m before that 1 μs interval had passed. In the case of ASV this was not a problem; aircraft would not approach a ship on the surface more closely than their altitude of perhaps a few thousand feet, so a longer pulse width was fine. But in the AI role the minimum range was pre-defined by the pilot's eyesight, at 300 m or less for night interception, which demanded pulse widths of around a microsecond or less. This proved very difficult to arrange, and minimum ranges under 1,000 feet were hard to produce. Gerald Touch invested considerable effort in solving this problem and eventually concluded that a sub-1 μs transmitter pulse was possible. However, when this was attempted it was found that signals would leak through to the receiver and cause it to be blinded for a period longer than 1 μs. He developed a solution using a time base generator that both triggered the transmitter pulse and cut out the front end of the receiver, making it far less sensitive during this period. This concept became known as squegging. In extensive tests in Anson K6260, Touch finally settled on a minimum range of 800 feet (240 m) as the best compromise between visibility and sensitivity.

Additionally, the sets demonstrated a serious problem with ground reflections. The broadcast antenna sent out the pulse over a very wide area covering the entire forward side of the aircraft. This meant that some of the broadcast energy struck the ground and reflected back to the receiver. The result was a solid line across the display at a distance equal to the aircraft's altitude, beyond which nothing could be seen. This was fine when the aircraft was flying at 15,000 feet (4.6 km) or more, where the ground return was at about the maximum useful range, but it meant that interceptions carried out at lower altitudes were limited to increasingly shorter ranges.
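A short calculation makes the pulse-width argument and the ground-return limit above concrete; it assumes the idealised relation R_min = c·τ/2 and ignores receiver recovery time:

```python
C = 299_792_458      # speed of light, m/s
FT_PER_M = 3.28084

def min_range_m(pulse_width_s: float) -> float:
    """Idealised minimum range: the echo must arrive after the pulse ends,
    so R_min = c * tau / 2. Receiver recovery time is ignored."""
    return C * pulse_width_s / 2

for tau_us in (2.0, 1.0, 0.5):
    r_m = min_range_m(tau_us * 1e-6)
    print(f"pulse width {tau_us:3.1f} us -> minimum range ~{r_m:5.0f} m ({r_m * FT_PER_M:5.0f} ft)")

# Ground-return limit: the broad transmit pattern illuminates the ground, which returns
# a solid echo at a slant range equal to the aircraft's height, so the useful maximum
# range is roughly the altitude.
altitude_ft = 10_000
print(f"at {altitude_ft} ft, the ground return caps the useful range at ~{altitude_ft} ft")
```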
### Dowding visits

In May 1939 the unit was transferred to a Battle, and in mid-June "Stuffy" Dowding was taken on a test flight. Bowen operated the radar and made several approaches from various points. Dowding was impressed, and asked for a demonstration of the minimum range. He instructed Bowen to have the pilot hold position once they had made their closest approach on the radar scope so they could look up and see how close that really was. Bowen relates the outcome:

> For the previous 30 or 40 minutes our heads had been under the black cloth shielding the cathode-ray tubes. I whipped the cloth off and Stuffy looked straight ahead and said "Where is it? I can't see it." I pointed straight up; we were flying almost directly underneath the target. "My God" said Stuffy "tell him to move away, we are too close."

Dowding's version of the same events differs. He states he was "tremendously impressed" by the potential, but pointed out to Bowen that the 1,000 foot minimum range was a serious handicap. He makes no mention of the close approach, and his wording suggests that it did not take place. Dowding reports that when they met again later in the day, Bowen stated that he had made a sensational advance, and the minimum range had been reduced to only 220 feet (67 m). Dowding reports this uncritically, but the historical record demonstrates no such advance had been made.

On their return to Martlesham, Dowding outlined his concerns about night interceptions and the characteristics of a proper night fighter. Since the interceptions were long affairs, the aircraft needed to have long endurance. To ensure that friendly fire was not an issue, pilots would be required to identify all targets visually. This meant a separate radar operator would be needed, so the pilot would not lose his night vision by looking at the CRTs. And finally, since the time needed to arrange an interception was so long, the aircraft required armament that could guarantee destruction of a bomber in a single pass—there was little chance a second interception could be arranged.

Dowding later wrote a memo considering several aircraft for the role, rejecting the Boulton Paul Defiant two-seat turret fighter due to its cramped rear turret area. He was sure the Bristol Beaufighter would be perfect for the role, but it would not be ready for some time. So he selected the Bristol Blenheim light bomber for the immediate term, sending two of the early prototypes to Martlesham Heath to be fitted with the radar from the Battles. Blenheim K7033 was fitted with the radar, while K7034 acted as the target. Both of these aircraft lost a propeller in flight but landed safely; K7033's propeller was never found, but K7034's was returned to Martlesham the next day by an irate farmer.

### Mk. I

Even at the 1.5 m wavelength, antennas of practical size had relatively low gain and very poor resolution; the transmitter antenna created a fan-shaped signal over 90 degrees wide. This was not useful for homing on a target, so some system of direction indication was required. The team seriously considered phase comparison as a solution, but could not find a suitable phase shifting circuit. Instead, a system of multiple receiver antennas was adopted, each one located so that only a certain section of the sky was visible. Two horizontal receivers were mounted on either side of the fuselage and only saw reflections from the left or right, slightly overlapping in the middle.
Two vertical receivers were mounted above and below the wing, seeing reflections above or below the aircraft. Each pair of antennas was connected to a motorized switch that rapidly switched between the pairs, a technique known as lobe switching. Both signals were then sent to a cathode ray tube (CRT) for display, with one of them passing through a voltage inverter. If the target was to the left, the display would show a longer blip on the left than the right. When the target was dead ahead, the blips would be equal length. Such a solution had an inherently limited accuracy of about five degrees, but it was practical in terms of limiting the antenna sizes.

By this point the Air Ministry was desperate to get any unit into service. Satisfied with his visit in May, Dowding suggested that the Mk. I was good enough for operational testing purposes. On 11 June 1939, AI was given the highest priority and provisions were made to supply 11 additional Blenheims to No. 25 Squadron at RAF Hawkinge (for a total of 21). Since each of the parts came from different suppliers, and the fitters were unfamiliar with any of it, members of the AI team would have to hand-assemble the components as they arrived and instruct the fitters on the sets. Watt was waiting for the order, and in 1938 had arranged for production of the transmitters at Metrovick and receivers at A.C. Cossor. These turned out to be the wrong products: Metrovick had been told to make a direct copy (a "Chinese copy") of the 1937 design by Percy Hibberd, but Bawdsey had delivered the wrong prototype to Metrovick, who copied it. The Cossor receivers were found to be unusable, weighing as much as the entire transmitter and receiver, and having sensitivity about half that of the EMI lash-up.

### Pye strip

It was at this point that the team had yet another stroke of luck. Bowen's former thesis advisor at King's College, London, was Edward Appleton, who had worked with Watt and Harold Pye during the 1920s. Pye had since gone on to form his own radio company, Pye Ltd., and was active in the television field. They had recently introduced a new television set based on an innovative vacuum tube developed by Philips of Holland, the EF50 pentode. Appleton mentioned the Pye design to Bowen, who found it to be a great improvement over the EMI version, and was happy to learn there had been a small production run that could be used for their experiments. The design became widely known as the Pye strip.

The Pye strip was such an advance on the EMI unit that the EF50 became a key strategic component. As a German invasion of the west loomed in 1940, the British contacted Philips and arranged a plan to remove the company's board of directors to the UK, along with 25,000 more EF50s and another 250,000 bases, onto which Mullard, Philips's UK subsidiary, could build complete tubes. A destroyer, HMS Windsor, was dispatched to pick them up in May, and left the Netherlands only days before the country fell to the German invasion on 15 May 1940. The Pye strip, and its 45 MHz intermediate frequency, would be re-used in many other wartime radar systems.

New Blenheims eventually arrived at Martlesham, these having been experimentally converted to heavy fighters with the addition of four .303 British (7.7 mm) Browning machine guns, while removing the mid-upper turret to reduce weight by 800 lb (360 kg) and drag by a small amount.
These arrived without any of the racking or other fittings required to mount the radar, which had to be constructed by local fitters. Further deliveries were not the Blenheim Mk. IF and IIF models originally provided, but the new Mk. IVF versions with a longer and redesigned nose. The gear had to be re-fitted for the new aircraft, and the receivers and CRTs were mounted in the enlarged nose, allowing the operator to indicate corrections to the pilot through hand signals as a backup if the intercom failed. By September, several Blenheims were equipped with what was now officially known as AI Mk. I and training of the crews began with No. 25 Squadron at RAF Northolt. Robert Hanbury Brown, a physicist who would later work on radar in the US, and Keith Wood joined them in August 1939, helping fitters keep the systems operational, and coming up with useful methods for interception. Near the end of August, Dowding visited the base and saw the radars in the nose and pointed out to Bowen that the enemy gunners would see the light from the CRTs and shoot the operator. The sets were re-fitted once again, returning to the rear of the fuselage, which caused more delays. With the units in the rear, the only communications method was via the intercom. Contemporary systems used the radio as the intercom as well, but the TR9D sets used in RAF aircraft used the voice channel for 15 seconds every minute for the pip-squeak system, blocking communications. Even when modified sets were supplied that addressed this, the radar was found to interfere strongly with the intercom. A speaking tube was tried but found to be useless. Newer VHF radios being developed through this same period did not suffer these problems, and the Blenheims were moved to the front of the queue to receive these units. ### Emergency move Bawdsey, right on the eastern coast in a relatively secluded location, could not effectively be protected from air attack or even bombardment from boats offshore. The need to move the team to a more protected location on the opening of hostilities had been identified long before the war. During a visit to his alma mater at Dundee University, Watt approached the rector to ask about potentially basing the team there, on short notice. When the Germans invaded Poland and war was declared on 3 September 1939, the research teams packed up and arrived in Dundee to find the rector only dimly recalling the conversation and having nothing prepared for their arrival. Students and professors had since returned after the summer break, and only two small rooms were available for the entire group. The AI group and their experimental aircraft of D Flight, Aeroplane and Armament Experimental Establishment (A&AEE), moved to an airport some distance away at Perth, Scotland. The airport was completely unsuitable for the fitting work, with only a single small hangar available for aircraft work while a second was used for offices and labs. This required most of the aircraft to remain outside while others were worked on inside. Nevertheless, the initial group of aircraft was completed by October 1939. With this success, more and more aircraft arrived at the airport to have the AI team fit radars, most of these being the ASV units for patrol aircraft like the Lockheed Hudson and Short Sunderland patrol aircraft, followed by experimental fittings to Fleet Air Arm Fairey Swordfish torpedo bombers and Supermarine Walrus. 
Bernard Lovell joined the radar team at the personal suggestion of Patrick Blackett, an original member of the Tizard Committee. He arrived at Dundee and met Sidney Jefferson, who told him he had been transferred to the AI group. The conditions at Perth were so crude that it was clearly affecting work, and Lovell decided to write to Blackett about it on 14 October. Among many concerns, he noted that; > The situation here is really unbelievable. Here they are shouting for hundreds of aircraft to be fitted. The fitters are working 7 days per week, and occasionally 15 hour days. In their own words, "the apparatus is tripe even for a television receiver." Blackett removed any direct reference to Lovell and passed it to Tizard, who discussed the issue with Rowe during his next visit to Dundee. Rowe immediately surmised who had written the letter and called Lovell in to discuss it. Lovell thought little of it at the time, but later learned that Rowe had written back to Tizard on 26 October: > He clearly has no idea that I am aware he has written to Blackett. Judging purely from the letter you quoted to me I expected to find Lovell was a nasty piece of work who should be removed from the work. I find, however, that this is not the case. Rowe surmised from the conversation that the main problem was that Perth was simply not suitable for the work. He decided that most of the research establishment, now known as the Air Ministry Research Establishment (AMRE), would remain in Dundee while the AI team should be moved to a more suitable location. This time the chosen location was RAF St Athan in Wales, about 15 miles (24 km) from Cardiff. St Athan was a large base that also served as an RAF training ground, and should have been an ideal location. When the AI team arrived on 5 November 1939, they found themselves being housed in a disused hangar with no office space. A small amount of relief was found by using abandoned Heyford wings as partitions, but this proved largely useless as the weather turned cold. As the main doors of the hangar were normally left open during the day, it was often too cold to hold a screwdriver. Bowen complained that the conditions "would have produced a riot in a prison farm." Ironically, Bawdsey was ignored by the Germans for the entire war, while St Athan was attacked by a Junkers Ju 88 only weeks after the team arrived. The single bomb struck the runway directly, but failed to explode. ### Mk. II With October's deliveries, the Air Ministry began plans for a production AI Mk. II. This differed largely by the addition of a new timebase system, which it was hoped would reduce the minimum range to a very useful 400 feet (120 m). When the new units were installed, it was found the minimum range had increased to 1000 feet. This problem was traced to unexpectedly high capacitance in the tubes, and with further work they were only able to return to the Mk. I's 800 feet. Blenheims from a number of squadrons were fitted with the Mk. II, with three aircraft each being allotted to No. 23, 25, 29, 219, 600 and 604 Squadrons in May 1940. Two experimental versions of the Mk. II were tested. The AIH unit used GEC VT90 Micropup valves in place of the Acorns for additional power, the H standing for high power of about 5 kW. A test unit fitted to a Blenheim IF proved promising in March and a second was delivered in early April but development was ended for unknown reasons. 
The AIL had a locking timebase, which improved maximum range, at the cost of a greatly increased minimum range of 3,000 to 3,500 feet (0.91–1.07 km) and work was abandoned. While aircraft were being delivered, Bowen, Tizard and Watt pressed the Air Ministry to appoint someone to command the entire night fighting system, from ensuring aircraft delivery and radar production to the training of pilots and ground crew. This led to the formation of the Night Interception Committee (so-named in July 1940) under the direction of Richard Peirse. Peirse raised the Night Interception Unit at RAF Tangmere on 10 April 1940; it was later renamed the Fighter Interception Unit (FIU). Bowen led a series of lectures at Bentley Priory, on the theory of radar guided night interception and concluded that the fighter would require a speed advantage of 20 to 25% over its target. The main Luftwaffe bombers—the Junkers Ju 88, Dornier Do 17Z, and Heinkel He 111—were capable of flying at about 250 miles per hour (400 km/h), at least with a medium load. This implied a fighter would need to fly at least 300 miles per hour (480 km/h) and the Blenheim, fully loaded, was capable of only 280 miles per hour (450 km/h). Bowen's concerns over the poor speed of the Blenheim were proved right in combat. ### Mk. III The Mk. II was used for only a short time when the team replaced its transmitter section with one from the ASV Mk. I, which used the new Micropup valves. The new AI Mk. III sets were experimentally fitted to about twenty Blenheim IFs in April 1940, where they demonstrated an improved maximum range of 3 to 4 miles (4.8–6.4 km). However, they still suffered from a long minimum range, from 800 to 1,500 ft depending on how the receiver was adjusted. This led to what Hanbury Brown describes as "the great minimum range controversy". From October 1939, working around the clock to install the remaining Mk. I sets at Perth and St Athan, the team had had no time for further development of the electronics. They were aware that the minimum range was still greater than was satisfactory but Bowen and Hanbury Brown were convinced there was a simple solution they could implement once the initial installations were completed. Meanwhile, the current sets continued to be installed, although all were aware of their problems. On 24 January 1940 Arthur Tedder, the Director General for Research, admitted to Tizard that: > I am afraid much, if not most, of the trouble is due to our fatal mistake in rushing ahead into production and installation of AI before it was ready for production, installation, or for use. This unfortunate precipitance necessarily wrecked research work on AI since it involved diverting the research team from research proper to installation. The issue of minimum range continued to be raised, working its way through the Air Ministry and eventually to Harold Lardner, head of what was then known as the Stanmore Research Centre. Rowe and his deputy Bennett Lewis were called to meet with Lardner to discuss the issue. Apparently without informing Lardner of Bowen and Hanbury Brown's potential solution, or the fact that they could not work on it due to the ongoing installations, they agreed to have Lewis investigate the matter. Lewis then sent a contract to EMI to see what they could do. According to both Bowen and Hanbury Brown, Rowe and Lewis instigated these events deliberately to pull control of the AI project from the AI team. At Dundee, Lewis raised the issue and two solutions to improving the range were considered. 
The Mk. IIIA consisted of a set of minor changes to the transmitter and receiver with the goal of reducing the minimum range to about 800 feet (240 m). Lewis' own solution was the Mk. IIIB, which used a second transmitter that broadcast a signal that mixed with the main one to cancel it out during the end of the pulse. He believed this would reduce the minimum range to only 600 feet (180 m). Two copies of the IIIA entered tests in May 1940 and demonstrated little improvement, with the range reduced to only 950 feet (290 m), but at the cost of significantly reduced maximum range of only 8,500 feet (2.6 km). Tests of the IIIB waited while the AI team moved from St Athan to Worth Matravers in May, and were eventually overtaken by events. Development of both models was cancelled in June 1940. Word that Lewis was developing his own solutions to the minimum range problem reached the AI team at St Athan some time in early 1940. Bowen was extremely upset. He had become used to the way the researchers had been put into an ill-advised attempt at production but now Rowe was directly removing them from the research effort as well. Tizard heard of the complaints and visited Dundee in an attempt to smooth them over, which evidently failed. On 29 March 1940 a memo from Watt's DCD office announced a reorganization of the Airborne Group. Gerald Touch would move to the RAE to help develop production, installation and maintenance procedures for the Mk. IV, several other members would disperse to RAF airfields to help train the ground and air crews directly on the units, while the rest of the team, including Lovell and Hodgkin, would re-join the main radar research teams in Dundee. Bowen was notably left out of the reorganization; his involvement in AI ended. In late July, Bowen was invited to join the Tizard Mission, which left for the US in August 1940. ### Prototype use Mk. III went into extensive testing at No. 25 Sqn in May 1940 and another troubling problem was found. As the target aircraft moved to the sides of the fighter, the error in the horizontal angle grew. Eventually, at about 60 degrees to the side, the target was indicated as being on the other side of the fighter. Hanbury Brown concluded that the problem was due to reflections between the fuselage and engine nacelles, due to the change to the long-nose IVF from the short-nose IF and IIF. In previous examples they had used the fuselage of the aircraft as the reflector, positioning and angling the antennas to run along the nose or wing leading edges. He tried moving the horizontal antennas to the outside of the nacelles, but this had little effect. Another attempt using vertically oriented antennas "completely cured the problem", and allowed the antennas to be positioned anywhere along the wing. When he later tried to understand why the antennas had always been horizontal, he found this had come from the ASV trials where it was found this reduced reflections from the waves. Given the parallel development of the ASV and AI systems, this arrangement had been copied to the AI side without anyone considering other solutions. At a meeting of the Night Interception Committee on 2 May it was decided that the bomber threat was greater than submarines, and the decision was made to move 80 of the 140 ASV Mk. I transmitters to AI, adding to 70 being constructed by EKCO (E.K. Cole). These would be turned into 60 IIIA's and 40 IIIB's. 
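The Mk. IIIB's idea of broadcasting a second signal to cancel out the tail of the main pulse, described above, can be illustrated with a toy waveform; the pulse shape, decay constant and threshold below are invented purely for illustration:

```python
import numpy as np

t = np.linspace(0.0, 6.0, 601)                          # time in microseconds
main = np.where(t < 1.0, 1.0, np.exp(-(t - 1.0)))       # 1 us pulse with a slowly decaying tail
cancel = np.where(t < 1.0, 0.0, -np.exp(-(t - 1.0)))    # inverted signal timed to the tail
combined = main + cancel                                # tail cancelled, pulse ends sharply

def settles_by(signal: np.ndarray, threshold: float = 0.05) -> float:
    """Time (us) after which the signal stays below the threshold."""
    above = np.nonzero(signal > threshold)[0]
    return float(t[above[-1]]) if above.size else 0.0

print(f"uncancelled pulse falls below 5% after ~{settles_by(main):.1f} us")
print(f"with the cancelling signal, after ~{settles_by(combined):.1f} us")
```

A shorter effective pulse tail means the receiver can start listening sooner, which is why the cancellation scheme was expected to cut the minimum range.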
At a further meeting on 23 May, Tizard, perhaps prompted by comments from Director of Signals (Air), suggested that the units were not suitable for operational use, especially due to low reliability, and should be confined to daylight training missions. By 26 July 70 Blenheims were equipped with Mk. III and the RAE wrote an extensive report on the system. They too had concerns about what they called "partially reliable" systems and pointed out that a significant problem was due to the unreliable antenna connections and cabling. But they went further and stated that the self-exciting concept would simply not work for a production system. These systems used transmitter circuitry as an oscillator to produce the operating frequency, but they had the disadvantage of taking some time to stabilize and then shut down again. Hanbury Brown agreed with this assessment, as did Edmund Cook-Yarborough who had led work on the IIIB at Dundee. ### Mk. IV The RAE's comments about the self-exciting transmitter were not random: they were referring to work that was just coming to fruition at EMI as a direct result of Lewis' earlier contract. EMI engineers Alan Blumlein and Eric White had developed a system that dispensed with a self-exciting transmitter circuit and instead used a separate modulator that fed the signal into the transmitter for amplification. The oscillator signal was also sent to the receiver, using it to damp its sensitivity. The combined effect was to sharpen the transmitted pulse, while reducing 'ringing' in the receiver. In a test in May 1940, Hanbury Brown was able to clearly see the return at a range of 500 feet (150 m), and could still make it out when they approached to 400. Touch, now at RAE Farnborough and having delivered improved versions of ASV, quickly adapted the new oscillator to the existing Mk. III transmitter. Adapting the vertical transmitting "arrowhead", folded twin-dipole antenna design on the nose of the aircraft, from Hanbury Brown's work with the Mk. III eliminated any remaining problems. In its first operational tests in July 1940, the new AI Mk. IV demonstrated the ability to detect another Blenheim at a range of 20,000 feet (6.1 km) and continued to track it down to a minimum of 500. Hanbury Brown stated that "it did everything that we had originally hoped that airborne radar would do for night-fighting". He went on to note that even though Mk. IV arrived only one year after the first Mk. I's, it felt like they had been working for ten years. A production contract for 3,000 units was immediately started at EMI, Pye, and EKCO. When they left for the US in August, the Tizard Mission team took a Mk. IV, ASV Mk. II and IFF Mk. II with them, via the National Research Council (Canada). During the following discussions, it was agreed that the US would produce AI, while Canada would produce ASV. Western Electric arranged a production license for the Mk. IV in the US, where it was known as the SCR-540. Deliveries began for the P-70 (A-20 Havoc) and PV-1 aircraft in 1942. ## Operational use ### Early operations Throughout the development of the Mk. I to III, various units had been flying the systems in an effort to develop suitable interception techniques. Very early on it was decided to dispense with the full reporting chain of the Dowding system and have the radar operators at the Chain Home (CH) sites talk to the fighters directly, greatly reducing delays. 
This improved matters, and on an increasing number of occasions aircraft received direction from the CH stations towards real targets. The crews were bound to get lucky eventually, and this came to pass on the night of 22/23 July 1940, when a Blenheim IF of the FIU received direction from the Poling CH station and picked up the target at 8,000 feet (2.4 km) range. The CH radar operator directed them until the observer visually spotted a Do 17. The pilot closed to 400 feet (120 m) before opening fire, continuing to close until they were so close that oil spewing from the target covered their windscreen. Breaking off, the Blenheim flipped upside down, and with no visibility the pilot didn't recover until reaching 700 feet (210 m). The target crashed off Bognor Regis, on the south coast of England. This was the first confirmed successful use of airborne radar known to history. In spite of this success, it was clear the Blenheim was simply not going to work as a fighter. On several occasions the CH stations directed the fighters to a successful radar capture, only to have the target slowly pull away from the fighter. In one case the Blenheim was able to see the target, but when it spotted them the aircraft increased power and disappeared. From 1 to 15 October 1940 Mk. III-equipped fighters from RAF Kenley made 92 flights, performed 28 radar interceptions, and made zero kills. The arrival of the Mk. IV in July 1940 improved matters, but it was the delivery of the Bristol Beaufighter starting in August that produced a truly effective system. The Beaufighter had considerably more powerful engines, speed that allowed it to catch its targets, and a powerful gun pack of four 20 mm cannon that could easily destroy a bomber in a single pass. Squadron use began in October, and its first victory came soon after on 19/20 November when a Beaufighter IF of No. 604 Squadron destroyed a Ju 88A-5 near Chichester, very close to the first success of the Mk. III. ### Dowding and AI Through August and September 1940 the Luftwaffe met the Dowding system in the Battle of Britain, and in spite of great effort, failed to defeat Fighter Command. Tizard's letter of 1936 proved prophetic; with their loss during the day, the Luftwaffe moved to a night campaign. The Blitz began in earnest in September. Dowding had been under almost continual criticism from all quarters long before this point; he was still in power after the normal retirement age for officers, had a prickly personality that earned him the nickname "Stuffy", and kept tight-fisted control over Fighter Command. He was also criticized for his inactivity in ending the fight between Keith Park and Trafford Leigh-Mallory, commanders of 11 and 12 Group around London. Nevertheless, he had the favour of Winston Churchill and the demonstrated success of the Battle of Britain, which rendered most complaints moot. The Blitz changed everything. In September 1940 the Luftwaffe flew 6,135 night sorties, leading to only four combat losses. The Dowding system was incapable of handling night interceptions in a practical manner, and Dowding continued to state that the only solution was to get AI into operation. Seeking alternatives, the Chief of the Air Staff, Cyril Newall, convened a review committee under the direction of John Salmond. Salmond built a heavyweight panel including Sholto Douglas, Arthur Tedder, Philip Joubert de la Ferté, and Wilfrid Freeman. 
At their first series of meetings on 14 September, the Night Defence Committee began collecting a series of suggestions for improvements, which were discussed in depth on 1 October. These were passed on to Dowding for implementation, but he found that many of their suggestions were already out of date. For instance, they suggested building new radars that could be used over land, allowing the fight to continue throughout the raid. A contract for this type of radar had already been sent out in June or July. They suggested that the filter room at RAF Bentley Priory be devolved down to the Group headquarters to improve the flow of information, but Dowding had already gone a step further and devolved night interception to the Sector level at the airfields. Dowding accepted only four of the suggestions. This was followed by another report at the request of Churchill, this time by Admiral Tom Phillips. Phillips returned his report on 16 October, calling for standing patrols by Hawker Hurricane fighters guided by searchlights, the so-called cat's eye fighters. Dowding replied that the speed and altitude of modern aircraft made such efforts almost useless, stating that Phillips was proposing to "merely revert to a Micawber-like method of ordering them to fly about and wait for something to turn up." He again stated that AI was the only solution to the problem. Phillips had not ignored AI, but pointed out that "At the beginning of the war, AI was stated to be a month or two ahead. After more than a year, we still hear that in a month or so it may really achieve results." Dowding's insistence on waiting for AI led directly to his dismissal on 24 November 1940. Many historians and writers, including Bowen, have suggested his dismissal was unwise, and that his identification of AI radar as the only practical solution was ultimately correct. While this may be true, the cat's eye force did result in a number of kills during the Blitz, although their effectiveness was limited and quickly overshadowed by the night fighter force. In May 1941 cat's eye fighters claimed 106 kills to the night fighters' 79, but flew twice as many sorties to do so. Coincidentally a similar system to cat's eye fighters, Wilde Sau, would be arrived at independently by the Luftwaffe later in the war. ### GCI In spite of best efforts, AI's maximum range remained fixed at the aircraft's altitude, which allowed Luftwaffe aircraft to escape interception by flying at lower altitudes. With a five-mile (8 km) accuracy in the ground direction, that meant anything below 25,000 feet (7.6 km) would be subject to this problem, which accounted for the vast majority of Luftwaffe sorties. The lack of ground-based radar coverage over land was another serious limitation. On 24 November 1939, Hanbury Brown wrote a memo on Suggestions for Fighter Control by RDF calling for a new type of radar that would directly display both the target aircraft and the intercepting fighter, allowing ground controllers to directly control the fighter without need for interpretation. The solution was to mount a radar on a motorized platform so it rotated continually, sweeping the entire sky. A motor in the CRT display would rotate the beam deflection plates in synchronicity, so blips seen when the antenna was at a particular angle would be displayed at the same angle on the scope display. 
Using a phosphor that lasted at least one rotation, blips for all targets within range would be drawn on the display at their correct relative angles, producing a map-like image known as a plan position indicator (PPI). With both the bombers and fighters now appearing on the same display, the radar operator could direct an intercept directly, eliminating all of the delays. The problem was finding a radar that was suitably small; CH radar's huge towers obviously could not be swung about in this fashion. By this time the Army had made considerable progress on adapting the AI electronics to build a new radar for detecting ships in the English Channel, CD, with an antenna that was small enough to be swung in bearing. In 1938, RAF pilots noted they could avoid detection by CH while flying at low altitudes, so in August 1939, Watt ordered 24 CD sets under the name Chain Home Low (CHL), using them to fill gaps in CH coverage. These systems were initially rotated by pedalling on a bicycle frame driving a gear set. A joke of the era "was that one could always identify one of the W.A.A.F. R.D.F. operators by her bulging calf muscles and unusually slim figure". Motorized controls for CHL were introduced in April 1941.

By late 1939 it was realized that the rotation of the beam on the radar display could be accomplished using electronics. In December 1939, G.W.A. Dummer began development of such a system, and in June 1940 a modified CHL radar was motorized to continually spin in bearing, and connected to one of these new displays. The result was a 360-degree view of the airspace around the radar. Six copies of the prototype Ground Control Interception (GCI) radars were hand-built at AMES (Air Ministry Experimental Station) and RAE during November and December 1940, and the first went operational at RAF Sopley on New Year's Day 1941, with the rest following by the end of the month. Prior to their introduction in December 1940 the interception rate was 0.5%; by May 1941, with a number of operational GCI stations and better familiarity, it was 7%, with a kill rate of around 2.5%.

### End of The Blitz

It was only the combination of AI Mk. IV, the Beaufighter and GCI radars that produced a truly effective system, and it took some time for the crews of all involved to gain proficiency. As they did, interception rates began to increase geometrically:

- In January 1941, three aircraft were shot down
- In February, this improved to four, including the first kill by a Beaufighter
- In March, twenty-two aircraft were shot down
- In April, this improved to forty-eight
- In May, this improved to ninety-six

The percentage of these attributed to the AI-equipped force continued to rise; thirty-seven of the kills in May were by AI-equipped Beaus or Havocs, and by June these accounted for almost all of the kills. By this point, the Luftwaffe had subjected the UK to a major air campaign and caused an enormous amount of destruction and displacement of civilians. However, it failed to bring the UK to peace talks, nor did it have any obvious effect on economic output. At the end of May the Germans called off The Blitz, and from then on the UK would be subject to dramatically lower rates of bombing. How much of this was due to the effects of the night fighter force has been a matter of considerable debate among historians. The Germans were turning their attention eastward, and most of the Luftwaffe was sent to support these efforts.
Even in May, the losses represented only 2.4% of the attacking force, a tiny number that was easily replaceable by the Luftwaffe.

### Baedeker Blitz

Arthur Harris was appointed Air Officer Commanding-in-Chief of RAF Bomber Command on 22 February 1942, and immediately set about implementing his plan to destroy Germany through dehousing. As part of their move to area attacks, on the night of 28 March a force dropped explosives and incendiaries on Lübeck, causing massive damage. Adolf Hitler and other Nazi leaders were enraged, and ordered retaliation. On the night of 23 April 1942, a small raid was made against Exeter, followed the next day by a pronouncement by Gustaf Braun von Stumm that they would destroy every location found in the Baedeker tourist guides that was awarded three stars. Raids of ever-increasing size followed over the next week, in what became known in the UK as the Baedeker Blitz. This first series of raids ended in early May. When Cologne was greatly damaged during the first 1,000-bomber raid, the Luftwaffe returned for another week of raids between 31 May and 6 June.

The first raids came as a surprise and were met by ineffective responses. On the first raid a Beaufighter from 604 Squadron shot down a single bomber, while the next three raids resulted in no kills, and the next a single kill again. But as the pattern of the attacks grew more obvious—short attacks against smaller coastal cities—the defence responded. Four bombers were shot down on the night of 3/4 May, two more on the 7/8th, one on the 18th, and two on the 23rd. The Luftwaffe changed their tactics as well; their bombers would approach at low altitude, climb to spot the target, and then dive again after releasing their bombs. This meant that interceptions with the Mk. IV were possible only during the bomb run.

In the end, the Baedeker raids failed to cause any reduction in the RAF's raids over Germany. Civilian losses were considerable, with 1,637 killed, 1,760 injured, and 50,000 homes destroyed or damaged. In comparison to The Blitz this was relatively minor; 30,000 civilians were killed and 50,000 injured by the end of that campaign. Luftwaffe losses were 40 bombers and 150 aircrew. Although the night fighters were not particularly successful, accounting for perhaps 22 aircraft from late April to the end of June, their shortcomings were on the way to being addressed.

### AIS, replacement

The Airborne Group had been experimenting with microwave systems as early as 1938 after discovering that a suitable arrangement of the acorn tubes could be operated at wavelengths as short as 30 cm. However, these had very low output, and operated well within the region of reduced sensitivity on the receiver side, so detection ranges were very short. The group gave up on further development for the time being. Development continued largely at the urging of the Admiralty, who saw it as a solution to detecting the conning towers of partially submerged U-boats. After a visit by Tizard to GEC's Hirst Research Centre in Wembley in November 1939, and a follow-up visit by Watt, the company took up the work and had developed a working 25 cm set using modified VT90s by the summer of 1940. With this success, Lovell and a new addition to the Airborne Group, Alan Lloyd Hodgkin, began experimenting with horn-type antennas that would offer significantly higher angular accuracy.
Instead of broadcasting the radar signal across the entire forward hemisphere of the aircraft and listening to echoes from everywhere in that volume, this system would allow the radar to be used like a flashlight, pointed in the direction of observation. This would greatly increase the amount of energy falling on a target, and improve detection capability.

On 21 February 1940, John Randall and Harry Boot first ran their cavity magnetron at 10 cm (3 GHz). In April, GEC was told of their work and asked if they could improve the design. They introduced new sealing methods and an improved cathode, delivering two examples capable of generating 10 kW of power at 10 cm, an order of magnitude better than any existing microwave device. At this wavelength, a half-wave dipole antenna was only a few centimetres long, which allowed Lovell's team to begin looking at parabolic reflectors, producing a beam only 5 degrees wide. This had the enormous advantage of avoiding ground reflections by simply not pointing the antenna downwards, allowing the fighter to see any target at its altitude or above it.

Through this period, Rowe finally concluded that Dundee was unsuitable for any of the researchers, and decided to move again. This time he selected Worth Matravers on the southern coast, where all of the radar teams could once again work together. Due to confused timing and better planning on the part of the AI team, they arrived at Worth Matravers from St Athan before the long convoy from Dundee could make its way south. This caused a traffic jam that further upset Rowe. Nevertheless, everything was set up by the end of May 1940, with the AI team working primarily from huts south of Worth Matravers, and carrying out installations at a nearby airfield. With this move the entire group became the Ministry of Aircraft Production Research Establishment (MAPRE), only to be renamed again as the Telecommunications Research Establishment (TRE) in November 1940.

Soon after the move, Rowe formed a new group under Herbert Skinner to develop the magnetron into an AI system, at that time known as AI, Sentimetric (AIS). Lovell adapted his parabolic antennas to the magnetron with relative ease, and the AIS team immediately detected a passing aircraft when they turned on the set for the first time on 12 August 1940. The next day they were asked to demonstrate the set for managers, but no aircraft happened to be flying by. Instead, they had one of the workers bicycle along a nearby cliff carrying a small plate of aluminium sheet. This neatly demonstrated its ability to detect objects very close to the ground.

As AIS rapidly developed into the AI Mk. VII, development of the Mk. IV's follow-ons, the Mk. V and Mk. VI (see below), saw vacillating support. Considerable additional development of AIS was required, with the first production version arriving in February 1942, and subsequently requiring an extended period of installation development and testing. The first kill by a Mk. VII set was on the night of 5/6 June 1942.

### Serrate

As microwave systems entered service, along with updated versions of aircraft carrying them, the problem arose of what to do with those aircraft carrying Mk. IV that were otherwise serviceable. One possibility, suggested as early as 1942, was homing in on the Luftwaffe's own radar sets. The basic operational frequencies of the Luftwaffe's counterpart to the Mk. IV, the FuG 202 Lichtenstein BC radar, had been discovered in December 1942.
On 3 April 1943 the Air Interception Committee ordered the TRE to begin considering the homing concept under the codename Serrate. As luck would have it, this proved to be perfect timing. In the late afternoon of 9 May 1943, a crew from IV/NJG.3 defected to the UK by flying their fully equipped Ju 88R-1 night fighter, D5+EV, to RAF Dyce in Scotland, giving the TRE their first direct look at the Lichtenstein.

The antenna array of the original Mk. IV was limited by practical factors to be somewhat shorter than the 75 cm that would be ideal for their 1.5 m signals. Lichtenstein operated at 75 cm, making the Mk. IV's antennas almost perfectly suited to pick them up. Sending the signals through the existing motorized switch to a new receiver tuned to the Lichtenstein's frequency produced a display very similar to the one created by the Mk. IV's own transmissions. However, the signal no longer had to travel from the RAF fighter and back again; instead, the signals would only have to travel from the German aircraft to the fighter. According to the radar equation this made the system eight times as sensitive, and it displayed its ability to track enemy fighters at ranges as great as 50 miles (80 km).

Homing on the enemy's broadcasts meant that there was no accurate way to calculate the range to the target; radar ranging measurements are based on timing the delay between broadcast and reception, and there was no way to know when the enemy's signal was originally broadcast. This meant that the homing device could only be used for the initial tracking, and the final approach would have to be carried out by radar. The extra range of the Mk. VIII was not required in this role as Serrate would bring the fighter within easy tracking range, and the loss of a Mk. IV would not reveal the secret of the magnetron to the Germans. For these reasons, the Mk. IV was considered superior to the newer radars for this role, in spite of any technical advantages of the newer designs.

Serrate was first fitted to Beaufighter Mk. VIF aircraft of No. 141 Squadron RAF in June 1943. They began operations using Serrate on the night of 14 June, and by 7 September had claimed 14 German fighters shot down, for 3 losses. The squadron was later handed to No. 100 Group RAF, who handled special operations within Bomber Command including jamming and similar efforts. In spite of their successes, it was clear that the Beaufighter lacked the speed needed to catch the German aircraft, and Mosquitoes began to replace them late in 1943.

The Germans became aware of their losses to night fighters, and began a rush programme to introduce a new radar operating on different frequencies. This led to the lower-VHF band FuG 220 Lichtenstein SN-2, which began to reach operational units in small numbers between August and October 1943, with about 50 units in use by November. In February 1944, No. 80 Wing noticed a marked decrease in FuG 202 transmissions. By this time the Germans had produced 200 SN-2 sets, and this had reached 1,000 by May. This set deliberately selected a frequency close to that of their ground-based Freya radar sets, in the hopes that these sources would swamp any wide-band receiver set used on RAF aircraft. Early Serrate units were effectively useless by June 1944, and their replacements were never as successful.

## Further development

### Mk. IVA and Mk. V

Experience demonstrated that the final approach to the target required fast action, too quick for the radar operator to easily communicate corrections to the pilot.
In 1940, Hanbury Brown wrote a paper On Obtaining Visuals from AI Contacts which demonstrated mathematically that the time delays inherent to the interception system were seriously upsetting the approach. In the short term he suggested the fighters make their approach to dead astern while still 2,500 feet (760 m) out, and then fly straight in. For the longer term, he suggested adding a pilot's indicator that directly displayed the direction needed to intercept.

This led to Hanbury Brown's work on the Mk. IVA, which differed from the Mk. IV primarily by having an additional display unit in front of the pilot. The radar operator had an additional control, the strobe, which could be adjusted to pick out returns at a particular range. Only those returns were sent to the pilot's display, resulting in much less clutter. Unlike the operator's display, the pilot's showed the target's location as a single dot in a boresight-like fashion; if the dot was above and to the right of the centre of the display, the pilot had to turn to the right and climb to intercept. The result was what was known as a flying spot indicator, a single selected target showing a direct indication of the target's relative position.

Tests were carried out starting in October 1940, and quickly demonstrated a number of minor problems. One of the minor issues was that the crosshairs on the tube that indicated the centre would block the spot. A more serious concern was the lack of range information, which the FIU pilots considered critical. Hanbury Brown went to work on these issues, and returned an updated version in December. A U-shaped reticle in the centre of the display provided a centring mark that left the spot visible. Additionally, the circuitry included a second timebase that produced a signal that grew longer as the fighter approached its target. The output was timed so the line was centred horizontally on the dot. This presented the range in an easily understandable fashion; the line looked like the wings of an aircraft, which naturally grew larger as the fighter approached. The U-shaped centring post was sized so the tips of the U were the same width as the range indication line when the target was at 2,500 feet (0.76 km), which indicated that the pilot should throttle back and begin his final approach. Two vertical lines to the sides of the display, the goal posts, indicated that the target was 1,000 feet (300 m) ahead and it was time to look up to see it. Two smaller lines indicated a range of 500 feet (150 m), at which point the pilot should have seen the target, or had to break away to avoid collision.

At a meeting on 30 December 1940, it was decided to begin limited production of the new indicators as an add-on unit for existing Mk. IV systems, creating the AI Mk. IVA. The first examples arrived in January 1941, with more units from ADEE and Dynatron following in early February. Hanbury Brown's involvement with AI came to an abrupt end during testing of the new unit. During a flight in February 1941 at 20,000 feet (6.1 km) his oxygen supply failed and he suddenly awoke in an ambulance on the ground. He was no longer allowed to fly on tests, and moved to working on radar beacon systems.

Continued work revealed a number of minor problems, and the decision was made to introduce a redesigned unit with significant improvements in packaging, insulation, and other practical changes. This would become the AI Mk. V, which began to arrive from Pye in late February and immediately demonstrated a host of problems.
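For illustration, the pilot's-indicator logic described above can be sketched as a small routine that turns the selected contact into a spot position, a range "wing" length and the corresponding cue; the scaling constants and the wing-length law are invented, and only the 2,500, 1,000 and 500 ft cues come from the text:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    azimuth_deg: float     # positive to the right of the fighter's nose
    elevation_deg: float   # positive above the fighter's nose
    range_ft: float

def pilot_indicator(c: Contact):
    """Toy rendering of the Mk. IVA flying-spot display: the dot's offset tells the
    pilot which way to turn, the 'wings' grow as the range closes, and the fixed
    marks give the 2,500 / 1,000 / 500 ft cues described in the text."""
    scale = 1.0 / 30.0                                    # degrees to display units (assumed)
    spot = (c.azimuth_deg * scale, c.elevation_deg * scale)
    wing_length = max(0.0, 1.0 - c.range_ft / 20_000.0)   # assumed law: longer as range falls
    if c.range_ft <= 500:
        cue = "500 ft marks: target should be visible, otherwise break away"
    elif c.range_ft <= 1_000:
        cue = "goal posts: target 1,000 ft ahead, look up"
    elif c.range_ft <= 2_500:
        cue = "wings match the U reticle: throttle back, begin final approach"
    else:
        cue = "close the range"
    return spot, wing_length, cue

print(pilot_indicator(Contact(azimuth_deg=9.0, elevation_deg=-3.0, range_ft=2_400)))
```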
By this time the microwave units were being designed, and the Mk. V was almost cancelled. A contract for over 1,000 units was allowed to continue in case of delays in the new units. By May the issues with the Pye design were ironed out, and the FIU's testing revealed it to be superior to the Mk. IV, especially in terms of maintenance. An RAE report agreed. The first updated Mk. V sets arrived in April 1942 and were fitted to the de Havilland Mosquito as they became available. A Mk. V equipped Mosquito claimed its first kill on 24/25 June, when a Mosquito NF.II from No. 151 Squadron shot down a Dornier Do 217E-4 over the North Sea. In practice it was found that pilots had considerable difficulty looking up from the display at the last minute, and the system was used only experimentally. By this time the microwave units had started to arrive in small numbers, so Mk. V production was repeatedly delayed pending their arrival, and eventually cancelled. Starting in the summer of 1942 the TRE development team began experimenting with systems to project the display onto the windscreen, and by October had combined this with an image of the existing GGS Mk. II gyro gunsight to produce a true head-up display known as the Automatic Pilot's Indicator, or API. A single example was fitted to a Beaufighter and tested through October, and numerous modifications and follow-on examples were trialled over the next year. ### Mk. VI As AI began to prove itself through early 1940 the RAF realised that the radar supply would soon outstrip the number of suitable aircraft available. With large numbers of single-engine single-seat aircraft already in the night fighter units, some way to fit these with radar was desired. The Air Ministry formed the AI Mk. VI Design Committee to study this in the summer of 1940. The resulting AI Mk. VI design was essentially a Mk. IVA with an additional system that automatically set the strobe range. With no target visible, the system moved the strobe from its minimum setting to a maximum range of about 6 miles (9.7 km) and then started over at the minimum again. This process took about four seconds. If a target was seen, the strobe would stick to it, allowing the pilot to approach the target using his C-scope. The pilot would fly under ground control until the target suddenly appeared on his pilot indicator, and then intercept it. A prototype of the automatic strobe unit was produced in October, along with a new Mk. IVA-like radar unit with a manual strobe for testing. EMI was then asked to provide another breadboard prototype of the strobe unit for air testing, which was delivered on 12 October. A raft of problems were found and addressed. Among these, it was found that the strobe would often stick to the ground reflection, and when it did not, would not stick until it had a strong signal at shorter ranges, or might stick to the wrong target. Eventually a panacea button was added to unstick the strobe in these cases. As the Mk. IVA was modified into its improved Mk. V, the Mk. VI followed suit. But by early 1941 it was decided to make the Mk. VI an entirely new design, to more easily fit in small aircraft. EMI had already been awarded a contract for a dozen prototype units in October 1940 for delivery in February, but these continued changes made this impossible. Nevertheless, they presented a production contract for 1,500 units in December. 
Between December and March, production examples began arriving and displayed an enormous number of problems, which the engineers worked through one by one. By July the systems were ready for use, and began being installed in the new Defiant Mk. II early in August, but these demonstrated a problem where the system would lock on to transmissions from other AI aircraft in the area, which resulted in further modifications. It was not until the beginning of December 1941 that these issues were fully solved and the units were cleared for squadron use. By this point, supplies of the Beaufighter and the new Mosquito had improved dramatically, and the decision was made to remove all single-engine designs from the night fighter force during 1942. Two Defiant units did switch to the Mk. VI, but they operated for only about four months before converting to the Mosquito. Production for the AI role ended, and the electronics were converted to Monica tail warning radars for the bomber force, a role they filled until mid-1944, when the British learned of the Germans' Flensburg radar detector, which detected Monica transmissions. The Mk. VI had a brief overseas career. One of the early units was experimentally fitted to a Hurricane Mk. IIc, and this led to the production of a single flight of such conversions starting in July 1942. These conversions were given such a low priority that they were not complete until the spring of 1943. Some of these aircraft were sent to Calcutta where they claimed a number of Japanese bombers. An experimental fit on Hawker Typhoon IA R7881 was carried out, with the system packed into a standard underwing drop tank. This was available in March 1943 and underwent lengthy trials lasting into 1944, but nothing came of this work. ## Description The Mk. IV was a complex lash-up of systems, known collectively in the RAF as the Airborne Radio Installation 5003 (ARI 5003). Individual parts included the R3066 or R3102 receiver, T3065 transmitter, Modulator Type 20, Transmitter Aerial Type 19, Elevation Aerial Type 25, Azimuth Aerial Type 21 and 25, Impedance Matching Unit Type 35, Voltage Control Panel Type 3, and Indicator Unit Type 20 or 48. ### Antenna layout As the Mk. IV system worked on a single frequency, it naturally lent itself towards the Yagi antenna design, which had been brought to the UK when the Japanese patents were sold to the Marconi Company. "Yagi" Walters developed a system for AI use using five Yagi antennas. Transmissions took place from a single arrowhead antenna mounted on the nose of the aircraft. This consisted of a folded dipole with a passive director in front of it, both bent rearward at about 35 degrees, projecting from the nosecone on a mounting rod. For vertical reception, the receiver antennas consisted of two half-wave unipoles mounted above and below the wing, with a reflector behind them. The wing acted as a signal barrier, allowing the antennas to see only the portion of the sky above or below the wing as well as directly in front. These antennas were angled rearward at the same angle as the transmitter. The horizontal receivers and directors were mounted on rods projecting from the leading edge of the wing, the antennas aligned vertically. The fuselage and engine nacelles formed the barriers for these antennas. All four receiver antennas were connected via separate leads to a motorized switch that selected each one of the inputs in turn, sending it into the amplifier. The output was then switched, using the same system, to one of four inputs into the CRTs. 
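The motorised switching just described — one shared amplifier, four aerials and a matching set of display channels — can be pictured as a round-robin multiplexer. The following sketch is schematic only; the timing, gain figure and function names are assumptions rather than details of the ARI 5003 hardware.

```python
# Schematic sketch of the Mk. IV's motorised antenna switch: the four receiver
# aerials (up, down, left, right) share a single amplifier, and the amplified
# signal is routed to the corresponding half of the elevation or azimuth CRT.
# Switching is fast enough that the two displays appear continuous.

from itertools import cycle

CHANNELS = [
    ("up",    "elevation", "upper trace"),
    ("down",  "elevation", "lower trace"),
    ("left",  "azimuth",   "left trace"),
    ("right", "azimuth",   "right trace"),
]

def amplifier(sample):
    """Stand-in for the single shared receiver amplifier (assumed gain)."""
    return sample * 1000.0

def run_switch(antenna_samples, sweeps=8):
    """Cycle through the aerials, painting one trace per switch position."""
    display = {"elevation": {}, "azimuth": {}}
    for _, (antenna, tube, trace) in zip(range(sweeps), cycle(CHANNELS)):
        video = amplifier(antenna_samples[antenna])
        display[tube][trace] = video          # blip length on that trace
    return display

# Example: a target up and slightly left of the fighter gives a stronger
# signal on the upper and left aerials than on their opposites.
samples = {"up": 0.9, "down": 0.2, "left": 0.7, "right": 0.5}
print(run_switch(samples))
```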
The entire radar dipole aerial setup for the AI Mk. IV was simple in comparison to the 32-dipole Matratze (mattress) transceiving array fitted to the noses of the earliest German night fighters to use AI radar, for their own UHF-band Lichtenstein B/C airborne radar design from 1942 to 1943. ### Displays and interpretation The Mk. IV display system consisted of two 3-inch (7.6 cm) diameter cathode ray tubes connected to a common timebase generator normally set to cross the display in the time it would take to receive a signal from 20,000 feet (6.1 km). The displays were installed beside each other at the radar operator's station at the rear of the Beaufighter. The tube on the left showed the vertical situation (altitude) and the one on the right showed the horizontal situation (azimuth). The signal from each receiver antenna was sent to one of the channels of the displays in turn, causing one of the displays to refresh. For instance, at a given instant the switch might be set to send the signal to the left side of the azimuth display. The timebase generator was triggered to start sweeping the CRT dot up the screen after the transmission ended. Reflections would cause the dot to be deflected to the left, creating a blip whose vertical location could be measured against a scale to determine range. The switch would then move to the next position and cause the right-hand side of the display to be redrawn, but with the signal inverted so the dot moved to the right. The switching occurred fast enough that the display looked continuous. Because each antenna was aimed to be sensitive primarily in a single direction, the length of the blips depended on the position of the target relative to the fighter. For instance, a target located 35 degrees above the fighter would cause the signal in the upper vertical receiver to be maximized, causing a long blip to appear on the upper trace, and none on the lower trace. Although less sensitive directly forward, both vertical antennas could see directly in front of the fighter, so a target located dead ahead caused two slightly shorter blips, one on either side of the centreline. For interception, the radar operator had to compare the length of the blips on the displays. If the blip was slightly longer on the right than the left side of the azimuth display, for instance, he would instruct the pilot to turn right in an effort to centre the target. Interceptions normally resulted in a stream of left/right and up/down corrections while reading out the (hopefully) decreasing range. The trailing edge of the transmitter pulse was not perfectly sharp and caused the receiver signals to ring for a short time even if they were turned on after the pulse was ostensibly complete. This leftover signal caused a large permanent blip known as the transmitter breakthrough, which appeared at the short-range end of the tubes (left and bottom). A control known as the Oscillator Bias allowed the exact timing of the receiver's activation relative to the transmitter pulse to be adjusted, normally so the remains of the pulse were just visible. Due to the wide pattern of the transmission antenna, some of the signal always hit the ground, reflecting some of it back at the aircraft to cause a ground return. This was so powerful that it was received on all of the antennas, even the upper vertical receiver which would otherwise be hidden from signals below it. 
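The operator's task set out in this section — judging direction by comparing the blip lengths on opposite traces, with the ground return present on every trace — can be pictured with a short model. The cosine lobe shape, the 30-degree lobe centres and the other numbers below are assumptions chosen for illustration, not measured characteristics of the Mk. IV aerials.

```python
# Illustrative model of reading the Mk. IV operator's displays: blip length on
# each trace depends on how far the target sits into that aerial's lobe, the
# ground return appears on every trace at a range equal to the aircraft's
# altitude, and the operator calls corrections by comparing opposite traces.
# The cosine lobe model and thresholds are assumptions for illustration.

import math

def blip_length(offset_deg, lobe_centre_deg, rng_ft, max_rng_ft=20000.0):
    """Assumed lobe model: response falls off as the target leaves the lobe."""
    gain = max(0.0, math.cos(math.radians(offset_deg - lobe_centre_deg)))
    return gain * max(0.0, 1.0 - rng_ft / max_rng_ft)   # weaker blips at long range

def operator_view(az_deg, el_deg, rng_ft, altitude_ft):
    traces = {
        "upper": blip_length(el_deg,  30.0, rng_ft),
        "lower": blip_length(el_deg, -30.0, rng_ft),
        "left":  blip_length(az_deg, -30.0, rng_ft),
        "right": blip_length(az_deg,  30.0, rng_ft),
    }
    # Ground return: a strong blip on every trace at the altitude range.
    ground = {trace: (altitude_ft, 1.0) for trace in traces}

    # Steering calls: compare opposite traces, as the operator did by eye.
    call = []
    call.append("climb" if traces["upper"] > traces["lower"] else "descend")
    call.append("turn left" if traces["left"] > traces["right"] else "turn right")
    return traces, ground, call

# Example: target 10 degrees right and 5 degrees up at 8,000 ft,
# fighter flying at 15,000 ft (so the ground return sits at 15,000 ft range).
print(operator_view(az_deg=10.0, el_deg=5.0, rng_ft=8000.0, altitude_ft=15000.0))
```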
As the reflections from directly below the aircraft travelled the shortest distance, and thus produced the strongest signal, a strong blip appeared across all the displays at a range equal to the fighter's altitude. The ground further in front of the aircraft also caused returns, but these were increasingly distant (see slant range) and only some of the signal was reflected back at the aircraft while an increasing portion was scattered forward and away. Ground returns at further distances were thus smaller, resulting in a roughly triangular series of lines at the top or right side of the displays, known as the "Christmas tree effect", beyond which it was not possible to see targets. ### Serrate operation Serrate used most of the Mk. IV equipment, replacing only the receiver unit. This could be switched in or out of the circuit from the cockpit, which turned off the transmitter as well. In a typical interception, the radar operator would use Serrate to track the German fighter, using the directional cues from the displays to direct the pilot on an intercept course. Range was not supplied, but the operator could make a rough estimate by observing the signal strength and the way the signals changed as the fighter maneuvered. After following Serrate to an estimated range of 6,000 feet (1.8 km), the fighter's own radar would be turned on for the final approach. ### IFF use Starting in 1940, British aircraft were increasingly equipped with the IFF Mk. II system, which allowed radar operators to determine whether a blip on their screen was a friendly aircraft. IFF was a responder that sent out a pulse of radio signal immediately on reception of a radio signal from a radar system. The IFF's transmission mixed with the radar's own pulse, causing the blip to stretch out in time from a small peak to an extended rectangular shape. The rapid introduction of new types of radars working on different frequencies meant the IFF system had to respond to an ever-increasing list of signals, and the direct response of the Mk. II required an ever-increasing number of sub-models, each tuned to different frequencies. By 1941 it was clear that this was going to grow without bound, and a new solution was needed. The result was a new series of IFF units which used the indirect interrogation technique. These operated on a fixed frequency, different from the radar. The interrogation signal was sent from the aircraft by pressing a button on the radar, which caused the signal to be sent out in pulses synchronized to the radar's main signal. The received signal was amplified and mixed into the same video signal as the radar, causing the same extended blip to appear. ### Homing systems Transponder systems used on the ground provided the ability to home in on the transponder's location, a technique that was widely used with the Mk. IV, as well as many other AI and ASV radar systems. Homing transponders were similar to IFF systems in general terms, but used shorter pulses. When a signal was received from the radar, the transponder responded with a short pulse on the same frequency; as the original radar pulse would not be reflected, there was no need to lengthen the signal as in the case of IFF. The pulse was sent to the Mk. IV's display and appeared as a sharp blip. 
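Because the beacon reply was a deliberate transmission on the radar's own frequency rather than a faint echo, its range still followed directly from the round-trip timing, exactly as for an aircraft blip. A minimal sketch of that relationship follows; the way the timebase limit is handled here is an illustration, not a description of the actual circuitry.

```python
# Minimal sketch of ranging on a ground transponder with the Mk. IV: the reply
# is a short pulse on the radar's own frequency, so its position on the
# timebase gives range exactly as for an aircraft echo.

C_FT_PER_US = 983.6                 # speed of light, feet per microsecond

def echo_range_ft(round_trip_us):
    """Range from the round-trip time of the pulse (out and back)."""
    return round_trip_us * C_FT_PER_US / 2.0

def on_timebase(range_ft, timebase_ft=20000.0):
    """Fractional position of the blip on the sweep (None = off the end)."""
    return range_ft / timebase_ft if range_ft <= timebase_ft else None

# A reply 30 microseconds after transmission sits on the normal 20,000 ft sweep;
# one 600 microseconds later (~56 miles away) needs a much longer timebase.
print(round(echo_range_ft(30.0)), on_timebase(echo_range_ft(30.0)))
print(round(echo_range_ft(600.0)), on_timebase(echo_range_ft(600.0)))
```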
Depending on the location of the transponder relative to the aircraft, the blip would be longer on the left or right of the azimuth display, allowing the operator to guide the aircraft to the transponder using exactly the same methods as a conventional aircraft intercept. Because the transponder was located on the ground, the receiver antenna with the best view of it was the one mounted under the wing. The radar operator would normally pick up the signal on the lower side of the elevation display, even at very long distances. Since the signal from the beacon was quite powerful, the Mk. IV included a switch that set the timebase to 60 miles (97 km) for long-distance pickup. Once they approached the general area, the signal would be strong enough to begin to appear on the azimuth (left-right) tube. ### BABS Another system used with the Mk. IV was the Beam-Approach Beacon System, or BABS, which indicated the runway centreline. The general concept pre-dated the Mk. IV and was essentially a UK version of the German Lorenz beam system. Lorenz, or Standard Beam Approach as it was known in the UK, used a single transmitter located off the far end of the active runway that was alternately connected to one of two slightly directional antennas using a motorized switch. The antennas were aimed so they sent their signals to the left and right of the runway, but their signals overlapped down the centreline. The switch spent 0.2 seconds connected to the left antenna (as seen from the aircraft) and then 1 second on the right. To use Lorenz, a conventional radio was tuned to the transmission, and the operator would listen for the signal and try to determine if they heard dots or dashes. If they heard dots, the short 0.2 s pulse, they would know they were too far to the left, and would turn to the right in order to reach the centreline. Dashes indicated they should turn left. In the centre the receiver could hear both signals, which merged to form a steady tone, the equisignal. For BABS, the only change was that the transmissions consisted of a series of short pulses rather than a continuous signal. These pulses were sent out when triggered by the AI radar's signals and were powerful enough that they could be picked up by the Mk. IV receiver within a few miles. On reception, the Mk. IV would receive either the dots or dashes, and the operator would see an alternating series of blips centred in the display, popping out and then disappearing as the BABS antennas switched. The duration of the blip indicated whether the aircraft was to the left or right, and became a continuous blip on the centreline. This technique was known as AI beam approach (AIBA). Because it was based on the same basic equipment as the original Mk. IV AI, BABS could also be used with the Rebecca equipment, originally developed to home on ground transponders for dropping supplies over occupied Europe. The later Lucero unit was essentially an adapter for a Rebecca receiver, mating it to any existing display: AI, ASV, or H2S. ## See also - Air warfare of World War II - European theatre of World War II - History of radar - Turbinlite - Air Ministry Experimental Station (AMES)
12,888,768
Toronto Magnetic and Meteorological Observatory
1,080,842,161
Observatory in Toronto, Ontario, Canada
[ "Astronomical observatories in Canada", "Geophysical observatories", "Meteorological observatories", "Relocated buildings and structures in Canada", "University of Toronto buildings" ]
The Toronto Magnetic and Meteorological Observatory is a historical observatory located on the grounds of the University of Toronto, in Toronto, Ontario, Canada. The original building was constructed in 1840 as part of a worldwide research project run by Edward Sabine to determine the cause of fluctuations in magnetic declination. Measurements from the Toronto site demonstrated that sunspots were responsible for this effect on Earth's magnetic field. When this project concluded in 1853, the observatory was greatly expanded by the Canadian government and served as the country's primary meteorological station and official timekeeper for over fifty years. The observatory is considered the birthplace of Canadian astronomy. ## Sabine's study Compasses tended to "wander" from north when measurements were taken at different locations or even at a single location over a period of time. The astronomer Edmund Halley noted this and the problems it would cause for navigation in 1701. It was also believed that whatever was causing this effect might be causing changes in the weather, and that studying magnetic variations might lead to better weather prediction. In 1833 the British Association for the Advancement of Science commissioned a series of magnetic measurements across the United Kingdom. Under the direction of Major Edward Sabine of the Royal Artillery, a multi-year measuring project began, with the results to be published in 1838. As the measurements were being made a number of proposals were put forth to expand the program worldwide. In 1836 the German explorer and naturalist Alexander von Humboldt wrote to Prince Augustus Frederick, Duke of Sussex, then President of the Royal Society, stating that a formal program was important to a nation with dominions spread across the globe. At the seventh meeting of the British Association in Liverpool in 1837, Sabine declared that "the magnetism of the earth cannot be counted less than one of the most important branches of the physical history of the planet we inhabit" and mapping its variations would be "regarded by our contemporaries and by posterity as a fitting enterprise of a maritime people; and a worthy achievement of a nation which has ever sought to rank foremost in every arduous undertaking". In 1837, the British Government funded the installation of a magnetic observatory at Greenwich. The Association continued to press for the construction of similar observatories around the world, and in 1838 their suggestions were accepted by the Government and funds were provided. In 1839 the British Government and the Royal Society prepared four expeditions to build magnetic observation stations in Cape Town; St. Helena; Hobart, Tasmania and (eventually) Toronto, Ontario. Teams of Royal Artillery officers were sent out to take the measurements. The team assigned to Canada originally planned to build their observatory on Saint Helen's Island off Montreal, but the local rocks proved to have a high magnetic influence, and the decision was made to move to Toronto instead. The team arrived in 1839, and set up camp at Fort York in a disused barracks while construction started on new buildings. The observatory was given 10 acres (4.0 ha) of land to the west of King's College; the Ontario Legislature now occupies the area on which the college was located. The observatory, officially "Her Majesty's Magnetical and Meteorological Observatory at Toronto", was completed the following year. 
It consisted of two log buildings, one for the magnetic instruments and the other a smaller semi-buried building nearby for "experimental determinations". The north end of the main building was connected to a small conical dome which contained a theodolite used to make astronomical measurements for the accurate determination of the local time. The buildings were constructed with as little metal as possible; when metal was required, non-magnetic materials such as brass or copper were used. A small barracks was built nearby to house the crew. Using the measurements from the Toronto and Hobart sites, Sabine noticed both short-term fluctuations in magnetic declination over a period of hours, and longer-term variations over months. He quickly concluded that the short-term variations were due to the day/night cycle, while the longer-term ones were due to the number of visible sunspots. He published two introductory papers on the topic in the Philosophical Transactions of the Royal Society. The first, in 1851, was a collection of early measurements; the second, in 1852, correlated the magnetic variations with Heinrich Schwabe's sunspot measurements, which had been made widely available in Alexander von Humboldt's Cosmos, also published in 1851. With further data collected from the Toronto site, Sabine was able to demonstrate conclusively that the eleven-year sunspot cycle caused a similarly periodic variation in the Earth's magnetic field. He presented a third and conclusive paper on the topic in 1856, "On Periodical Laws Discoverable in the Mean Effects of the Larger Magnetic Disturbances", in which he singled out the Toronto site for particular praise. Sir John Henry Lefroy, a pioneer in the study of terrestrial magnetism, served as director of the magnetic observatory from 1842 to 1853; in 1960, the Ontario Heritage Foundation, Ministry of Citizenship and Culture, erected a Provincial Military Plaque in his honour on the University of Toronto campus. ## Meteorological service In 1853 the Royal Society's project was concluded, and the observatory was set to be abandoned. After a lengthy debate, the fledgling colonial government decided to take over its operation. Rather than disappearing like its three counterparts, the Toronto observatory was upgraded, and its mission was expanded as it became a meteorological station (see Meteorological Service of Canada) under the direction of the Ministry of Marine and Fisheries. During the expansion, the original buildings were replaced with a permanent structure. The new building was designed in 1853 by local architect Frederick Cumberland, who was also working on the design of University College, which was being built just north of the Observatory to replace King's College. The new observatory design called for a stone building, with an attached tower containing the theodolite. The new building was completed in 1855, and stood directly opposite the entrance of today's Convocation Hall. During its time as a meteorological station, the observatory collected reports from 312 observation stations in Canada and another 36 in the United States. Each station was equipped with a "Mercurial Barometer, two Thermometers (a maximum and a minimum Thermometer), an Anemometer to measure the velocity of the wind, a Wind Vane and a Rain Gauge". Reports were sent in coded form to the Observatory at 8 am and 8 pm every day, Eastern Standard Time (then known as "75th meridian time"), and used to produce a chart predicting the weather for the following 36 hours. 
These predictions were then telegraphed across the country, and charts were distributed to newspapers and the Board of Trade, where they could be viewed by the public. With the installation of telephones, the Observatory also offered weather reports on demand, which was an important service to fruit vendors, who used the reports to plan shipping. Among its other uses, in 1880, measurements from the site were used as part of the effort to develop standard time. The observatory remained the official timekeeper for Canada until 1905, when that responsibility was transferred to Ottawa's Dominion Observatory. At exactly 11:55 am the clocks in Toronto fire halls were rung by an electrical signal from the Observatory. In 1881 the observatory's director, Charles Carpmael, suggested adding a high-quality telescope to the observatory. He felt that direct solar observations would lead to a better understanding of sunspot effects on weather (as late as 1910 the observatory's then-director, Robert Frederic Stupart, noted that "sun spots have more to do with our weather conditions than have the rings around the moon."). Coincidentally, the Canadian government (having formed in 1867) was interested in taking part in the major international effort to accurately record the December 1882 Transit of Venus. Funds were provided for the purchase of a 6-inch (150 mm) refracting telescope from T. Cooke & Sons. The dome was originally designed to mount a small transit, and the lengthy telescope, over 2 metres long, had a limited field of view through the dome's opening. A large stone pillar was constructed inside the tower, raising the telescope to bring it closer to the dome and improve its field of view. Unfortunately, the new telescope was unable to take part in the transit measurements due to bad weather, and missed the 1895 Transit of Mercury for the same reason. ## Relocation By the 1890s, the observatory had become crowded by the rapidly growing university. Electrification of the tramways along College Street just to the south, and the large quantities of metal used in the modern buildings surrounding the site, threw off the instruments. A new magnetic observatory opened in 1898 in Agincourt, at that time largely empty fields (found on later maps at the north end of the George Forfar farm east of Midland Avenue near Highway 401, where the Health Canada Protection Branch building resides today), leaving the downtown campus location with its meteorological and solar observation duties. By 1907, new university buildings completely surrounded the observatory; dust from the construction clogged meteorological instruments, and at night electric lighting made astronomical work impossible. The Meteorological Office decided to abandon the site and move to a new building at the north end of campus at 315 Bloor Street West, trading the original Observatory to the University in exchange for the new parcel of land. There was some discussion regarding what to do with the Cooke telescope, since the Meteorological Office had little use for this purely astronomical instrument. No other use was immediately forthcoming, and the telescope moved along with the Meteorological Office to their new Bloor Street Observatory. The university assumed ownership of the now-disused observatory building and was originally going to abandon it. Louis Beaufort Stewart, a lecturer in the Faculty of Applied Science and Engineering, campaigned for it to be saved for the Department of Surveying and Geodesy. 
He eventually arranged for the building to be reconstructed on a more suitable site. Demolition work was carried out in 1907: the stones were simply left in place over the winter, and were used the following year to construct a rearranged building just east of the main University College building (south of Hart House). By 1930 the Meteorological Office no longer used the Cooke telescope, and agreed to donate it to the university if they would handle its removal. Both the telescope and the observatory dome were moved to the observatory building. The telescope moved once again in 1952 to the David Dunlap Observatory north of the city, and in 1984 it was donated to the Canada Science and Technology Museum. The Department of Surveying and Geodesy used the observatory until the 1950s. Since then the office areas have been used for a variety of purposes, including a police substation and a telephone switchboard. Renamed as the Louis Beaufort Stewart Observatory, the building was handed over to the Students' Administrative Council (now University of Toronto Students' Union) in 1953, which has used the building since then. The dome, now unused, receives a yearly multi-colour paint job by engineering students. ## Heritage The property has been listed on the City of Toronto's Heritage Register since 1973. The listing notes it was opened as an observatory in 1857, designed by Cumberland and Storm.
73,089,641
A History of British Fishes
1,158,466,855
1835–1836 book by William Yarrell
[ "1836 non-fiction books", "Biology in the United Kingdom", "Fauna of the United Kingdom", "Ichthyological literature", "Natural history books", "Woodcuts" ]
A History of British Fishes is a natural history book by William Yarrell, serialised in nineteen parts from 1835, and then published bound in two volumes in 1836. It is a handbook or field guide systematically describing every type of fish found in the British Isles, with an article for each species. Yarrell was a London bookseller and newsagent with the time and income to indulge his interest in natural history. He was a prominent member of several natural history societies and knew most of the leading British naturalists of his day. He was able to draw on his own extensive library and collection of specimens, his wide network of like-minded naturalist friends, and his access to major libraries to garner material for his writings, the most important of which were A History of British Fishes and the 1843 A History of British Birds. A History of British Fishes followed the example of Thomas Bewick's natural history books in its combination of up-to-date scientific data, accurate illustrations, detailed descriptions and varied anecdotes. The wood engraving illustrations were drawn by Alexander Fussell and engraved by John Thompson; three editions and their two supplements were published by John Van Voorst's company, based in Paternoster Row, London. Yarrell died in 1856, and the third edition was produced posthumously. The work was a commercial success and became the standard reference work for a generation of British ichthyologists. Yarrell's name is commemorated in eight species, three of which are fish, and in the lightfish genus Yarrella. ## Author William Yarrell (1784–1856) was the son of Francis Yarrell and his wife Sarah. William's father and his cousin William Jones were partners as booksellers and newsagents in London. William joined the business in 1803 after leaving school, and inherited the company in 1850. Yarrell had the free time and income to indulge his hobbies of shooting and fishing, and started to show an interest in rare birds, sending some specimens to the engraver and author Thomas Bewick. He became a keen student of natural history and collector of birds, fish, and other wildlife, and by 1825 he had a substantial collection. He was active in the London learned societies, and held senior posts in several for many years. He was treasurer of the Linnean Society from May 1849 until his death in 1856, vice president of the Zoological Society of London from 1839 to 1851, treasurer of the Royal Entomological Society from 1834 to 1852, and was also on the Council of the Medico-Botanical Society. He knew many of the leading naturalists of his day, which helped him in the production of his books and articles, notably A History of British Fishes and his 1843 A History of British Birds. ## Background ### Written sources Interest in natural history was growing rapidly in the early nineteenth century, and several writers sought to provide definitive lists of species found in Britain, with descriptions and other pertinent information. When Yarrell came to tackle the fish, written sources were limited. Edward Donovan's The Natural History of British Fishes (1802–1808) was the only reasonably recent specialist book, although Thomas Pennant's British Zoology (1812) and Bewick's A Natural History of British Quadrupeds (1808) were among other publications that covered some British fish. 
The most notable foreign sources were the Histoire naturelle des poissons (1828–1831) by Baron Georges Cuvier and Achille Valenciennes, which contained descriptions of five thousand species of fishes, and Marcus Elieser Bloch's beautifully illustrated twelve-volume Allgemeine Naturgeschichte der Fische (1782–1795). The French book was important because Cuvier and Valenciennes had grouped similar species together, providing a logical order to their book. Yarrell had membership of the libraries of the British Museum and the Linnaean Society, and his friends gave him access to college collections and their own private libraries and notebooks. Yarrell personally owned at least 2000 books, of which about 80 were concerned with fish or fishing. The posthumous sale of his books in 1856 raised £1100. ### Other resources Yarrell was a keen fisherman, and his journeys to English south coast locations like Brighton, Weymouth and Hastings gave him direct access to fresh specimens. He also frequented fish vendors, particularly in London's important markets, and had a network of fisherman-naturalist contacts, eight of whom he named in the preface to his book, notably the Cornishman Jonathan Couch, who provided him with many fish specimens from the southwest of England. Fellow members of the learned societies he belonged to also helped him with specimens. Yarrell had 220 species of fish as preserved specimens in his personal collection, now held in the Natural History Museum. Fish were mostly preserved in spirits of wine, a strong ethanol solution, although whisky was an alternative used in Scotland. As a London-based bookseller and an active member of London's learned societies, Yarrell had contact with many fellow naturalists who could help him with books, illustrations and notes, as well as specimens. He was a life-long friend of clergyman naturalist Leonard Jenyns, and a regular correspondent with the taxidermist John Gould, Sir William Jardine, the Earl of Derby, Edward Lear and Charles Darwin. Yarrell's knowledge of avian anatomy helped Lear develop his bird painting skills by teaching him that feather tracts follow the muscle contours, and he in return provided a drawing of a thicklip grey mullet for the fish book. Yarrell made significant discoveries of his own, including showing that male seahorses and pipefish carried fertilised eggs in a pouch, and clarifying how many Salmo (salmon and trout) species occurred in Britain. ## Format Yarrell was a great admirer of Thomas Bewick (he named a new wildfowl species "Bewick's swan" after the engraver). Bewick's A History of British Birds, published in two volumes in 1797 and in 1804, had brought him nationwide fame, and since Yarrell owned several editions of Bewick's books, he followed the older man's format for his own fish project. Volume 1 has a preface which also acknowledges the people who had helped Yarrell with his project, followed by an introduction discussing the general characteristics of fish (fifteen pages in the first edition) and an alphabetical index before the main species accounts start. There was no established taxonomic sequence for arranging fish, so where possible Yarrell followed Cuvier and Valenciennes, otherwise using anatomical resemblances in features including fins, teeth, and head bones to order his species. Each entry started with a wood engraving of the species, followed by its scientific and English names and their synonyms, and a lead section "Generic characteristics" summarising the key anatomical features. 
The main text described the fish in more detail, noted when it was recorded as a British species, mentioned interesting anatomical characteristics, described its habits in terms of gregariousness and water depth, and recorded where it could be found in Britain and Europe. Yarrell also ate many of the fish he described so that he could comment on their palatability. A typical example is Yarrell's first entry, for the perch. As well as the expected detailed anatomical and geographical information, in the five-page text he notes: > In rivers, the Perch prefers the sides of the stream rather than the rapid parts of the current, and feeds indiscriminately upon insects, worms, and small fishes ... So remarkable is the Perch for its boldness and voracity, that in a few days ... Mr. Jesse tells us, they came freely and took worms from his fingers ... They are constantly exhibited in the markets of Catholic countries, and, if not sold, are taken back to the ponds from which they were removed in the morning, to be reproduced another day. The flesh of this fish is firm, white, of good flavour, and easy of digestion ... The Perch, though very common, is one of the most beautiful of our fresh-water fishes, and, when in good condition, its colours are brilliant and striking ... ## Production and publication Yarrell's illustrations were wood engravings made using the techniques pioneered by Bewick in which boxwood blocks were engraved on their ends using a burin, a tool with a V-shaped tip. The new illustrations for the fish book were drawn onto the blocks by Alexander Fussell and cut by John Thompson, both of whom also worked on the later bird book. The most expensive part of producing illustrated books in the nineteenth century was the hand colouring of printed plates, mainly by young women. By using monochrome illustrations Yarrell could avoid this outlay and the associated costs of having the illustrations separate from the text and printed on a different grade of paper. The quality of the illustrations in Yarrell's books was very high, because he could afford to employ Thompson and his sons. Thompson senior was later to win a médaille d'or at the 1855 Paris Exhibition. William Swainson suggested to Yarrell that he should produce separate offprints of the illustrations and have them coloured for separate sale as a profitable additional venture, but Yarrell refused. There were practical problems in that the wood engraving blocks were set in the same formes as the letterpress for the text, and, if separated, the extra printing demand would wear out the wooden blocks, especially without the protection of the surrounding raised metal type. Yarrell also objected on principle to the prints being sold separately. The book was originally published in 19 fascicules (parts), each priced at 2/6d (12.5p). The last part contained an index. The publisher of Yarrell's books was John Van Voorst, whose business was in Paternoster Row, a street central to the London publishing trade. He began to specialise in natural history publications and was appointed official bookseller to the London Zoological Society in 1837. Van Voorst often visited Yarrell's house, and joined him to shoot and fish on estates and streams around London. He was a Fellow of the Linnean Society and a founding fellow of the Royal Microscopical Society, established in 1839. ### Editions Three editions and three supplements were published by Van Voorst. - 1835–36 Two volumes originally published in 19 parts. 
226 species described and figured, and 140 vignettes. Volume 1, 408 pp., volume 2, 472 pp. - 1839 Supplement, 27 new species. Volume 1, 48 pp., volume 2, 78 pp. - 1841 Second edition, two volumes containing 263 species and 500 figures. Volume 1, 464 pp., volume 2, 628 pp. - 1859 Posthumous third edition, two volumes, edited by explorer and naturalist Sir John Richardson. In this edition, the text was preceded by a "Memoir of William Yarrell" and a list of his publications. Volume 1, 679 pp., volume 2, 673 pp. - 1860 Second supplement to first edition, edited by Sir John Richardson, also being the first supplement to the second edition, 71 pp. ### Other publications Yarrell's many other ichthyological works included an 1839 three-page, 30.5 by 44 cm (12.0 by 17.3 in) oblong folio, On the Growth of the Salmon in Fresh Water, with drawings in the text and six life-sized coloured illustrations of the fish, chapter 8, "Marine Fishes", in William Henry Harvey's 1854 The Sea-Side Book, and an article on Eurasian dace in the Transactions of the Linnean Society of London. ## Reception Publications writing contemporary positive reviews of A History of British Fishes included The Athenaeum, The Gentleman's Magazine, Leigh Hunt's London Journal, the London Medical Gazette and The Quarterly Review. The Gentleman's Magazine said > ... the task could not have been undertaken by one more competent for it. History and patient observations are enriched by a science of no ordinary kind ... We have little hesitation, therefore, in saying that the work before us is, perhaps, the most perfect of its kind which has been yet published. It is written in a style at once clear and satisfactory, and the illustrations are quite equal, if not superior, to those of Bewick's birds and quadrupeds. Indeed, we hardly thought it possible that fish could be so perfectly represented by engravings on wood ... The Quarterly Review saw the book as of wider importance. Near the end of a 35-page review, it states > This book ought to be largely circulated, not only on account of its scientific merits – though these, as we have in part shown, are great and signal – but because it is popularly written throughout, and therefore likely to excite general attention to a subject which ought to be held as one of primary importance by all those gentlemen of education and property who happen to be more immediately connected with some of the most extensive, and which might be among the most useful and important, districts of this empire. The passage continues with the promotion of sea fish as a means to relieve famine. There was a generally appreciative reception from Yarrell's fellow naturalists. Prideaux John Selby, an ornithologist and natural history artist, wrote to Jardine after receiving the first part to say how impressed he was with the beautifully executed woodcuts and the quality of the printing, and later, when he had the complete set, said to the same recipient that it was a "very beautiful work", although a few of the fish could have been better illustrated. Jardine himself published an enthusiastic review in his Magazine of Botany and Zoology. A History of British Fishes and the later A History of British Birds were both immediately commercially successful and became standard texts until the end of the nineteenth century. Van Voorst believed that Yarrell made around £4000 from the two books. Yarrell's name is commemorated in eight species, three of which are fish. 
These are Yarrell's blenny (Chirolophis ascanii), from the European North Atlantic coasts; the giant devil catfish, Bagarius yarrelli, from the rivers of the Indian subcontinent; and Laemonema yarrellii, a deep-sea morid cod from Madeira and the Great Meteor Seamount of the North Atlantic. The lightfish genus Yarrella is also named for him.
53,844,268
2017 FA Cup final
1,159,451,256
Association football championship match between Arsenal and Chelsea in 2017
[ "2016–17 FA Cup", "2017 sports events in London", "Arsenal F.C. matches", "Chelsea F.C. matches", "Events at Wembley Stadium", "FA Cup finals", "May 2017 sports events in the United Kingdom" ]
The 2017 FA Cup final was an association football match between London rivals Arsenal and Chelsea on 27 May 2017 at Wembley Stadium in London, England. It was the 136th FA Cup final overall of English football's primary cup competition, the Football Association Challenge Cup (FA Cup), organised by the Football Association (FA). This was a rematch of the 2002 FA Cup Final and the first final since 2003 in which the sides had each won once in the Premier League against one another, with a 3–0 victory for Arsenal in September 2016 and a 3–1 win for Chelsea the following February. The game was broadcast live in the United Kingdom by both BBC and BT Sport. BBC One provided the free-to-air coverage and BT Sport 2 was the pay-TV alternative. The match was refereed by Anthony Taylor in front of a crowd of 89,472. Arsenal kicked off and dominated the early stages, opening the scoring with a goal from Alexis Sánchez in the fourth minute. On 68 minutes, Victor Moses fell in the Arsenal penalty area under pressure and appealed for a penalty but instead was shown his second yellow card by the referee for diving and was sent off. In the 76th minute, Diego Costa scored for Chelsea to level the score at 1–1: he received the ball from Willian and struck the ball past David Ospina, the Arsenal goalkeeper. Two minutes later Aaron Ramsey scored with a header past Chelsea goalkeeper Thibaut Courtois after a cross from Olivier Giroud, who had come on as a substitute less than a minute earlier, to make it 2–1 to Arsenal. After four minutes of stoppage time, the whistle was blown and Arsenal won the FA Cup final 2–1, to secure a record 13th title, while Arsène Wenger became the most successful manager in the tournament's history with seven wins. Winning the FA Cup would have meant Arsenal qualified for the 2017–18 UEFA Europa League group stage had they not already secured their place in the competition after finishing fifth in the 2016–17 Premier League. They also earned the right to play Premier League champions Chelsea in the 2017 FA Community Shield. ## Route to the final ### Arsenal As a Premier League team, Arsenal started their campaign in the third round and were drawn away at EFL Championship club Preston North End. At Deepdale, Callum Robinson put Preston ahead from close range in the seventh minute to give the home side a 1–0 lead at half-time. A minute after the interval, Aaron Ramsey equalised with a powerful shot from the edge of the Preston penalty area before Olivier Giroud's deflected strike gave Arsenal a 2–1 victory. In the fourth round, they faced fellow Premier League side Southampton away from home at St Mary's Stadium. Danny Welbeck scored twice before the midway point of the first half and then crossed to Theo Walcott, who scored from close range to make it 3–0 at half-time. Walcott completed his hat-trick in the second half, with two assists from Alexis Sánchez, and Arsenal won 5–0. In the fifth round, Arsenal were drawn away against non-League side Sutton United of the National League, who were 105 places below them in the English football league system. At Sutton's Gander Green Lane, Arsenal won 2–0 with goals from Lucas Pérez and Walcott either side of half-time. The match was also noted for Sutton United's reserve goalkeeper Wayne Shaw being investigated by the Football Association and Gambling Commission: he had eaten a pie pitchside and admitted after the match that he had known that a betting company had offered odds on him doing so. 
In the quarter-final, Arsenal were drawn at home at the Emirates Stadium against National League club Lincoln City. Walcott gave Arsenal a one-goal lead in first-half stoppage time before second-half goals from Giroud, Sánchez and Ramsey, and an own goal by Luke Waterfall, gave the home side a 5–0 victory. In the semi-final, which took place at Wembley Stadium as a neutral venue, they played against fellow Premier League team Manchester City. After a goalless first half, Sergio Agüero put Manchester City ahead on the hour mark before Nacho Monreal scored the equaliser with a volley from Alex Oxlade-Chamberlain's cross. The match ended 1–1 in regular time and went into extra time. In the 101st minute, Sánchez scored from close range to put Arsenal ahead, a lead which they kept for a 2–1 win and progression to the final. ### Chelsea Chelsea also started their FA Cup campaign in the third round where they were drawn at home at Stamford Bridge against League One side Peterborough United. The home side took the lead through Pedro, and Michy Batshuayi doubled their advantage before half-time. Willian made it 3–0 seven minutes after the interval before John Terry was sent off for a foul on Lee Angol. Three minutes later, Tom Nichols scored for Peterborough but Pedro scored with 15 minutes to go to make the final score 4–1. In the fourth round, they were drawn against Championship team Brentford at home. Goals from Willian and Pedro made it 2–0 after 21 minutes, before Branislav Ivanović's goal on the break and a penalty from Batshuayi gave Chelsea a 4–0 victory. In the fifth round, Chelsea faced Championship side Wolverhampton Wanderers away at Molineux. After a goalless first half, Pedro gave Chelsea the lead with a header midway through the second before Diego Costa secured a 2–0 win with a low strike in the 89th minute. In the quarter-final, Chelsea were drawn at home against fellow Premier League side and FA Cup holders Manchester United. Ander Herrera was sent off for Manchester United in the 35th minute for a second yellow card before N'Golo Kanté scored the game's solitary goal early in the second half with a low driven shot which beat David de Gea. In the semi-final at Wembley Stadium, Chelsea took on Tottenham Hotspur, their London rivals. Willian gave Chelsea the lead in the fifth minute with a free kick before Harry Kane equalised with a low header. Son Heung-min was adjudged to have fouled Victor Moses on 43 minutes and Willian converted the subsequent penalty to give Chelsea a 2–1 half-time lead. Dele Alli equalised from a Christian Eriksen pass early in the second half but strikes from Eden Hazard and Nemanja Matić secured a 4–2 win for Chelsea and qualification for the final. ## Pre-match Arsenal were appearing in the FA Cup final for the 20th time, and for the third time in four years. They had won the cup twelve times, and were beaten finalists seven times, most recently in 2001. By comparison, Chelsea were making their 12th appearance in an FA Cup final. The club had won the cup seven times and lost four finals. The clubs had previously met 13 times in the FA Cup. Arsenal held an advantage in those meetings, winning seven of the last eight; Chelsea won the last FA Cup tie, a 2–1 victory in April 2009. This was the second FA Cup final to feature both sides; the first was won by Arsenal in 2002. The most recent meeting between the two teams was a league encounter in February 2017, Chelsea winning by three goals to one, a result which moved them 12 points clear in first position. 
The victory was significant given that Chelsea had lost the reverse fixture 3–0 in September 2016, in what BBC journalist Phil McNulty described as a "watershed moment" in their season. While Arsenal struggled to build momentum throughout autumn and winter, Chelsea manager Antonio Conte's tactical switch from 4–3–3 to 3–4–3 thereafter resulted in a 13-match winning run. They won the Premier League with two matches to spare, and later set a new divisional record for the most wins (30). Arsenal ended the season in fifth place, their lowest placing under manager Arsène Wenger, missing out on UEFA Champions League football for the first time in 20 years. Wenger's future had been cast into doubt following a bad run of form in February and March, which included the team losing 10–2 on aggregate against Bayern Munich in the Champions League, the worst aggregate performance by an English club in the history of the tournament. To arrest the decline, Wenger adopted a similar tactical change to Conte, playing three defenders at the back. Arsenal went on to win eight of their last nine fixtures, but Wenger suggested his team were not favourites: "it's quite even or maybe Chelsea are ahead, so it's a bit similar to what happened in the semi-final against Manchester City. That's part of what makes it all exciting as well." Of his future he said, "It will not be my last match anyway, because I will stay, no matter what happens, in football." Former Arsenal player Paul Merson's evaluation was, "Mertesacker is going to be crucial for Arsenal if he plays; he will have to play very well if Arsenal are to have any chance. If he doesn't play well then Chelsea are going to cut through Arsenal like a knife through butter." Conte described Wenger as one of the "greats" in football, and felt he would remain as Arsenal manager come the season's end. "He has done a fantastic job. Sometimes in England I think you undervalue the achievement of qualifying for the Champions League. Only this season they haven't qualified for the Champions League," he continued. Conte reiterated the importance of his players keeping their focus and wanted Chelsea to "pay great attention and focus" to their opponents. Hazard, who was playing in his first FA Cup final, was eager to win the competition: "For Chelsea, for such a big club like this, you need to win one, two, three trophies every season if you can. Now we have the possibility to win another trophy so all the players are ready for that. It's such a great competition for the fans." While Chelsea had no injury or suspension worries, Arsenal had doubts over the fitness of Petr Čech and Shkodran Mustafi, and were already without defenders Laurent Koscielny (suspension) and Gabriel (ankle injury). Per Mertesacker was expected to start; the Germany international only featured once for Arsenal's first team during the season. The day before the final The Guardian reported that Wenger chose David Ospina to start in goal ahead of Čech. Both clubs received an allocation of approximately 28,000 tickets. For adults, these were priced £45, £65, £85 and £115, with concessions in place. Chelsea supporters were situated in the west side of the ground, while Arsenal's were allocated in the east. The remaining 14,000 tickets were distributed to what the FA described as the "football family which includes volunteers representing counties, leagues, local clubs and charities". The losing finalist would receive £1.6 million in total prize money while the winners earned a total of £3.4 million. 
Security at Wembley Stadium was tightened in the wake of the Manchester Arena bombing and Arsenal cancelled a screening of the game at their ground. Both clubs cancelled plans for open top bus victory parades. The game was broadcast live in the United Kingdom by both BBC and BT Sport. BBC One provided the free-to-air coverage and BT Sport 2 was the pay-TV alternative. It was the first time in the history of the FA Cup that a spidercam was utilised during the match. Sol Campbell and Eddie Newton came onto the pitch to greet the supporters and place the trophy on a plinth. As they departed, the traditional Cup Final hymn, "Abide with Me", was sung by representatives of eight clubs, including Lincoln City, Guernsey, Millwall and Sutton United. The teams emerged moments later led by their managers, and players were greeted by Prince William, Duke of Cambridge. Soprano Emily Haig sang the national anthem and a minute's silence was then held to honour the victims of the Manchester attack. Prince William, Mayor of Greater Manchester Andy Burnham, and FA chairman Greg Clarke laid wreaths on the pitch in tribute. ## Match ### Summary #### First half Arsenal kicked off the match around 5:30 p.m. on 27 May 2017 at Wembley Stadium in front of 89,472 spectators. Chelsea lined up in a 3–4–3 formation with Pedro, Costa and Hazard in attack, while Arsenal adopted a 3–4–2–1 with Welbeck up front. Arsenal dominated the early stages of the match and opened the scoring with a goal from Sánchez in the fourth minute, shooting past the advancing goalkeeper from 6 yards (5.5 m) out with his right foot. The goal was initially flagged as offside as Ramsey was adjudged to be in an offside position. After discussion with his assistant referee, the referee overrode the decision and awarded Arsenal the goal due to Ramsey not attempting to play the ball. In the tenth minute, Ramsey was shown the first yellow card of the match. In the 15th minute, Sánchez struck from distance but his shot was high, before Costa's shot from around 14 yards (13 m) was blocked by Arsenal's defence. A minute later, Mesut Özil's side-footed shot was cleared off the line by Gary Cahill. On 19 minutes, Arsenal hit the frame of Chelsea's goal twice in quick succession: Welbeck's header struck the post and the ball rebounded off Ramsey's chest, from where it hit the post once more before going out. Midway through the half, Hazard passed to Moses whose shot was blocked before Mertesacker stopped Costa's shot. On 29 minutes, a quick break from Arsenal ended with Welbeck opting to shoot from a narrow angle and Cahill making another goal-line clearance. Three minutes later, Sánchez's floated free kick fell to Granit Xhaka whose strike from distance was saved by Thibaut Courtois, the Chelsea goalkeeper. With six minutes of the half remaining, Pedro's shot from the edge of the Arsenal penalty area went over the crossbar. Early in stoppage time, Monreal fouled Pedro near the box but Alonso's free kick was off-target and the half ended 1–0. #### Second half Neither side made any changes to their personnel during the interval and the second half kicked off with neither side dominating. Four minutes in, Pedro's shot was blocked by Mertesacker before Kanté's powerful shot was caught by Ospina in the Arsenal goal. Costa's attempt was then blocked by Mertesacker before Moses was kept out by Arsenal's defence. 
In the 54th minute, Rob Holding was booked for bringing Costa down on the edge of the Arsenal penalty area: Pedro's subsequent free kick was headed clear by Mertesacker. Two minutes later Moses was shown the yellow card for a foul on Welbeck before Kanté was booked for illegally blocking Ramsey. In the 61st minute, Chelsea made the first substitution of the match when Matić was replaced by Cesc Fàbregas. Héctor Bellerín then took possession of the ball on the edge of the Chelsea penalty area after a run down the left wing by Welbeck, but his low shot was saved by Courtois diving to his left. On 68 minutes, Moses fell in the Arsenal area while close to Monreal and appealed for a penalty but instead was shown his second yellow card by the referee for diving and was sent off. With 18 minutes of the game remaining, Chelsea made their second change with Willian coming on for Pedro. In the 76th minute, Costa scored for Chelsea to level the score at 1–1: he received the ball from Willian, chested it down and struck the ball past Ospina. Giroud then came on for Welbeck, and 38 seconds later Arsenal retook the lead: Ramsey headed the ball past Courtois after a cross from Giroud to make it 2–1. With ten minutes remaining, David Luiz headed Willian's free kick into the side netting. Arsenal's Oxlade-Chamberlain was then replaced by Francis Coquelin, who was booked within a minute for a foul. On 85 minutes, Bellerín received the ball on the halfway line and ran at Luiz, beating him before shooting wide of the Chelsea goal. Costa's strike then hit Ospina squarely in the chest from close range. Batshuayi came on in the 88th minute to replace Costa before Özil's side-footed shot struck the Chelsea goal-post. Three minutes into injury time, Arsenal brought on Mohamed Elneny to replace Sánchez. After one further minute of stoppage time, the whistle was blown and Arsenal won the FA Cup final 2–1. ## Post-match Winning the game secured a record 13th title for Arsenal, while Wenger became the most successful manager in the tournament's history with seven wins. Although winning the FA Cup would have secured a 2017–18 UEFA Europa League group stage qualification, Arsenal had already qualified for the competition with a fifth-placed finish in the 2016–17 Premier League, which saw them fail to qualify for the 2017–18 UEFA Champions League. Due to the circumstances surrounding Mertesacker's appearance and performance on the day, some Arsenal fans and former players have dubbed the game The Mertesacker Final. Welbeck praised his team but refused to be drawn on Wenger's future, saying "It was a great team performance ... The manager is his own man and he makes his own decision and the board will make the right decision so I can't comment on that." Wenger himself focused on his team's display: "We had an outstanding performance from the first minute onwards. This team has suffered. They've united and responded." Chelsea goalkeeper Courtois refused to blame Moses for the defeat: "We are obviously disappointed but I want to say congratulations to Arsenal. They played a good game ... we went down to 10 men and the red card was correct. Victor Moses doesn't need to apologise." Losing manager Conte said he had been surprised by Arsenal and that his side had started poorly: "Arsenal started very well with great determination. They surprised us a bit but I repeat our first 25 minutes weren't good ... 
Our season was incredible to win the league in this way, it was great but now it's important to look forward and to restart." ## See also - Arsenal F.C.–Chelsea F.C. rivalry
48,927
Camille Saint-Saëns
1,172,169,341
French composer, organist, conductor and pianist (1835–1921)
[ "1835 births", "1921 deaths", "19th-century French composers", "19th-century French male classical pianists", "19th-century French male classical violinists", "19th-century classical composers", "19th-century organists", "20th-century French composers", "20th-century French male classical pianists", "20th-century French male classical violinists", "20th-century classical composers", "20th-century organists", "Burials at Montparnasse Cemetery", "Camille Saint-Saëns", "Child classical musicians", "Classical composers of church music", "Composers awarded knighthoods", "Composers for pedal piano", "Composers for piano", "Composers for pipe organ", "Composers for violin", "Conservatoire de Paris alumni", "French Romantic composers", "French ballet composers", "French classical organists", "French deists", "French male classical composers", "French male organists", "French military personnel of the Franco-Prussian War", "French music critics", "French opera composers", "French people of Norman descent", "Grand Cross of the Legion of Honour", "Grand Crosses of the Order of Saint-Charles", "Honorary Members of the Royal Philharmonic Society", "Male classical organists", "Male opera composers", "Musicians from Paris", "Oratorio composers", "Pupils of Fromental Halévy", "Recipients of the Pour le Mérite (civil class)" ]
Charles-Camille Saint-Saëns (UK: /ˈsæ̃sɒ̃(s)/, US: /sæ̃ˈsɒ̃(s)/; 9 October 1835 – 16 December 1921) was a French composer, organist, conductor and pianist of the Romantic era. His best-known works include Introduction and Rondo Capriccioso (1863), the Second Piano Concerto (1868), the First Cello Concerto (1872), Danse macabre (1874), the opera Samson and Delilah (1877), the Third Violin Concerto (1880), the Third ("Organ") Symphony (1886) and The Carnival of the Animals (1886). Saint-Saëns was a musical prodigy; he made his concert debut at the age of ten. After studying at the Paris Conservatoire he followed a conventional career as a church organist, first at Saint-Merri, Paris and, from 1858, La Madeleine, the official church of the French Empire. After leaving the post twenty years later, he was a successful freelance pianist and composer, in demand in Europe and the Americas. As a young man, Saint-Saëns was enthusiastic for the most modern music of the day, particularly that of Schumann, Liszt and Wagner, although his own compositions were generally within a conventional classical tradition. He was a scholar of musical history, and remained committed to the structures worked out by earlier French composers. This brought him into conflict in his later years with composers of the impressionist and expressionist schools of music; although there were neoclassical elements in his music, foreshadowing works by Stravinsky and Les Six, he was often regarded as a reactionary in the decades around the time of his death. Saint-Saëns held only one teaching post, at the École de Musique Classique et Religieuse in Paris, and remained there for less than five years. It was nevertheless important in the development of French music: his students included Gabriel Fauré, among whose own later pupils was Maurice Ravel. Both of them were strongly influenced by Saint-Saëns, whom they revered as a genius. ## Life ### Early life Saint-Saëns was born in Paris, the only child of Jacques-Joseph-Victor Saint-Saëns (1798–1835), an official in the French Ministry of the Interior, and Françoise-Clémence, née Collin. Victor Saint-Saëns was of Norman ancestry, and his wife was from an Haute-Marne family; their son, born in the Rue du Jardinet in the 6th arrondissement of Paris, and baptised at the nearby church of Saint-Sulpice, always considered himself a true Parisian. Less than two months after the christening, Victor Saint-Saëns died of consumption (tuberculosis) on the first anniversary of his marriage. The young Camille was taken to the country for the sake of his health, and for two years lived with a nurse at Corbeil, 29 kilometres (18 mi) to the south of Paris. When Saint-Saëns was brought back to Paris he lived with his mother and her widowed aunt, Charlotte Masson. Before he was three years old he displayed perfect pitch and enjoyed picking out tunes on the piano. His great-aunt taught him the basics of pianism, and when he was seven he became a pupil of Camille-Marie Stamaty, a former pupil of Friedrich Kalkbrenner. Stamaty required his students to play while resting their forearms on a bar situated in front of the keyboard, so that all the pianist's power came from the hands and fingers rather than the arms, which, Saint-Saëns later wrote, was good training. Clémence Saint-Saëns, well aware of her son's precocious talent, did not wish him to become famous too young. The music critic Harold C. 
Schonberg wrote of Saint-Saëns in 1969, "It is not generally realized that he was the most remarkable child prodigy in history, and that includes Mozart." The boy gave occasional performances for small audiences from the age of five, but it was not until he was ten that he made his official public debut, at the Salle Pleyel, in a programme that included Mozart's Piano Concerto in B♭ (K450) and Beethoven's Third Piano Concerto. Through Stamaty's influence, Saint-Saëns was introduced to the composition professor Pierre Maleden and the organ teacher Alexandre Pierre François Boëly. From the latter he acquired a lifelong love of the music of Bach, which was then little known in France. As a schoolboy Saint-Saëns was outstanding in many subjects. In addition to his musical prowess, he distinguished himself in the study of French literature, Latin and Greek, divinity, and mathematics. His interests included philosophy, archaeology and astronomy, of which, particularly the last, he remained a talented amateur in later life. In 1848, at the age of thirteen, Saint-Saëns was admitted to the Paris Conservatoire, France's foremost music academy. The director, Daniel Auber, had succeeded Luigi Cherubini in 1842, and brought a more relaxed regime than that of his martinet predecessor, though the curriculum remained conservative. Students, even outstanding pianists like Saint-Saëns, were encouraged to specialise in organ studies, because a career as a church organist was seen to offer more opportunities than that of a solo pianist. His organ professor was François Benoist, whom Saint-Saëns considered a mediocre organist but a first-rate teacher; his pupils included Adolphe Adam, César Franck, Charles Alkan, Louis Lefébure-Wély and Georges Bizet. In 1849 Saint-Saëns won the Conservatoire's second prize for organists, and in 1851 the top prize; in the same year he began formal composition studies. His professor was a protégé of Cherubini, Fromental Halévy, whose pupils included Charles Gounod and Bizet. Saint-Saëns's student compositions included a symphony in A major (1850) and a choral piece, Les Djinns (1850), after an eponymous poem by Victor Hugo. He competed for France's premier musical award, the Prix de Rome, in 1852 but was unsuccessful. Auber believed that the prize should have gone to Saint-Saëns, considering him to have more promise than the winner, Léonce Cohen, who made little mark during the rest of his career. In the same year Saint-Saëns had greater success in a competition organised by the Société Sainte-Cécile, Paris, with his Ode à Sainte-Cécile, for which the judges unanimously voted him the first prize. The first piece the composer acknowledged as a mature work and gave an opus number was Trois Morceaux for harmonium (1852). ### Early career On leaving the Conservatoire in 1853, Saint-Saëns accepted the post of organist at the ancient Parisian church of Saint-Merri near the Hôtel de Ville. The parish was substantial, with 26,000 parishioners; in a typical year there were more than two hundred weddings, the organist's fees from which, together with fees for funerals and his modest basic stipend, gave Saint-Saëns a comfortable income. The organ, the work of François-Henri Clicquot, had been badly damaged in the aftermath of the French Revolution and imperfectly restored. The instrument was adequate for church services but not for the ambitious recitals that many high-profile Parisian churches offered. 
With enough spare time to pursue his career as a pianist and composer, Saint-Saëns composed what became his opus 2, the Symphony in E♭ (1853). This work, with military fanfares and augmented brass and percussion sections, caught the mood of the times in the wake of the popular rise to power of Napoleon III and the restoration of the French Empire. The work brought the composer another first prize from the Société Sainte-Cécile. Among the musicians who were quick to spot Saint-Saëns's talent were the composers Gioachino Rossini, Hector Berlioz and Franz Liszt, and the influential singer Pauline Viardot, who all encouraged him in his career. In early 1858 Saint-Saëns moved from Saint-Merri to the high-profile post of organist of La Madeleine, the official church of the Empire; Liszt heard him playing there and declared him the greatest organist in the world. Although in later life he had a reputation for outspoken musical conservatism, in the 1850s Saint-Saëns supported and promoted the most modern music of the day, including that of Liszt, Robert Schumann and Richard Wagner. Unlike many French composers of his own and the next generation, Saint-Saëns, for all his enthusiasm for and knowledge of Wagner's operas, was not influenced by him in his own compositions. He commented, "I admire deeply the works of Richard Wagner in spite of their bizarre character. They are superior and powerful, and that is sufficient for me. But I am not, I have never been, and I shall never be of the Wagnerian religion." ### 1860s: Teacher and growing fame In 1861 Saint-Saëns accepted his only post as a teacher, at the École de Musique Classique et Religieuse, Paris, which Louis Niedermeyer had established in 1853 to train first-rate organists and choirmasters for the churches of France. Niedermeyer himself was professor of piano; when he died in March 1861, Saint-Saëns was appointed to take charge of piano studies. He scandalised some of his more austere colleagues by introducing his students to contemporary music, including that of Schumann, Liszt and Wagner. His best-known pupil, Gabriel Fauré, recalled in old age: > After allowing the lessons to run over, he would go to the piano and reveal to us those works of the masters from which the rigorous classical nature of our programme of study kept us at a distance and who, moreover, in those far-off years, were scarcely known. ... At the time I was 15 or 16, and from this time dates the almost filial attachment ... the immense admiration, the unceasing gratitude I [have] had for him, throughout my life. Saint-Saëns further enlivened the academic regime by writing, and composing incidental music for, a one-act farce performed by the students (including André Messager). He conceived what would eventually become his best-known piece, The Carnival of the Animals, with his students in mind, but did not finish composing it until 1886, more than twenty years after he left the Niedermeyer school. In 1864 Saint-Saëns caused some surprise by competing a second time for the Prix de Rome. Many in musical circles were puzzled by his decision to enter the competition again, now that he was establishing a reputation as a soloist and composer. He was once more unsuccessful. Berlioz, one of the judges, wrote: > We gave the Prix de Rome the other day to a young man who wasn't expecting to win it and who went almost mad with joy. We were all expecting the prize to go to Camille Saint-Saëns, who had the strange notion of competing. 
I confess I was sorry to vote against a man who is truly a great artist and one who is already well known, practically a celebrity. But the other man, who is still a student, has that inner fire, inspiration, he feels, he can do things that can't be learnt and the rest he'll learn more or less. So I voted for him, sighing at the thought of the unhappiness that this failure must cause Saint-Saëns. But, whatever else, one must be honest. According to the musical scholar Jean Gallois, it was apropos of this episode that Berlioz made his well-known bon mot about Saint-Saëns, "He knows everything, but lacks inexperience" ("Il sait tout, mais il manque d'inexpérience"). The winner, Victor Sieg, had a career no more notable than that of the 1852 winner, but Saint-Saëns's biographer Brian Rees speculates that the judges may "have been seeking signs of genius in the midst of tentative effort and error, and considered that Saint-Saëns had reached his summit of proficiency". The suggestion that Saint-Saëns was more proficient than inspired dogged his career and posthumous reputation. He himself wrote, "Art is intended to create beauty and character. Feeling only comes afterwards and art can very well do without it. In fact, it is very much better off when it does." The biographer Jessica Duchen writes that he was "a troubled man who preferred not to betray the darker side of his soul". The critic and composer Jeremy Nicholas observes that this reticence has led many to underrate the music; he quotes such slighting remarks as "Saint-Saëns is the only great composer who wasn't a genius", and "Bad music well written". While teaching at the Niedermeyer school Saint-Saëns put less of his energy into composing and performing, although an overture entitled Spartacus was crowned at a competition instituted in 1863 by the Société Sainte Cécile of Bordeaux. But after he left the school in 1865 he pursued both aspects of his career with vigour. In 1867 his cantata Les noces de Prométhée beat more than a hundred other entries to win the composition prize of the Grande Fête Internationale in Paris, for which the jury included Auber, Berlioz, Gounod, Rossini and Giuseppe Verdi. In 1868 he premiered the first of his orchestral works to gain a permanent place in the repertoire, his Second Piano Concerto. Playing this and other works he became a noted figure in the musical life of Paris and other cities in France and abroad during the 1860s. ### 1870s: War, marriage and operatic success In 1870, concerned at the dominance of German music and the lack of opportunity for young French composers to have their works played, Saint-Saëns and Romain Bussine, professor of singing at the Conservatoire, discussed the founding of a society to promote new French music. Before they could take the proposal further the Franco-Prussian War broke out. Saint-Saëns served in the National Guard during the war. During the brief but bloody Paris Commune that followed in March to May 1871 his superior at the Madeleine, the Abbé Deguerry, was murdered by rebels; Saint-Saëns escaped to a brief exile in England. With the help of George Grove and others he supported himself in London, giving recitals. Returning to Paris in May, he found that anti-German sentiments had considerably enhanced support for the idea of a pro-French musical society. 
The Société Nationale de Musique, with its motto, "Ars Gallica", had been established in February 1871, with Bussine as president, Saint-Saëns as vice-president and Henri Duparc, Fauré, Franck and Jules Massenet among its founder-members. As an admirer of Liszt's innovative symphonic poems, Saint-Saëns enthusiastically adopted the form; his first "poème symphonique" was Le Rouet d'Omphale (1871), premiered at a concert of the Société Nationale in January 1872. In the same year, after more than a decade of intermittent work on operatic scores, Saint-Saëns finally had one of his operas staged. La princesse jaune ("The Yellow Princess"), a one-act, light romantic piece, was given at the Opéra-Comique, Paris in June. It ran for five performances. Throughout the 1860s and early 1870s, Saint-Saëns had continued to live a bachelor existence, sharing a large fourth-floor flat in the Rue du Faubourg Saint-Honoré with his mother. In 1875, he surprised many by marrying. The groom was approaching forty and his bride was nineteen; she was Marie-Laure Truffot, the sister of one of the composer's pupils. The marriage was not a success. In the words of the biographer Sabina Teller Ratner, "Saint-Saëns's mother disapproved, and her son was difficult to live with". Saint-Saëns and his wife moved to the Rue Monsieur-le-Prince, in the Latin Quarter; his mother moved with them. The couple had two sons, both of whom died in infancy. In 1878, the elder, André, aged two, fell from a window of the flat and was killed; the younger, Jean-François, died of pneumonia six weeks later, aged six months. Saint-Saëns and Marie-Laure continued to live together for three years, but he blamed her for André's accident; the double blow of their loss effectively destroyed the marriage. For a French composer of the 19th century, opera was seen as the most important type of music. Saint-Saëns's younger contemporary and rival, Massenet, was beginning to gain a reputation as an operatic composer, but Saint-Saëns, with only the short and unsuccessful La princesse jaune staged, had made no mark in that sphere. In February 1877, he finally had a full-length opera staged. His four-act "drame lyrique", Le timbre d'argent ("The Silver Bell"), to Jules Barbier's and Michel Carré's libretto, reminiscent of the Faust legend, had been in rehearsal in 1870, but the outbreak of war halted the production. The work was eventually presented by the Théâtre Lyrique company of Paris; it ran for eighteen performances. The dedicatee of the opera, Albert Libon, died three months after the premiere, leaving Saint-Saëns a large legacy "To free him from the slavery of the organ of the Madeleine and to enable him to devote himself entirely to composition". Saint-Saëns, unaware of the imminent bequest, had resigned his position shortly before his friend died. He was not a conventional Christian, and found religious dogma increasingly irksome; he had become tired of the clerical authorities' interference and musical insensitivity; and he wanted to be free to accept more engagements as a piano soloist in other cities. After this he never played the organ professionally in a church service, and rarely played the instrument at all. He composed a Messe de Requiem in memory of his friend, which was performed at Saint-Sulpice to mark the first anniversary of Libon's death; Charles-Marie Widor played the organ and Saint-Saëns conducted. 
In December 1877, Saint-Saëns had a more solid operatic success with Samson et Dalila, his one opera to gain and keep a place in the international repertoire. Because of its biblical subject, the composer had met many obstacles to its presentation in France, and through Liszt's influence the premiere was given at Weimar in a German translation. Although the work eventually became an international success it was not staged at the Paris Opéra until 1892. Saint-Saëns was a keen traveller. From the 1870s until the end of his life he made 179 trips to 27 countries. His professional engagements took him most often to Germany and England; for holidays, and to avoid Parisian winters which affected his weak chest, he favoured Algiers and various places in Egypt. ### 1880s: International figure Saint-Saëns was elected to the Institut de France in 1881, at his second attempt, having to his chagrin been beaten by Massenet in 1878. In July of that year he and his wife went to the Auvergnat spa town of La Bourboule for a holiday. On 28 July he disappeared from their hotel, and a few days later his wife received a letter from him to say that he would not be returning. They never saw each other again. Marie Saint-Saëns returned to her family, and lived until 1950, dying near Bordeaux at the age of ninety-five. Saint-Saëns did not divorce his wife and remarry, nor did he form any later intimate relationship with a woman. Rees comments that although there is no firm evidence, some biographers believe that Saint-Saëns was more attracted to his own sex than to women. After the death of his children and collapse of his marriage, Saint-Saëns increasingly found a surrogate family in Fauré and his wife, Marie, and their two sons, to whom he was a much-loved honorary uncle. Marie told him, "For us you are one of the family, and we mention your name ceaselessly here." In the 1880s Saint-Saëns continued to seek success in the opera house, an undertaking made the more difficult by an entrenched belief among influential members of the musical establishment that it was unthinkable that a pianist, organist and symphonist could write a good opera. He had two operas staged during the decade, the first being Henry VIII (1883) commissioned by the Paris Opéra. Although the libretto was not of his choosing, Saint-Saëns, normally a fluent, even facile composer, worked at the score with unusual diligence to capture a convincing air of 16th-century England. The work was a success, and was frequently revived during the composer's lifetime. When it was produced at Covent Garden in 1898, The Era commented that though French librettists generally "make a pretty hash of British history", this piece was "not altogether contemptible as an opera story". The open-mindedness of the Société Nationale had hardened by the mid-1880s into a dogmatic adherence to Wagnerian methods favoured by Franck's pupils, led by Vincent d'Indy. They had begun to dominate the organisation and sought to abandon its "Ars Gallica" ethos of commitment to French works. Bussine and Saint-Saëns found this unacceptable, and resigned in 1886. Having long pressed the merits of Wagner on a sometimes sceptical French public, Saint-Saëns was now becoming worried that the German's music was having an excessive impact on young French composers. His increasing caution towards Wagner developed in later years into stronger hostility, directed as much at Wagner's political nationalism as at his music. 
By the 1880s Saint-Saëns was an established favourite with audiences in England, where he was widely regarded as the greatest living French composer. In 1886 the Philharmonic Society of London commissioned what became one of his most popular and respected works, the Third ("Organ") Symphony. It was premiered in London at a concert in which Saint-Saëns appeared as conductor of the symphony and as soloist in Beethoven's Fourth Piano Concerto, conducted by Sir Arthur Sullivan. The success of the symphony in London was considerable, but was surpassed by the ecstatic welcome the work received at its Paris premiere early the following year. Later in 1887 Saint-Saëns's "drame lyrique" Proserpine opened at the Opéra-Comique. It was well received and seemed to be heading for a substantial run when the theatre burnt down within weeks of the premiere and the production was lost. In December 1888 Saint-Saëns's mother died. He felt her loss deeply, and was plunged into depression and insomnia, even contemplating suicide. He left Paris and stayed in Algiers, where he recuperated until May 1889, walking and reading but unable to compose. ### 1890s: Marking time During the 1890s Saint-Saëns spent much time on holiday, travelling overseas, composing less and performing less frequently than before. A planned visit to perform in Chicago fell through in 1893. He wrote one opera, the comedy Phryné (1893), and together with Paul Dukas helped to complete Frédégonde (1895), an opera left unfinished by Ernest Guiraud, who died in 1892. Phryné was well received, and prompted calls for more comic operas at the Opéra-Comique, which had latterly been favouring grand opera. His few choral and orchestral works from the 1890s are mostly short; the major concert pieces from the decade were the single-movement fantasia Africa (1891) and his Fifth ("Egyptian") Piano Concerto, which he premiered at a concert in 1896 marking the fiftieth anniversary of his début at the Salle Pleyel in 1846. Before playing the concerto he read out a short poem he had written for the event, praising his mother's tutelage and his public's long support. Among the concerts that Saint-Saëns undertook during the decade was one at Cambridge in June 1893, when he, Bruch and Tchaikovsky performed at an event presented by Charles Villiers Stanford for the Cambridge University Musical Society, marking the award of honorary degrees to all three visitors. Saint-Saëns greatly enjoyed the visit, and even spoke approvingly of the college chapel services: "The demands of English religion are not excessive. The services are very short, and consist chiefly of listening to good music extremely well sung, for the English are excellent choristers". The mutual regard between him and British choirs continued for the rest of his life, and one of his last large-scale works, the oratorio The Promised Land, was composed for the Three Choirs Festival of 1913. ### 1900–21: Last years In 1900, after ten years without a permanent home in Paris, Saint-Saëns took a flat in the rue de Courcelles, not far from his old residence in the rue du Faubourg Saint-Honoré. This remained his home for the rest of his life. He continued to travel abroad frequently, but increasingly often to give concerts rather than as a tourist. He revisited London, where he was always a welcome visitor, went to Berlin, where, until the First World War, he was greeted with honour, and travelled in Italy, Spain, Monaco and provincial France. 
In 1906 and 1909 he made highly successful tours of the United States, as a pianist and conductor. In New York on his second visit he premiered his "Praise ye the Lord" for double choir, orchestra and organ, which he composed for the occasion. Despite his growing reputation as a musical reactionary, Saint-Saëns was, according to Gallois, probably the only French musician who travelled to Munich to hear the premiere of Mahler's Eighth Symphony in 1910. Nonetheless, by the 20th century Saint-Saëns had lost much of his enthusiasm for modernism in music. Though he strove to conceal it from Fauré, he did not understand or like the latter's opera Pénélope (1913), of which he was the dedicatee. In 1917 Francis Poulenc, at the beginning of his career as a composer, was dismissive when Ravel praised Saint-Saëns as a genius. By this time, various strands of new music were emerging with which Saint-Saëns had little in common. His classical instincts for form put him at odds with what seemed to him the shapelessness and lack of structure of the musical impressionists, led by Debussy. Nor did Arnold Schönberg's atonality commend itself to Saint-Saëns: > There is no longer any question of adding to the old rules new principles which are the natural expression of time and experience, but simply of casting aside all rules and every restraint. "Everyone ought to make his own rules. Music is free and unlimited in its liberty of expression. There are no perfect chords, dissonant chords or false chords. All aggregations of notes are legitimate." That is called, and they believe it, the development of taste. Holding such conservative views, Saint-Saëns was out of sympathy – and out of fashion – with the Parisian musical scene of the early 20th century, fascinated as it was with novelty. It is often said that he walked out, scandalised, from the premiere of Vaslav Nijinsky and Igor Stravinsky's ballet The Rite of Spring in 1913. In fact, according to Stravinsky, Saint-Saëns was not present on that occasion, but at the first concert performance of the piece the following year he expressed the firm view that Stravinsky was insane. When a group of French musicians led by Saint-Saëns tried to organise a boycott of German music during the First World War, Fauré and Messager dissociated themselves from the idea, though the disagreement did not affect their friendship with their old teacher. They were privately concerned that their friend was in danger of looking foolish with his excess of patriotism, and his growing tendency to denounce in public the works of rising young composers, as in his condemnation of Debussy's En blanc et noir (1915): "We must at all costs bar the door of the Institut against a man capable of such atrocities; they should be put next to the cubist pictures." His determination to block Debussy's candidacy for election to the Institut was successful, and caused bitter resentment from the younger composer's supporters. Saint-Saëns's response to the neoclassicism of Les Six was equally uncompromising: of Darius Milhaud's polytonal symphonic suite Protée (1919) he commented, "fortunately, there are still lunatic asylums in France". Saint-Saëns gave what he intended to be his farewell concert as a pianist in Paris in 1913, but his retirement was soon in abeyance as a result of the war, during which he gave many performances in France and elsewhere, raising money for war charities. These activities took him across the Atlantic, despite the danger from German submarines. 
In November 1921, Saint-Saëns gave a recital at the Institut for a large invited audience; it was remarked that his playing was as vivid and precise as ever, and that his personal bearing was admirable for a man of eighty-six. He left Paris a month later for Algiers, with the intention of wintering there, as he had long been accustomed to do. While there he died of a heart attack on 16 December 1921. His body was taken back to Paris, and after a state funeral at the Madeleine he was buried at the cimetière du Montparnasse. Heavily veiled, in an inconspicuous place among the mourners from France's political and artistic élite, was his widow, Marie-Laure, whom he had last seen in 1881. ## Music In the early years of the 20th century, the anonymous author of the article on Saint-Saëns in Grove's Dictionary of Music and Musicians wrote: > Saint-Saëns is a consummate master of composition, and no one possesses a more profound knowledge than he does of the secrets and resources of the art; but the creative faculty does not keep pace with the technical skill of the workman. His incomparable talent for orchestration enables him to give relief to ideas which would otherwise be crude and mediocre in themselves ... his works are on the one hand not frivolous enough to become popular in the widest sense, nor on the other do they take hold of the public by that sincerity and warmth of feeling which is so convincing. Although a keen modernist in his youth, Saint-Saëns was always deeply aware of the great masters of the past. In a profile of him written to mark his eightieth birthday, the critic D C Parker wrote, "That Saint-Saëns knows Rameau ... Bach and Handel, Haydn and Mozart, must be manifest to all who are familiar with his writings. His love for the classical giants and his sympathy with them form, so to speak, the foundation of his art." Less attracted than some of his French contemporaries to the continuous stream of music popularised by Wagner, Saint-Saëns often favoured self-contained melodies. Though they are frequently, in Ratner's phrase, "supple and pliable", more often than not they are constructed in three- or four-bar sections, and the "phrase pattern AABB is characteristic". An occasional tendency to neoclassicism, influenced by his study of French baroque music, is in contrast with the colourful orchestral music more widely identified with him. Grove observes that he makes his effects more by characterful harmony and rhythms than by extravagant scoring. In both of those areas of his craft he was normally content with the familiar. Rhythmically, he inclined to standard double, triple or compound metres (although Grove points to a 5/4 passage in the Piano Trio and another in 7/4 in the Polonaise for two pianos). From his time at the Conservatoire he was a master of counterpoint; contrapuntal passages crop up, seemingly naturally, in many of his works. ### Orchestral works The authors of the 1955 The Record Guide, Edward Sackville-West and Desmond Shawe-Taylor, write that Saint-Saëns's brilliant musicianship was "instrumental in drawing the attention of French musicians to the fact that there are other forms of music besides opera." In the 2001 edition of Grove's Dictionary, Ratner and Daniel Fallon, analysing Saint-Saëns's orchestral music, rate the unnumbered Symphony in A (c. 1850) as the most ambitious of the composer's juvenilia. Of the works of his maturity, the First Symphony (1853) is a serious and large-scale work, in which the influence of Schumann is detectable. 
The "Urbs Roma" Symphony (1856, unnumbered) in some ways represents a backward step, being less deftly orchestrated, and "thick and heavy" in its effect. Ratner and Fallon praise the Second Symphony (1859) as a fine example of orchestral economy and structural cohesion, with passages that show the composer's mastery of fugal writing. The best known of the symphonies is the Third (1886) which, unusually, has prominent parts for piano and organ. It opens in C minor and ends in C major with a stately chorale tune. The four movements are clearly divided into two pairs, a practice Saint-Saëns used elsewhere, notably in the Fourth Piano Concerto (1875) and the First Violin Sonata (1885). The work is dedicated to the memory of Liszt, and uses a recurring motif treated in a Lisztian style of thematic transformation. Saint-Saëns's four symphonic poems follow the model of those by Liszt, though, in Sackville-West's and Shawe-Taylor's view, without the "vulgar blatancy" to which the earlier composer was prone. The most popular of the four is Danse macabre (1874) depicting skeletons dancing at midnight. Saint-Saëns generally achieved his orchestral effects by deft harmonisation rather than exotic instrumentation, but in this piece he featured the xylophone prominently, representing the rattling bones of the dancers. Le Rouet d'Omphale (1871) was composed soon after the horrors of the Commune, but its lightness and delicate orchestration give no hint of recent tragedies. Rees rates Phaëton (1873) as the finest of the symphonic poems, belying the composer's professed indifference to melody, and inspired in its depiction of the mythical hero and his fate. A critic at the time of the premiere took a different view, hearing in the piece "the noise of a hack coming down from Montmartre" rather than the galloping fiery horses of Greek legend that inspired the piece. The last of the four symphonic poems, La jeunesse d'Hercule ("Hercules's Youth", 1877) was the most ambitious of the four, which, Harding suggests, is why it is the least successful. In the judgment of the critic Roger Nichols these orchestral works, which combine striking melodies, strength of construction and memorable orchestration "set new standards for French music and were an inspiration to such young composers as Ravel". Saint-Saëns wrote a one-act ballet, Javot (1896), the score for the film L'assassinat du duc de Guise (1908), and incidental music to a dozen plays between 1850 and 1916. Three of these scores were for revivals of classics by Molière and Racine, for which Saint-Saëns's deep knowledge of French baroque scores was reflected in his scores, in which he incorporated music by Lully and Charpentier. ### Concertante works Saint-Saëns was the first major French composer to write piano concertos. His First, in D (1858), in conventional three-movement form, is not well known, but the Second, in G minor (1868) is one of his most popular works. The composer experimented with form in this piece, replacing the customary sonata form first movement with a more discursive structure, opening with a solemn cadenza. The scherzo second movement and presto finale are in such contrast with the opening that the pianist Zygmunt Stojowski commented that the work "begins like Bach and ends like Offenbach". The Third Piano Concerto, in E (1869) has another high-spirited finale, but the earlier movements are more classical, the texture clear, with graceful melodic lines. 
The Fourth, in C minor (1875) is probably the composer's best-known piano concerto after the Second. It is in two movements, each comprising two identifiable sub-sections, and maintains a thematic unity not found in the composer's other piano concertos. According to some sources it was this piece that so impressed Gounod that he dubbed Saint-Saëns "the Beethoven of France" (other sources base that distinction on the Third Symphony). The Fifth and last piano concerto, in F major, was written in 1896, more than twenty years after its predecessor. The work is known as the "Egyptian" concerto; it was written while the composer was wintering in Luxor, and incorporates a tune he heard Nile boatmen singing. The First Cello Concerto, in A minor (1872) is a serious although animated work, in a single continuous movement with an unusually turbulent first section. It is among the most popular concertos in the cello repertory, much favoured by Pablo Casals and later players. The Second, in D minor (1902), like the Fourth Piano Concerto, consists of two movements each subdivided into two distinct sections. It is more purely virtuosic than its predecessor: Saint-Saëns commented to Fauré that it would never be as popular as the First because it was too difficult. There are three violin concertos; the first to be composed dates from 1858 but was not published until 1879, as the composer's Second, in C major. The First, in A, was also completed in 1858. It is a short work, its single 314-bar movement lasting less than a quarter of an hour. The Second, in conventional three-movement concerto form, is twice as long as the First, and is the least popular of the three: the thematic catalogue of the composer's works lists only three performances in his lifetime. The Third, in B minor, written for Pablo de Sarasate, is technically challenging for the soloist, although the virtuoso passages are balanced by intervals of pastoral serenity. It is by some margin the most popular of the three violin concertos, but Saint-Saëns's best-known concertante work for violin and orchestra is probably the Introduction and Rondo Capriccioso, in A minor, Op. 28, a single-movement piece, also written for Sarasate, dating from 1863. It changes from a wistful and tense opening to a swaggering main theme, described as faintly sinister by the critic Gerald Larner, who goes on, "After a multi-stopped cadenza ... the solo violin makes a breathless sprint through the coda to the happy ending in A major". ### Operas Discounting his collaboration with Dukas in the completion of Guiraud's unfinished Frédégonde, Saint-Saëns wrote twelve operas, two of which are opéras comiques. During the composer's lifetime his Henry VIII became a repertory piece; since his death only Samson et Dalila has been regularly staged, although according to Schonberg, Ascanio (1890) is considered by experts to be a much finer work. The critic Ronald Crichton writes that for all his experience and musical skill, Saint-Saëns "lacked the 'nose' of the theatre animal granted, for example, to Massenet who in other forms of music was his inferior". In a 2005 study, the musical scholar Steven Huebner contrasts the two composers: "Saint-Saëns obviously had no time for Massenet's histrionics". Saint-Saëns's biographer James Harding comments that it is regrettable that the composer did not attempt more works of a light-hearted nature, on the lines of La princesse jaune, which Harding describes as like Sullivan "with a light French touch". 
Although most of Saint-Saëns's operas have remained neglected, Crichton rates them as important in the history of French opera, as "a bridge between Meyerbeer and the serious French operas of the early 1890s". In his view, the operatic scores of Saint-Saëns have, in general, the strengths and weaknesses of the rest of his music – "lucid Mozartian transparency, greater care for form than for content ... There is a certain emotional dryness; invention is sometimes thin, but the workmanship is impeccable." Stylistically, Saint-Saëns drew on a range of models. From Meyerbeer he drew the effective use of the chorus in the action of a piece; for Henry VIII he included Tudor music he had researched in London; in La princesse jaune he used an oriental pentatonic scale; from Wagner he derived the use of leitmotifs, which, like Massenet, he used sparingly. Huebner observes that Saint-Saëns was more conventional than Massenet so far as through-composition is concerned, more often favouring discrete arias and ensembles, with less variety of tempo within individual numbers. In a survey of recorded opera Alan Blyth writes that Saint-Saëns "certainly learned much from Handel, Gluck, Berlioz, the Verdi of Aida, and Wagner, but from these excellent models he forged his own style." ### Other vocal music From the age of six and for the rest of his life Saint-Saëns composed mélodies, writing more than 140. He regarded his songs as thoroughly and typically French, denying any influence from Schubert or other German composers of Lieder. Unlike his protégé Fauré, or his rival Massenet, he was not drawn to the song cycle, writing only two during his long career – Mélodies persanes ("Persian Songs", 1870) and La Cendre rouge ("The Red Ash Tree", 1914, dedicated to Fauré). The poet whose works he set most often was Victor Hugo; others included Alphonse de Lamartine, Pierre Corneille, Amable Tastu, and, in eight songs, Saint-Saëns himself: among his many non-musical talents he was an amateur poet. He was highly sensitive to word setting, and told the young composer Lili Boulanger that to write songs effectively musical talent was not enough: "you must study the French language in depth; it is indispensable." Most of the mélodies are written for piano accompaniment, but a few, including "Le lever du soleil sur le Nil" ("Sunrise over the Nile", 1898) and "Hymne à la paix" ("Hymn to Peace", 1919), are for voice and orchestra. His settings, and chosen verses, are generally traditional in form, contrasting with the free verse and less structured forms of a later generation of French composers, including Debussy. Saint-Saëns composed more than sixty sacred vocal works, ranging from motets to masses and oratorios. Among the larger-scale compositions are the Requiem (1878) and the oratorios Le déluge (1875) and The Promised Land (1913) with an English text by Herman Klein. He was proud of his connection with British choirs, commenting, "One likes to be appreciated in the home, par excellence, of oratorio." He wrote a smaller number of secular choral works, some for unaccompanied choir, some with piano accompaniment and some with full orchestra. In his choral works, Saint-Saëns drew heavily on tradition, feeling that his models should be Handel, Mendelssohn and other earlier masters of the genre. In Klein's view, this approach was old-fashioned, and the familiarity of Saint-Saëns's treatment of the oratorio form impeded his success in it. 
### Solo keyboard Nichols comments that, although as a famous pianist Saint-Saëns wrote for the piano throughout his life, "this part of his oeuvre has made curiously little mark". Nichols excepts the Étude en forme de valse (1912), which he observes still attracts pianists eager to display their left-hand technique. Although Saint-Saëns was dubbed "the French Beethoven", and his Variations on a Theme of Beethoven in E♭ (1874) is his most extended work for unaccompanied piano, he did not emulate his predecessor in composing piano sonatas. He is not known even to have contemplated writing one. There are sets of bagatelles (1855), études (two sets – 1899 and 1912) and fugues (1920), but in general Saint-Saëns's works for the piano are single short pieces. In addition to established forms such as the song without words (1871) and the mazurka (1862, 1871 and 1882) popularised by Mendelssohn and Chopin, respectively, he wrote descriptive pieces such as "Souvenir d'Italie" (1887), "Les cloches du soir" ("Evening bells", 1889) and "Souvenir d'Ismaïlia" (1895). Unlike his pupil, Fauré, whose long career as a reluctant organist left no legacy of works for the instrument, Saint-Saëns published a modest number of pieces for organ solo. Some of them were written for use in church services – "Offertoire" (1853), "Bénédiction nuptiale" (1859), "Communion" (1859) and others. After he left the Madeleine in 1877 Saint-Saëns wrote ten more pieces for organ, mostly for concert use, including two sets of preludes and fugues (1894 and 1898). Some of the earlier works were written to be played on either the harmonium or the organ, and a few were primarily intended for the former. ### Chamber Saint-Saëns wrote more than forty chamber works between the 1840s and his last years. One of the first of his major works in the genre was the Piano Quintet (1855). It is a straightforward, confident piece, in a conventional structure with lively outer movements and a central movement containing two slow themes, one chorale-like and the other cantabile. The Septet (1880), for the unusual combination of trumpet, two violins, viola, cello, double bass and piano, is a neoclassical work that draws on 17th-century French dance forms. At the time of its composition Saint-Saëns was preparing new editions of the works of baroque composers including Rameau and Lully. The Caprice sur des airs danois et russes (1887) for flute, oboe, clarinet and piano, and the Barcarolle in F major (1898) for violin, cello, harmonium and piano are further examples of Saint-Saëns's sometimes unorthodox instrumentation. In Ratner's view, the most important of Saint-Saëns's chamber works are the sonatas: two for violin, two for cello, and one each for oboe, clarinet and bassoon, all seven with piano accompaniment. The First Violin Sonata dates from 1885, and is rated by Grove's Dictionary as one of the composer's best and most characteristic compositions. The Second (1896) signals a stylistic change in Saint-Saëns's work, with a lighter, clearer sound for the piano, characteristic of his music from then onwards. The First Cello Sonata (1872) was written after the death of the composer's great-aunt, who had taught him to play the piano more than thirty years earlier. It is a serious work, in which the main melodic material is sustained by the cello over a virtuoso piano accompaniment. Fauré called it the only cello sonata from any country to be of any importance. 
The Second (1905) is in four movements, and has the unusual feature of a theme and variations as its scherzo. The woodwind sonatas are among the composer's last works and part of his efforts to expand the repertoire for instruments for which hardly any solo parts were written, as he confided to his friend Jean Chantavoine in a letter dated 15 April 1921: "At the moment I am concentrating my last reserves on giving rarely considered instruments the chance to be heard." Ratner writes of them, "The spare, evocative, classical lines, haunting melodies, and superb formal structures underline these beacons of the neoclassical movement." Gallois comments that the Oboe Sonata begins like a conventional classical sonata, with an andantino theme; the central section has rich and colourful harmonies, and the molto allegro finale is full of delicacy, humour and charm in the form of a tarantella. For Gallois the Clarinet Sonata is the most important of the three: he calls it "a masterpiece full of impishness, elegance and discreet lyricism" amounting to "a summary of the rest". The work contrasts a "doleful threnody" in the slow movement with the finale, which "pirouettes in 4/4 time", in a style reminiscent of the 18th century. The same commentator calls the Bassoon Sonata "a model of transparency, vitality and lightness", containing humorous touches but also moments of peaceful contemplation. Saint-Saëns also expressed an intention to write a sonata for the cor anglais, but did not do so. The composer's most famous work, The Carnival of the Animals (1886), although far from a typical chamber piece, is written for eleven players, and is considered by Grove's Dictionary to be part of Saint-Saëns's chamber output. Grove rates it as "his most brilliant comic work, parodying Offenbach, Berlioz, Mendelssohn, Rossini, his own Danse macabre and several popular tunes". He forbade performances of it during his lifetime, concerned that its frivolity would damage his reputation as a serious composer. ### Recordings Saint-Saëns was a pioneer in recorded music. In June 1904 The Gramophone Company of London sent its producer Fred Gaisberg to Paris to record Saint-Saëns as accompanist to the mezzo-soprano Meyriane Héglon in arias from Ascanio and Samson et Dalila, and as soloist in his own piano music, including an arrangement of sections of the Second Piano Concerto (without orchestra). Saint-Saëns made more recordings for the company in 1919. In the early days of the LP record, Saint-Saëns's works were patchily represented on disc. The Record Guide (1955) lists one recording apiece of the Third Symphony, Second Piano Concerto and First Cello Concerto, alongside several versions of Danse Macabre, The Carnival of the Animals, the Introduction and Rondo Capriccioso and other short orchestral works. In the latter part of the 20th century and the early 21st, many more of the composer's works were released on LP and later CD and DVD. The 2008 Penguin Guide to Recorded Classical Music contains ten pages of listings of Saint-Saëns works, including all the concertos, symphonies, symphonic poems, sonatas and quartets. Also listed are an early Mass, collections of organ music, and choral songs. A recording of twenty-seven of Saint-Saëns's mélodies was released in 1997. With the exception of Samson et Dalila the operas have been sparsely represented on disc. A recording of Henry VIII was issued on CD and DVD in 1992. Hélène was released on CD in 2008. 
There are several recordings of Samson et Dalila, under conductors including Sir Colin Davis, Georges Prêtre, Daniel Barenboim and Myung-Whun Chung. In the early 2020s the Centre de musique romantique française's Bru Zane label issued new recordings of Le Timbre d'argent (conducted by François-Xavier Roth, 2020), La Princesse jaune (Leo Hussain, 2021), and Phryné (Hervé Niquet, 2022). ## Honours and reputation Saint-Saëns was made a Chevalier of the Legion of Honour in 1867 and promoted to Officier in 1884, and Grand Croix in 1913. Foreign honours included the British Royal Victorian Order (CVO) in 1902, the Monégasque Order of Saint-Charles in 1904 and honorary doctorates from the universities of Cambridge (1893) and Oxford (1907). In its obituary notice, The Times commented: > The death of M. Saint-Saëns not only deprives France of one of her most distinguished composers; it removes from the world the last representative of the great movements in music which were typical of the 19th century. He had maintained so vigorous a vitality and kept in such close touch with present-day activities that, though it had become customary to speak of him as the doyen of French composers, it was easy to forget the place he actually took in musical chronology. He was only two years younger than Brahms, was five years older than Tchaikovsky, six years older than Dvořák, and seven years older than Sullivan. He held a position in his own country's music certain aspects of which may be fitly compared with each of those masters in their own spheres. In a short poem, "Mea culpa", published in 1890, Saint-Saëns accused himself of lack of decadence, and commented approvingly on the excessive enthusiasms of youth, lamenting that such things were not for him. An English commentator quoted the poem in 1910, observing, "His sympathies are with the young in their desire to push forward, because he has not forgotten his own youth when he championed the progressive ideals of the day." The composer sought a balance between innovation and traditional form. The critic Henry Colles wrote, a few days after the composer's death: > In his desire to maintain "the perfect equilibrium" we find the limitation of Saint-Saëns's appeal to the ordinary musical mind. Saint-Saëns rarely, if ever, takes any risks; he never, to use the slang of the moment, "goes off the deep end". All his greatest contemporaries did. Brahms, Tchaikovsky, and even Franck, were ready to sacrifice everything for the end each wanted to reach, to drown in the attempt to get there if necessary. Saint-Saëns, in preserving his equilibrium, allows his hearers to preserve theirs. Grove concludes its article on Saint-Saëns with the observation that although his works are remarkably consistent, "it cannot be said that he evolved a distinctive musical style. Rather, he defended the French tradition that threatened to be engulfed by Wagnerian influences and created the environment that nourished his successors". Since the composer's death writers sympathetic to his music have expressed regret that he is known by the musical public for only a handful of his scores such as The Carnival of the Animals, the Second Piano Concerto, the Third Violin Concerto, the Organ Symphony, Samson et Dalila, Danse macabre and the Introduction and Rondo Capriccioso. Among his large output, Nicholas singles out the Requiem, the Christmas Oratorio, the ballet Javotte, the Piano Quartet, the Septet for trumpet, piano and strings, and the First Violin Sonata as neglected masterpieces. 
In 2004, the cellist Steven Isserlis said, "Saint-Saëns is exactly the sort of composer who needs a festival to himself ... there are Masses, all of which are interesting. I've played all his cello music and there isn't one bad piece. His works are rewarding in every way. And he's an endlessly fascinating figure." ## See also - Camille Awards ## Notes, references and sources
954
Albert Speer
1,171,998,592
German architect and Minister of War Production
[ "1905 births", "1981 deaths", "20th-century German architects", "20th-century German male writers", "Albert Speer", "Architects from Mannheim", "Architects in the Nazi Party", "Articles containing video clips", "German memoirists", "German neoclassical architects", "German people convicted of crimes against humanity", "Holocaust perpetrators", "Karlsruhe Institute of Technology alumni", "Members of the Prussian State Council (Nazi Germany)", "Members of the Reichstag of Nazi Germany", "Nazi Germany ministers", "Nazi Party officials", "Neurological disease deaths in England", "People convicted by the International Military Tribunal in Nuremberg", "People from the Grand Duchy of Baden", "Politicians from Mannheim", "Recipients of the Knights Cross of the War Merit Cross", "Speer family", "Technical University of Berlin alumni", "Technical University of Munich alumni" ]
Berthold Konrad Hermann Albert Speer (/ʃpɛər/; 19 March 1905 – 1 September 1981) was a German architect who served as the Minister of Armaments and War Production in Nazi Germany during most of World War II. A close ally of Adolf Hitler, he was convicted at the Nuremberg trial and sentenced to 20 years in prison. An architect by training, Speer joined the Nazi Party in 1931. His architectural skills made him increasingly prominent within the Party, and he became a member of Hitler's inner circle. Hitler commissioned him to design and construct structures including the Reich Chancellery and the Nazi party rally grounds in Nuremberg. In 1937, Hitler appointed Speer as General Building Inspector for Berlin. In this capacity he was responsible for the Central Department for Resettlement that evicted Jewish tenants from their homes in Berlin. In February 1942, Speer was appointed as Reich Minister of Armaments and War Production. Using misleading statistics, he promoted himself as having performed an armaments miracle that was widely credited with keeping Germany in the war. In 1944, Speer established a task force to increase production of fighter aircraft. It became instrumental in exploiting slave labor for the benefit of the German war effort. After the war, Albert Speer was among the 24 "major war criminals" charged with the crimes of the Nazi regime before the International Military Tribunal. He was found guilty of war crimes and crimes against humanity, principally for the use of slave labor, narrowly avoiding a death sentence. Having served his full term, Speer was released in 1966. He used his writings from the time of imprisonment as the basis for two autobiographical books, Inside the Third Reich and Spandau: The Secret Diaries. Speer's books were a success; the public was fascinated by an inside view of the Third Reich. Speer died of a stroke in 1981. Little remains of his personal architectural work. Through his autobiographies and interviews, Speer carefully constructed an image of himself as a man who deeply regretted having failed to discover the monstrous crimes of the Third Reich. He continued to deny explicit knowledge of, and responsibility for, the Holocaust. This image dominated his historiography in the decades following the war, giving rise to the "Speer Myth": the perception of him as an apolitical technocrat responsible for revolutionizing the German war machine. The myth began to fall apart in the 1980s, when the armaments miracle was attributed to Nazi propaganda. Adam Tooze wrote in The Wages of Destruction that the idea that Speer was an apolitical technocrat was "absurd". Martin Kitchen, writing in Speer: Hitler's Architect, stated that much of the increase in Germany's arms production was actually due to systems instituted by Speer's predecessor (Fritz Todt) and furthermore that Speer was intimately involved in the "Final Solution". ## Early years and personal life Speer was born in Mannheim, into an upper-middle-class family. He was the second of three sons of Luise Máthilde Wilhelmine (Hommel) and Albert Friedrich Speer. In 1918, the family leased their Mannheim residence and moved to a home they had in Heidelberg. Henry T. King, deputy prosecutor at the Nuremberg trials, who later wrote a book about Speer, said, "Love and warmth were lacking in the household of Speer's youth." His brothers, Ernst and Hermann, bullied him throughout his childhood. Speer was active in sports, taking up skiing and mountaineering. 
He followed in the footsteps of his father and grandfather and studied architecture. Speer began his architectural studies at the University of Karlsruhe instead of a more highly acclaimed institution because the hyperinflation crisis of 1923 limited his parents' income. In 1924, when the crisis had abated, he transferred to the "much more reputable" Technical University of Munich. In 1925, he transferred again, this time to the Technical University of Berlin where he studied under Heinrich Tessenow, whom Speer greatly admired. After passing his exams in 1927, Speer became Tessenow's assistant, a high honor for a man of 22. As such, Speer taught some of his classes while continuing his own postgraduate studies. In Munich Speer began a close friendship, ultimately spanning over 50 years, with Rudolf Wolters, who also studied under Tessenow. In mid-1922, Speer began courting Margarete (Margret) Weber (1905–1987), the daughter of a successful craftsman who employed 50 workers. The relationship was frowned upon by Speer's class-conscious mother, who felt the Webers were socially inferior. Despite this opposition, the two married in Berlin on 28 August 1928; seven years elapsed before Margarete was invited to stay at her in-laws' home. The couple would have six children together, but Albert Speer grew increasingly distant from his family after 1933. He remained so even after his release from imprisonment in 1966, despite their efforts to forge closer bonds. ## Party architect and government functionary ### Joining the Nazis (1931–1934) In January 1931, Speer applied for Nazi Party membership, and on 1 March 1931, he became member number 474,481. The same year, with stipends shrinking amid the Depression, Speer surrendered his position as Tessenow's assistant and moved to Mannheim, hoping to make a living as an architect. After he failed to do so, his father gave him a part-time job as manager of his properties. In July 1932, the Speers visited Berlin to help out the Party before the Reichstag elections. While they were there his friend, Nazi Party official Karl Hanke recommended the young architect to Joseph Goebbels to help renovate the Party's Berlin headquarters. When the commission was completed, Speer returned to Mannheim and remained there as Hitler took office in January 1933. The organizers of the 1933 Nuremberg Rally asked Speer to submit designs for the rally, bringing him into contact with Hitler for the first time. Neither the organizers nor Rudolf Hess were willing to decide whether to approve the plans, and Hess sent Speer to Hitler's Munich apartment to seek his approval. This work won Speer his first national post, as Nazi Party "Commissioner for the Artistic and Technical Presentation of Party Rallies and Demonstrations". Shortly after Hitler came into power, he began to make plans to rebuild the chancellery. At the end of 1933, he contracted Paul Troost to renovate the entire building. Hitler appointed Speer, whose work for Goebbels had impressed him, to manage the building site for Troost. As Chancellor, Hitler had a residence in the building and came by every day to be briefed by Speer and the building supervisor on the progress of the renovations. After one of these briefings, Hitler invited Speer to lunch, to the architect's great excitement. Speer quickly became part of Hitler's inner circle; he was expected to call on him in the morning for a walk or chat, to provide consultation on architectural matters, and to discuss Hitler's ideas. Most days he was invited to dinner. 
In the English version of his memoirs, Speer says that his political commitment merely consisted of paying his "monthly dues". He assumed his German readers would not be so gullible and told them the Nazi Party offered a "new mission". He was more forthright in an interview with William Hamsher in which he said he joined the party in order to save "Germany from Communism". After the war, he claimed to have had little interest in politics at all and had joined almost by chance. Like many of those in power in the Third Reich, he was not an ideologue, "nor was he anything more than an instinctive anti-Semite." The historian Magnus Brechtken, discussing Speer, said he did not give anti-Jewish public speeches and that his anti-Semitism can best be understood through his actions—which were anti-Semitic. Brechtken added that, throughout Speer's life, his central motives were to gain power, rule, and acquire wealth.
### Nazi architect (1934–1937)
When Troost died on 21 January 1934, Speer effectively replaced him as the Party's chief architect. Hitler appointed Speer as head of the Chief Office for Construction, which placed him nominally on Hess's staff. One of Speer's first commissions after Troost's death was the Zeppelinfeld stadium in Nuremberg. It was used for Nazi propaganda rallies and can be seen in Leni Riefenstahl's propaganda film Triumph of the Will. The building was able to hold 340,000 people. Speer insisted that as many events as possible be held at night, both to give greater prominence to his lighting effects and to hide the overweight Nazis. Nuremberg was the site of many official Nazi buildings. Many more buildings were planned. If built, the German Stadium in Nuremberg would have accommodated 400,000 spectators. Speer modified Werner March's design for the Olympic Stadium being built for the 1936 Summer Olympics. He added a stone exterior that pleased Hitler. Speer designed the German Pavilion for the 1937 international exposition in Paris.
### Berlin's General Building Inspector (1937–1942)
On 30 January 1937, Hitler appointed Speer as General Building Inspector for the Reich Capital. This carried with it the rank of State Secretary in the Reich government and gave him extraordinary powers over the Berlin city government. He was to report directly to Hitler, and was independent of both the mayor and the Gauleiter of Berlin. Hitler ordered Speer to develop plans to rebuild Berlin. These centered on a three-mile-long grand boulevard running from north to south, which Speer called the Prachtstrasse, or Street of Magnificence; he also referred to it as the "North–South Axis". At the northern end of the boulevard, Speer planned to build the Volkshalle, a huge domed assembly hall over 700 feet (210 m) high, with floor space for 180,000 people. At the southern end of the avenue, a great triumphal arch, almost 400 feet (120 m) high and able to fit the Arc de Triomphe inside its opening, was planned. The existing Berlin railroad termini were to be dismantled, and two large new stations built. Speer hired Wolters as part of his design team, with special responsibility for the Prachtstrasse. The outbreak of World War II in 1939 led to the postponement, and later the abandonment, of these plans, which, after the Nazi capitulation, Speer himself considered "awful". Plans to build a new Reich chancellery had been underway since 1934. Land had been purchased by the end of 1934, and starting in March 1936 the first buildings were demolished to create space at Voßstraße.
Speer was involved virtually from the beginning. In the aftermath of the Night of the Long Knives, he had been commissioned to renovate the Borsig Palace on the corner of Voßstraße and Wilhelmstraße as headquarters of the Sturmabteilung (SA). He completed the preliminary work for the new chancellery by May 1936. In June 1936 he charged a personal honorarium of 30,000 Reichsmark and estimated the chancellery would be completed within three to four years. Detailed plans were completed in July 1937, and the first shell of the new chancellery was complete on 1 January 1938. On 27 January 1938, Speer received plenipotentiary powers from Hitler to finish the new chancellery by 1 January 1939. For propaganda purposes, Hitler claimed during the topping-out ceremony on 2 August 1938 that he had ordered Speer to complete the new chancellery that year. Shortages of labor meant the construction workers had to work in ten-to-twelve-hour shifts. The SS built two concentration camps in 1938 and used the inmates to quarry stone for its construction. A brick factory was built near the Oranienburg concentration camp at Speer's behest; when someone commented on the poor conditions there, Speer stated, "The Yids got used to making bricks while in Egyptian captivity". The chancellery was completed in early January 1939. The building itself was hailed by Hitler as the "crowning glory of the greater German political empire". During the Chancellery project, the pogrom of Kristallnacht took place. Speer made no mention of it in the first draft of Inside the Third Reich. It was only on the urgent advice of his publisher that he added a mention of seeing the ruins of the Central Synagogue in Berlin from his car. Kristallnacht accelerated Speer's ongoing efforts to dispossess Berlin's Jews from their homes. From 1939 on, Speer's Department used the Nuremberg Laws to evict Jewish tenants of non-Jewish landlords in Berlin, to make way for non-Jewish tenants displaced by redevelopment or bombing. Eventually, 75,000 Jews were displaced by these measures. Speer denied he knew they were being put on Holocaust trains and claimed that those displaced were "completely free and their families were still in their apartments". He also said: " ... en route to my ministry on the city highway, I could see ... crowds of people on the platform of nearby Nikolassee Railroad Station. I knew that these must be Berlin Jews who were being evacuated. I am sure that an oppressive feeling struck me as I drove past. I presumably had a sense of somber events." Matthias Schmidt said Speer had personally inspected concentration camps and described his comments as an "outright farce". Martin Kitchen described Speer's often-repeated line that he knew nothing of the "dreadful things" as hollow—because not only was he fully aware of the fate of the Jews, he was actively participating in their persecution. As Germany started World War II in Europe, Speer instituted quick-reaction squads to construct roads or clear away debris; before long, these units would be used to clear bomb sites. Speer used forced Jewish labor on these projects, in addition to regular German workers. Construction stopped on the Berlin and Nuremberg plans at the outbreak of war. Though stockpiling of materials and other work continued, this slowed to a halt as more resources were needed for the armament industry. Speer's offices undertook building work for each branch of the military, and for the SS, using slave labor. Speer's building work made him among the wealthiest of the Nazi elite.
## Minister of Armaments ### Appointment and increasing power As one of the younger and more ambitious men in Hitler's inner circle, Speer was approaching the height of his power. In 1938, Prussian Minister President Hermann Göring had appointed him to the Prussian State Council. In 1941, he was elected to the Reichstag from electoral constituency 2 (Berlin–West). On 8 February 1942, Reich Minister of Armaments and Munitions Fritz Todt died in a plane crash shortly after taking off from Hitler's eastern headquarters at Rastenburg. Speer arrived there the previous evening and accepted Todt's offer to fly with him to Berlin. Speer cancelled some hours before take-off because the previous night he had been up late in a meeting with Hitler. Hitler appointed Speer in Todt's place. Martin Kitchen, a British historian, says that the choice was not surprising. Speer was loyal to Hitler, and his experience building prisoner of war camps and other structures for the military qualified him for the job. Speer succeeded Todt not only as Reich Minister but in all his other powerful positions, including Inspector General of German Roadways, Inspector General for Water and Energy and Head of the Nazi Party's Office of Technology. At the same time, Hitler also appointed Speer as head of the Organisation Todt, a massive, government-controlled construction company. Characteristically Hitler did not give Speer any clear remit; he was left to fight his contemporaries in the regime for power and control. As an example, he wanted to be given power over all armaments issues under Göring's Four Year Plan. Göring was reluctant to grant this. However Speer secured Hitler's support, and on 1 March 1942, Göring signed a decree naming Speer "General Plenipotentiary for Armament Tasks" in the Four Year Plan. Speer proved to be ambitious, unrelenting and ruthless. Speer set out to gain control not just of armaments production in the army, but in the whole armed forces. It did not immediately dawn on his political rivals that his calls for rationalization and reorganization were hiding his desire to sideline them and take control. By April 1942, Speer had persuaded Göring to create a three-member Central Planning Board within the Four Year Plan, which he used to obtain supreme authority over procurement and allocation of raw materials and scheduling of production in order to consolidate German war production in a single agency. Speer was fêted at the time, and in the post-war era, for performing an "armaments miracle" in which German war production dramatically increased. This miracle was brought to a halt in the summer of 1943 by, among other factors, the first sustained Allied bombing. Other factors probably contributed to the increase more than Speer himself. Germany's armaments production had already begun to result in increases under his predecessor, Todt. Naval armaments were not under Speer's supervision until October 1943, nor the Luftwaffe's armaments until June of the following year. Yet each showed comparable increases in production despite not being under Speer's control. Another factor that produced the boom in ammunition was the policy of allocating more coal to the steel industry. Production of every type of weapon peaked in June and July 1944, but there was now a severe shortage of fuel. After August 1944, oil from the Romanian fields was no longer available. Oil production became so low that any possibility of offensive action became impossible and weaponry lay idle. 
As Minister of Armaments, Speer was responsible for supplying weapons to the army. With Hitler's full agreement, he decided to prioritize tank production, and he was given unrivaled power to ensure success. Hitler was closely involved with the design of the tanks, but kept changing his mind about the specifications. This delayed the program, and Speer was unable to remedy the situation. In consequence, despite tank production having the highest priority, relatively little of the armaments budget was spent on it. This led to a significant German Army failure at the Battle of Prokhorovka, a major turning point on the Eastern Front against the Soviet Red Army. As head of Organisation Todt, Speer was directly involved in the construction and alteration of concentration camps. He agreed to expand Auschwitz and some other camps, allocating 13.7 million Reichsmarks for the work to be carried out. This allowed an extra 300 huts to be built at Auschwitz, increasing the total human capacity to 132,000. Included in the building works was material to build gas chambers, crematoria and morgues. The SS called this "Professor Speer's Special Programme". Speer realized that with six million workers drafted into the armed forces, there was a labor shortage in the war economy, and not enough workers for his factories. In response, Hitler appointed Fritz Sauckel as a "manpower dictator" to obtain new workers. Speer and Sauckel cooperated closely to meet Speer's labor demands. Hitler gave Sauckel a free hand to obtain labor, something that delighted Speer, who had requested 1,000,000 "voluntary" laborers to meet the need for armament workers. Sauckel had whole villages in France, Holland and Belgium forcibly rounded up and shipped to Speer's factories. Sauckel obtained new workers often using the most brutal methods. In occupied areas of the Soviet Union, that had been subject to partisan action, civilian men and women were rounded up en masse and sent to work forcibly in Germany. By April 1943, Sauckel had supplied 1,568,801 "voluntary" laborers, forced laborers, prisoners of war and concentration camp prisoners to Speer for use in his armaments factories. It was for the maltreatment of these people, that Speer was principally convicted at the Nuremberg Trials. ### Consolidation of arms production Following his appointment as Minister of Armaments, Speer was in control of armaments production solely for the Army. He coveted control of the production of armaments for the Luftwaffe and Kriegsmarine as well. He set about extending his power and influence with unexpected ambition. His close relationship with Hitler provided him with political protection, and he was able to outwit and outmaneuver his rivals in the regime. Hitler's cabinet was dismayed at his tactics, but, regardless, he was able to accumulate new responsibilities and more power. By July 1943, he had gained control of armaments production for the Luftwaffe and Kriegsmarine. In August 1943, he took control of most of the Ministry of Economics, to become, in Admiral Dönitz's words, "Europe's economic dictator". His formal title was changed on 2 September 1943, to "Reich Minister for Armaments and War Production". He had become one of the most powerful people in Nazi Germany. Speer and his hand-picked director of submarine construction Otto Merker believed that the shipbuilding industry was being held back by outdated methods, and revolutionary new approaches imposed by outsiders would dramatically improve output. 
This belief proved incorrect, and Speer and Merker's attempt to build the Kriegsmarine's new generation of submarines, the Type XXI and Type XXIII, as prefabricated sections at different facilities rather than at single dockyards contributed to the failure of this strategically important program. The designs were rushed into production, and the completed submarines were crippled by flaws which resulted from the way they had been constructed. While dozens of submarines were built, few ever entered service. In December 1943, Speer visited Organisation Todt workers in Lapland, where he seriously damaged his knee and was incapacitated for several months. He was under the dubious care of Professor Karl Gebhardt at a medical clinic called Hohenlychen where patients "mysteriously failed to survive". In mid-January 1944, Speer had a lung embolism and fell seriously ill. Concerned about retaining power, he did not appoint a deputy and continued to direct the work of the Armaments Ministry from his bedside. Speer's illness coincided with the Allied "Big Week", a series of bombing raids on the German aircraft factories that were a devastating blow to aircraft production. His political rivals used the opportunity to undermine his authority and damage his reputation with Hitler. He lost Hitler's unconditional support and began to lose power. In response to the Allied Big Week, Adolf Hitler authorized the creation of a Fighter Staff committee. Its aim was to ensure the preservation and growth of fighter aircraft production. The task force was established on 1 March 1944 by order of Speer, with support from Erhard Milch of the Reich Aviation Ministry. Production of German fighter aircraft more than doubled between 1943 and 1944. The growth, however, consisted in large part of models that were becoming obsolescent and proved easy prey for Allied aircraft. On 1 August 1944, Speer merged the Fighter Staff into a newly formed Armament Staff committee. The Fighter Staff committee was instrumental in bringing about the increased exploitation of slave labor in the war economy. The SS provided 64,000 prisoners for 20 separate projects from various concentration camps including Mittelbau-Dora. Prisoners worked for Junkers, Messerschmitt, Henschel and BMW, among others. To increase production, Speer introduced a system of punishments for his workforce. Those who feigned illness, slacked off, sabotaged production or tried to escape were denied food or sent to concentration camps. In 1944, this became endemic; over half a million workers were arrested. By this time, 140,000 people were working in Speer's underground factories. These factories were death-traps; discipline was brutal, with regular executions. There were so many corpses at the Dora underground factory, for example, that the crematorium was overwhelmed. Speer's own staff described the conditions there as "hell". The largest technological advance under Speer's command came through the rocket program. It began in 1932 but had not supplied any weaponry. Speer enthusiastically supported the program and in March 1942 made an order for A4 rockets, which would become the world's first ballistic missile, the V-2 rocket. The rockets were researched at a facility in Peenemünde along with the V-1 flying bomb. The V-2's first target was Paris on 8 September 1944. The program, while advanced, proved to be an impediment to the war economy. The large capital investment was not repaid in military effectiveness. The rockets were built at an underground factory at Mittelwerk.
Labor to build the A4 rockets came from the Mittelbau-Dora concentration camp. Of the 60,000 people who ended up at the camp 20,000 died, due to the appalling conditions. On 14 April 1944, Speer lost control of Organisation Todt to his Deputy, Franz Xaver Dorsch. He opposed the assassination attempt against Hitler on 20 July 1944. He was not involved in the plot, and played a minor role in the regime's efforts to regain control over Berlin after Hitler survived. After the plot Speer's rivals attacked some of his closest allies and his management system fell out of favor with radicals in the party. He lost yet more authority. ### Defeat of Nazi Germany Losses of territory and a dramatic expansion of the Allied strategic bombing campaign caused the collapse of the German economy from late 1944. Air attacks on the transport network were particularly effective, as they cut the main centres of production off from essential coal supplies. In January 1945, Speer told Goebbels that armaments production could be sustained for at least a year. However, he concluded that the war was lost after Soviet forces captured the important Silesian industrial region later that month. Nevertheless, Speer believed that Germany should continue the war for as long as possible with the goal of winning better conditions from the Allies than the unconditional surrender they insisted upon. During January and February, Speer claimed that his ministry would deliver "decisive weapons" and a large increase in armaments production which would "bring about a dramatic change on the battlefield". Speer gained control over the railways in February, and asked Heinrich Himmler to supply concentration camp prisoners to work on their repair. By mid-March, Speer had accepted that Germany's economy would collapse within the next eight weeks. While he sought to frustrate directives to destroy industrial facilities in areas at risk of capture, so that they could be used after the war, he still supported the war's continuation. Speer provided Hitler with a memorandum on 15 March, which detailed Germany's dire economic situation and sought approval to cease demolitions of infrastructure. Three days later, he also proposed to Hitler that Germany's remaining military resources be concentrated along the Rhine and Vistula rivers in an attempt to prolong the fighting. This ignored military realities, as the German armed forces were unable to match the Allies' firepower and were facing total defeat. Hitler rejected Speer's proposal to cease demolitions. Instead, he issued the "Nero Decree" on 19 March, which called for the destruction of all infrastructure as the army retreated. Speer was appalled by this order, and persuaded several key military and political leaders to ignore it. During a meeting with Speer on 28/29 March, Hitler rescinded the decree and gave him authority over demolitions. Speer ended them, though the army continued to blow up bridges. By April, little was left of the armaments industry, and Speer had few official duties. Speer visited the Führerbunker on 22 April for the last time. He met Hitler and toured the damaged Chancellery before leaving Berlin to return to Hamburg. On 29 April, the day before committing suicide, Hitler dictated a final political testament which dropped Speer from the successor government. Speer was to be replaced by his subordinate, Karl-Otto Saur. Speer was disappointed that Hitler had not selected him as his successor. 
After Hitler's death, Speer offered his services to the so-called Flensburg Government, headed by Hitler's successor, Karl Dönitz. He took a role in that short-lived regime as Minister of Industry and Production. Beginning on 10 May, Speer provided information to the Allies regarding the effects of the air war, and on a broad range of other subjects. On 23 May, two weeks after the surrender of German forces, British troops arrested the members of the Flensburg Government and brought Nazi Germany to a formal end.
## Post-war
### Nuremberg trial
Speer was taken to several internment centres for Nazi officials and interrogated. In September 1945, he was told that he would be tried for war crimes, and several days later, he was moved to Nuremberg and incarcerated there. Speer was indicted on four counts: participating in a common plan or conspiracy for the accomplishment of crimes against peace; planning, initiating and waging wars of aggression and other crimes against peace; war crimes; and crimes against humanity. The chief United States prosecutor, Robert H. Jackson of the U.S. Supreme Court, said, "Speer joined in planning and executing the program to dragoon prisoners of war and foreign workers into German war industries, which waxed in output while the workers waned in starvation." Speer's attorney, Hans Flächsner, successfully distinguished Speer from the other defendants and portrayed him as an artist thrust into political life who had always remained a non-ideologue. Speer was found guilty of war crimes and crimes against humanity, principally for the use of slave labor and forced labor. He was acquitted on the other two counts. He had claimed that he was unaware of Nazi extermination plans, and the Allies had no proof that he was aware. His claim was revealed to be false in private correspondence written in 1971 and publicly disclosed in 2007. On 1 October 1946, he was sentenced to 20 years' imprisonment. While three of the eight judges (the two Soviet judges and the American judge Francis Biddle) advocated the death penalty for Speer, the other judges did not, and a compromise sentence was reached after two days of discussions.
### Imprisonment
On 18 July 1947, Speer was transferred to Spandau Prison in Berlin to serve his prison term. There he was known as Prisoner Number Five. Speer's parents died while he was incarcerated. His father, who died in 1947, despised the Nazis and was silent upon meeting Hitler. His mother died in 1952. As a Nazi Party member, she had greatly enjoyed dining with Hitler. Wolters and longtime Speer secretary Annemarie Kempf, while not permitted direct communication with Speer in Spandau, did what they could to help his family and carry out the requests Speer put in letters to his wife—the only written communication he was officially allowed. Beginning in 1948, Speer had the services of Toni Proost, a sympathetic Dutch orderly, to smuggle mail and his writings. In 1949, Wolters opened a bank account for Speer and began fundraising among those architects and industrialists who had benefited from Speer's activities during the war. Initially, the funds were used only to support Speer's family, but increasingly the money was used for other purposes. They paid for Toni Proost to go on holiday, and for bribes to those who might be able to secure Speer's release. Once Speer became aware of the existence of the fund, he sent detailed instructions about what to do with the money. Wolters raised a total of DM158,000 for Speer over the final seventeen years of his sentence.
The prisoners were forbidden to write memoirs. Speer was able to have his writings sent to Wolters, however, and they eventually amounted to 20,000 pages. He had completed his memoirs by November 1953, which became the basis of Inside the Third Reich. In Spandau Diaries, Speer aimed to present himself as a tragic hero who had made a Faustian bargain for which he endured a harsh prison sentence. Much of Speer's energy was dedicated to keeping fit, both physically and mentally, during his long confinement. Spandau had a large enclosed yard where inmates were allocated plots of land for gardening. Speer created an elaborate garden complete with lawns, flower beds, shrubbery, and fruit trees. To make his daily walks around the garden more engaging Speer embarked on an imaginary trip around the globe. Carefully measuring distance travelled each day, he mapped distances to the real-world geography. He had walked more than 30,000 kilometres (19,000 mi), ending his sentence near Guadalajara, Mexico. Speer also read, studied architectural journals, and brushed up on English and French. In his writings, Speer claimed to have finished five thousand books while in prison. His sentence of twenty years amounted to 7,305 days, which only allotted one and a half days per book. Speer's supporters maintained calls for his release. Among those who pledged support for his sentence to be commuted were Charles de Gaulle and US diplomat George Wildman Ball. Willy Brandt was an advocate of his release, putting an end to the de-Nazification proceedings against him, which could have caused his property to be confiscated. Speer's efforts for an early release came to naught. The Soviet Union, having demanded a death sentence at trial, was unwilling to entertain a reduced sentence. Speer served a full term and was released at midnight on 1 October 1966. ### Release and later life Speer's release from prison was a worldwide media event. Reporters and photographers crowded both the street outside Spandau and the lobby of the Hotel Berlin where Speer spent the night. He said little, reserving most comments for a major interview published in Der Spiegel in November 1966. Although he stated he hoped to resume an architectural career, his sole project, a collaboration for a brewery, was unsuccessful. Instead, he revised his Spandau writings into two autobiographical books, Inside the Third Reich (in German, Erinnerungen, or Reminiscences) and Spandau: The Secret Diaries. He later published a work about Himmler and the SS which has been published in English as The Slave State: Heinrich Himmler's Masterplan for SS Supremacy or Infiltration: How Heinrich Himmler Schemed to Build an SS Industrial Empire (in German, Der Sklavenstaat - Meine Auseinandersetzung mit der SS). Speer was aided in shaping the works by Joachim Fest and Wolf Jobst Siedler from the publishing house Ullstein. He found himself unable to re-establish a relationship with his children, even with his son Albert who had also become an architect. According to Speer's daughter Hilde Schramm, "One by one my sister and brothers gave up. There was no communication." He supported Hermann, his brother, financially after the war. However, his other brother Ernst had died in the Battle of Stalingrad, despite repeated requests from his parents for Speer to repatriate him. Following his release from Spandau, Speer donated the Chronicle, his personal diary, to the German Federal Archives. It had been edited by Wolters and made no mention of the Jews. 
David Irving discovered discrepancies between the deceptively edited Chronicle and independent documents. Speer asked Wolters to destroy the material he had omitted from his donation, but Wolters refused and retained an original copy. Wolters' friendship with Speer deteriorated, and one year before Speer's death Wolters gave Matthias Schmidt access to the unedited Chronicle. Schmidt authored the first book that was highly critical of Speer. Speer's memoirs were a phenomenal success. The public was fascinated by an inside view of the Third Reich, and a major war criminal became a popular figure almost overnight. Importantly, he provided an alibi to older Germans who had been Nazis. If Speer, who had been so close to Hitler, had not known the full extent of the crimes of the Nazi regime and had just been "following orders", then they could tell themselves and others they too had done the same. So great was the need to believe this "Speer Myth" that Fest and Siedler were able to strengthen it—even in the face of mounting historical evidence to the contrary.
### Death
Speer made himself widely available to historians and other enquirers. In October 1973, he made his first trip to Britain, flying to London to be interviewed on the BBC Midweek programme. In the same year, he appeared on the television programme The World at War. Speer returned to London in 1981 to participate in the BBC Newsnight programme. He suffered a stroke and died in London on 1 September. He had remained married to his wife, but he had formed a relationship with a German woman living in London and was with her at the time of his death. His daughter, Margret Nissen, wrote in her 2005 memoirs that after his release from Spandau he spent all of his time constructing the "Speer Myth".
## The Speer myth
### The Good Nazi
After his release from Spandau, Speer portrayed himself as the "good Nazi". He was well-educated, middle class, and bourgeois, and could contrast himself with those who, in the popular mind, typified "bad Nazis". In his memoirs and interviews, he had distorted the truth and made so many major omissions that his lies became known as "myths". Speer even invented the circumstances of his own birth, falsely stating that he was born at midday amid crashes of thunder and the bells of the nearby Christ Church; in fact he was born between three and five o'clock, and the church was not built until some years later. Speer took his myth-making to a mass-media level, and his "cunning apologies" were reproduced frequently in post-war Germany. Isabell Trommer writes in her biography of Speer that Fest and Siedler were co-authors of Speer's memoirs and co-creators of his myths. In return they were paid handsomely in royalties and other financial inducements. Speer, Siedler and Fest had constructed a masterpiece; the image of the "good Nazi" remained in place for decades, despite historical evidence indicating that it was false. Speer had carefully constructed an image of himself as an apolitical technocrat who deeply regretted having failed to discover the monstrous crimes of the Third Reich. This construction was accepted almost at face value by historian Hugh Trevor-Roper when investigating the death of Adolf Hitler for British Intelligence and in writing The Last Days of Hitler.
Trevor-Roper frequently refers to Speer as "a technocrat [who] nourished a technocrat's philosophy", one who cared only for his building projects or his ministerial duties, and who thought that politics was irrelevant, at least until Hitler's Nero Decree which Speer, according to his own telling, worked assiduously to counter. Trevor-Roper – who calls Speer an administrative genius whose basic instincts were peaceful and constructive – does take Speer to task, however, for his failure to recognize the immorality of Hitler and Nazism, calling him "the real criminal of Nazi Germany": > For ten years he sat at the very centre of political power; his keen intelligence diagnosed the nature and observed the mutations of Nazi government and policy; he saw and despised the personalities around him; he heard their outrageous orders and understood their fantastic ambitions; but he did nothing. Supposing politics to be irrelevant, he turned aside and built roads and bridges and factories, while the logical consequences of government by madmen emerged. Ultimately, when their emergence involved the ruin of all his work, Speer accepted the consequences and acted. Then it was too late; Germany had been destroyed. After Speer's death, Matthias Schmidt published a book that demonstrated that Speer had ordered the eviction of Jews from their Berlin homes. By 1999, historians had amply demonstrated that Speer had lied extensively. Even so, public perceptions of Speer did not change substantially until Heinrich Breloer aired a biographical film on TV in 2004. The film began a process of demystification and critical reappraisal. Adam Tooze in his book The Wages of Destruction said Speer had manoeuvred himself through the ranks of the regime skillfully and ruthlessly and that the idea he was a technocrat blindly carrying out orders was "absurd". Trommer said he was not an apolitical technocrat; instead, he was one of the most powerful and unscrupulous leaders in the Nazi regime. Kitchen said he had deceived the Nuremberg Tribunal and post-war Germany. Brechtken said that if his extensive involvement in the Holocaust had been known at the time of his trial he would have been sentenced to death. The image of the good Nazi was supported by numerous Speer myths. In addition to the myth that he was an apolitical technocrat, he claimed he did not have full knowledge of the Holocaust or the persecution of the Jews. Another myth posits that Speer revolutionized the German war machine after his appointment as Minister of Armaments. He was credited with a dramatic increase in the shipment of arms that was widely reported as keeping Germany in the war. Another myth centered around a faked plan to assassinate Hitler with poisonous gas. The idea for this myth came to him after he recalled the panic when car fumes came through an air ventilation system. He fabricated the additional details. Brechtken wrote that his most brazen lie was fabricated during an interview with a French journalist in 1952. The journalist described an invented scenario in which Speer had refused Hitler's orders and Hitler had left with tears in his eyes. Speer liked the scenario so much that he wrote it into his memoirs. The journalist had unwittingly collaborated in one of his myths. Speer also sought to portray himself as an opponent of Hitler's leadership. Despite his opposition to the 20 July plot, he falsely claimed in his memoirs to have been sympathetic to the plotters. 
He maintained Hitler was cool towards him for the remainder of his life after learning they had included him on a list of potential ministers. This formed a key element of the myths Speer encouraged. Speer also falsely claimed that he had realised the war was lost at an early stage, and thereafter worked to preserve the resources needed for the civilian population's survival. In reality, he had sought to prolong the war until further resistance was impossible, thus contributing to the large number of deaths and the extensive destruction Germany suffered in the conflict's final months. ### Denial of responsibility Speer maintained at the Nuremberg trials and in his memoirs that he had no direct knowledge of the Holocaust. He admitted only to being uncomfortable around Jews in the published version of the Spandau Diaries. In his final statement at Nuremberg, Speer gave the impression of apologizing, although he did not directly admit any personal guilt and the only victim he mentioned was the German people. Historian Martin Kitchen states that Speer was actually "fully aware of what had happened to the Jews" and was "intimately involved in the 'Final Solution'". Brechtken said Speer only admitted to a generalized responsibility for the Holocaust to hide his direct and actual responsibility. Speer was photographed with slave laborers at Mauthausen concentration camp during a visit on 31 March 1943; he also visited Gusen concentration camp. Although survivor Francisco Boix testified at the Nuremberg trials about Speer's visit, Taylor writes that, had the photo been available, he would have been hanged. In 2005, The Daily Telegraph reported that documents had surfaced indicating that Speer had approved the allocation of materials for the expansion of Auschwitz concentration camp after two of his assistants inspected the facility on a day when almost a thousand Jews were massacred. Heinrich Breloer, discussing the construction of Auschwitz, said Speer was not just a cog in the work—he was the "terror itself". Speer did not deny being present at the Posen speeches to Nazi leaders at a conference in Posen (Poznań) on 6 October 1943, but claimed to have left the auditorium before Himmler said during his speech: "The grave decision had to be taken to cause this people to vanish from the earth", and later, "The Jews must be exterminated". Speer is mentioned several times in the speech, and Himmler addresses him directly. In 2007, The Guardian reported that a letter from Speer dated 23 December 1971, had been found in a collection of his correspondence with Hélène Jeanty, the widow of a Belgian resistance fighter. In the letter, Speer says, "There is no doubt—I was present as Himmler announced on October 6, 1943, that all Jews would be killed." ### Armaments miracle Speer was credited with an "armaments miracle". During the winter of 1941–42, in the light of Germany's disastrous defeat in the Battle of Moscow, the German leadership including Friedrich Fromm, Georg Thomas and Fritz Todt had come to the conclusion that the war could not be won. The rational position to adopt was to seek a political solution that would end the war without defeat. Speer in response used his propaganda expertise to display a new dynamism of the war economy. He produced spectacular statistics, claiming a sixfold increase in munitions production, a fourfold increase in artillery production, and he sent further propaganda to the newsreels of the country. He was able to curtail the discussion that the war should be ended. 
The armaments "miracle" was a myth; Speer had used statistical manipulation to support his claims. The production of armaments did go up; however, this was due to reorganization that had begun before Speer came to office, the relentless mobilization of slave labor, and a deliberate reduction in the quality of output to favor quantity. By July 1943, Speer's armaments propaganda had become irrelevant because a catalogue of dramatic defeats on the battlefield meant the prospect of losing the war could no longer be hidden from the German public.
## Architectural legacy
Little remains of Speer's personal architectural works, other than the plans and photographs. No buildings designed by Speer during the Nazi era are extant in Berlin, other than the four entrance pavilions and underpasses leading to the Victory Column or Siegessäule, and the Schwerbelastungskörper, a heavy load-bearing body built around 1941. The concrete cylinder, 14 metres (46 ft) high, was used to measure ground subsidence as part of feasibility studies for a massive triumphal arch and other large structures planned within Hitler's post-war renewal project for the city of Berlin as the world capital Germania. The cylinder is now a protected landmark and is open to the public. The tribune of the Zeppelinfeld stadium in Nuremberg, though partly demolished, can also be seen. During the war, the Speer-designed New Reich Chancellery was largely destroyed by air raids and in the Battle of Berlin. The exterior walls survived, but they were eventually dismantled by the Soviets. Unsubstantiated rumors have claimed that the remains were used for other building projects such as the Humboldt University, Mohrenstraße metro station and Soviet war memorials in Berlin.
## See also
- Speer Goes to Hollywood
- Downfall, 2004 German film where he was portrayed by actor Heino Ferch
- Legion Speer
- Transportflotte Speer
- Transportkorps Speer
- Hermann Giesler
30,955,593
Suillus salmonicolor
1,089,406,438
Species of fungus in the family Suillaceae
[ "Edible fungi", "Fungi described in 1874", "Fungi of Africa", "Fungi of Asia", "Fungi of Central America", "Fungi of North America", "Suillus" ]
Suillus salmonicolor, commonly known as the Slippery Jill, is a fungus in the family Suillaceae of the order Boletales. First described as a member of the genus Boletus in 1874, the species acquired several synonyms, including Suillus pinorigidus and Suillus subluteus, before it was assigned its current binomial name in 1983. It has not been determined with certainty whether S. salmonicolor is distinct from the species S. cothurnatus, described by Rolf Singer in 1945. S. salmonicolor is a mycorrhizal fungus—meaning it forms a symbiotic association with the roots of plants such that both organisms benefit from the exchange of nutrients. This symbiosis occurs with various species of pine, and the fruit bodies (or mushrooms) of the fungus appear scattered or in groups on the ground near the trees. The fungus is found in North America, Hawaii, Asia, the Caribbean, South Africa, Australia and Central America. It has been introduced to several of those locations via transplanted trees. The mushroom's dingy yellow to brownish cap is rounded to flattened in shape, slimy when wet, and grows up to 9.5 cm (3.7 in) wide. The small pores on the underside of the cap are yellow before becoming olive-brown. The stem is up to 10 cm (3.9 in) long and 1.6 cm (0.6 in) thick and is covered with reddish-brown glandular dots. Young specimens are covered with a grayish, slimy partial veil that later ruptures and leaves a sheathlike ring on the stem. Although the mushroom is generally considered edible—especially if the slimy cap cuticle and partial veil are first peeled off—opinions about flavor vary. Other similar Suillus species include S. acidus, S. subalutaceus, and S. intermedius. ## Taxonomy and phylogeny The species was first described scientifically by American mycologist Charles Christopher Frost in 1874 as Boletus salmonicolor, based on specimens he collected in the New England area of the United States. In a 1983 publication, mycologist Roy Halling declared Boletus subluteus (described by Charles Horton Peck in 1887; Ixocomus subluteus is a later combination based on this name) and Suillus pinorigidus (described by Wally Snell and Esther A. Dick in 1956) to be synonymous. Halling also reexamined Frost's type specimen of B. salmonicolor, and considered the taxon better placed in Suillus because of its glutinous cap, dotted stem, and ring; he formally transferred it to that genus, resulting in the combination Suillus salmonicolor. The specific epithet salmonicolor is a Latin color term meaning "pink with a dash of yellow". The mushroom is commonly known as the "slippery Jill". In a 1986 publication on Suillus taxonomy and nomenclature, Mary E. Palm and Elwin L. Stewart further discussed the synonymy of S. salmonicolor, S. subluteus, and S. pinorigidus. They noted that fruit bodies of S. subluteus collected in Minnesota did not have the strong salmon colors considered characteristic of S. salmonicolor, as well as collections that had been named S. pinorigidus; this is a morphological difference that could be sufficient to consider S. subluteus a distinct species. They explained that although the microscopic characteristics of the three taxa do not differ significantly, this is not unusual for Suillus and cannot be used as the sole proof of conspecificity. Palm and Stewart concluded that a study of specimens from various areas of their geographical ranges would be needed to fully resolve the taxonomy of these related species. 
There is some disagreement in the literature about whether Suillus cothurnatus represents a different species from S. salmonicolor. The online mycological taxonomy database MycoBank lists them as synonyms, contrary to Index Fungorum. In their 2000 monograph of North American boletes, Alan Bessette and colleagues list the two taxa separately, noting that the range of S. cothurnatus is difficult to determine because of confusion with S. salmonicolor. In a molecular analysis of Suillus phylogeny, based on the internal transcribed spacer, S. salmonicolor (as S. subluteus) and S. intermedius clustered together very closely, indicating a high degree of genetic similarity. This analysis was based on comparing the sequence differences in a single region of ribosomal DNA; more recent molecular analyses typically combine the analysis of several genes to increase the validity of inferences drawn.
## Description
The cap of S. salmonicolor is bluntly rounded or convex to nearly flattened, reaching a diameter of 3–9.5 cm (1.2–3.7 in). The cap surface is sticky to slimy when moist, but becomes shiny when dry. The cap color is variable, ranging from dingy yellow to yellowish-orange to ochraceous-salmon, cinnamon-brown or olive-brown to yellow-brown. The flesh is pale orange-yellow to orange-buff or orange, and does not stain when exposed to air. The odor and taste are not distinctive. The pore surface on the underside of the cap is yellow to dingy yellow, or yellowish orange to salmon, darkening to brownish with age; it also does not stain when bruised. The pores are circular to angular, numbering 1–2 per mm, and the tubes are 8–10 mm (0.3–0.4 in) deep. The stem is 2.5–10 cm (1.0–3.9 in) long, 6–16 mm (0.2–0.6 in) thick, and either equal in width throughout or slightly enlarged in the lower portion. It is whitish to yellowish or pinkish-ochre, and has reddish-brown to dark brown glandular dots and smears on the surface. Glandular dots are made of clumps of pigmented cells, and, unlike reticulation or scabers (small visible tufts of fibers that occur on the stems of other Suillus species), can be rubbed off with handling. The flesh is ochraceous to yellowish, often salmon-orange at the base of the stem. The partial veil that protects the developing gills is initially thick, baggy, and rubbery. It often has a conspicuously thickened cottony roll of tissue at its base, and sometimes flares outward from the stem on the lower portion. It forms a gelatinous ring on the upper to middle part of the stem. The spore print is cinnamon-brown to brown. The surface of the cap, when a drop of dilute potassium hydroxide (KOH) or ammonia solution (chemical reagents commonly used for mushroom identification) is applied to it, will first turn a fleeting pink color, then dark red as the flesh collapses. The spores are smooth, roughly ellipsoid in shape, inequilateral when viewed in profile, and measure 7.6–10 by 3–3.4 μm. They appear hyaline (translucent) to yellowish in a dilute solution of KOH, and cinnamon to pale ochraceous when stained with Melzer's reagent. The basidia are somewhat collapsed, hyaline, and 5–6 μm thick. The cystidia are scattered, sometimes arranged in clusters (especially on the gill edge), usually with an ochraceous-brown content, but occasionally hyaline. They are club-shaped to somewhat cylindric and measure 34–60 by 10–13 μm. The cuticle of the cap is an ixotrichodermium—a cellular arrangement where the outermost hyphae are gelatinous and emerge roughly parallel, like hairs, perpendicular to the cap surface.
These hyphae are hyaline and narrowly cylindric, measuring 1.4–3 μm in diameter. The stem surface is made of scattered bundles of caulocystidia (cystidia on the stem) that are brown or sometimes hyaline in KOH, club-shaped to subcylindrical bundles interspersed among hyaline cells. These bundles are underlain by a layer of gelatinous, hyaline, vertically oriented and parallel hyphae that are shaped like narrow cylinders. Clamp connections are absent from the hyphae.
### Edibility
The mushroom is edible, but removal of the slimy cap cuticle and partial veil is recommended to avoid possible gastrointestinal upset; similarly, the 1992 field guide Edible Wild Mushrooms of North America recommends the removal of the tube layer before preparation, as it can become slimy during cooking. Opinions about the quality of the mushroom vary. According to the book Boletes of North America, it is "very good" with a "lemony" flavor. A Canadian field guide is more cautious in its assessment, and suggests that one would have to be brave to consume a mushroom with such a sticky veil. Mycologist David Arora, in his Mushrooms Demystified, opines that it is not worth eating. Whatever its palatability to humans, the mushroom serves as a habitat for larvae of mycophagous insects such as the muscid fly Mydaea discimana and the scuttle fly Megaselia lutea.
### Similar species
Suillus intermedius, found in northeastern and northern North America, is similar in appearance to S. salmonicolor. It may be distinguished by a lighter-colored cap, cream to yellowish or pale ochraceous flesh, and a ring that is neither as thick nor as wide as that of S. salmonicolor. It is also larger, with a cap diameter of up to 16 cm (6.3 in), and its pore surface sometimes slowly stains reddish-brown when bruised. Although it has not been definitively established whether S. cothurnatus is a distinct species, several characteristics have been reported to differentiate it from S. salmonicolor: a thinner, less rubbery veil that usually lacks a thickened cottony roll at the base; glandular dots on the stem that consist of bundles of multiseptate hyphae in a parallel arrangement ending in an even row of large, sterile cystidia (60–140 μm long) that resemble basidia; and small hyaline cystidia shaped like swollen bottles with narrowed bases. Other Suillus species with which S. salmonicolor might be confused include S. acidus and S. subalutaceus. Both of these species have a less well-developed partial veil, and their flesh is a duller tone lacking yellow-orange tints.
## Ecology, habitat and distribution
Suillus salmonicolor occurs in a mycorrhizal association with various species of Pinus. This is a mutualistic relationship in which the subterranean fungal mycelium creates a protective sheath around the rootlets of the tree and a network of hyphae (the Hartig net) that penetrates between the tree's epidermal and cortical cells. This association helps the plant absorb water and mineral nutrients; in exchange, the fungus receives a supply of carbohydrates produced by the plant's photosynthesis. Two-, three-, and five-needled pines have all been recorded to associate with S. salmonicolor. In North America, the fungus has been found growing with P. banksiana, P. palustris, P. resinosa, P. rigida, P. strobus and P. taeda. In Kamchatka (in the Russian Far East) it has been found in association with P. pumila, in the Philippines with P. kesiya, and in southern India with P. patula.
The northern limit of its North American range is eastern Canada (Quebec), and the southern limit is Nuevo León and near Nabogame in Temósachi Municipality, Chihuahua, Mexico. Suillus salmonicolor has been collected from the Dominican Republic in the Caribbean, Japan, Taiwan, and from Mpumalanga, South Africa. Because there are no native Pinus species in South Africa, the fungus is assumed to be an exotic species that has been introduced via pine plantations. It has also been introduced to Australia, where it is known from a single collection made in a plantation of Caribbean pine (Pinus caribaea) in Queensland, and has been found growing with Caribbean pine in Belize. It is found in Hawaii under slash pine (Pinus elliottii), including lawns where those trees are used in landscaping. S. salmonicolor is one of several ectomycorrhizal species that have "traveled the thousands of kilometers from a mainland to Hawaii in the roots and soil of introduced seedlings."
## See also
- List of North American boletes
9,856,317
Shefali Shah
1,171,675,341
Indian film actress (born 1973)
[ "1973 births", "20th-century Indian actresses", "21st-century Indian actresses", "Actresses from Mumbai", "Actresses in Hindi cinema", "Actresses in Hindi television", "Best Supporting Actress National Film Award winners", "Filmfare Awards winners", "Indian film actresses", "Indian television actresses", "Living people", "Screen Awards winners", "Zee Cine Awards winners" ]
Shefali Shah (born Shefali Shetty on 22 May 1973) is an Indian actress of film, television and theatre. Respected for her acting prowess, she works primarily in independent Hindi films and has received local and foreign accolades for her performances. Shah's acting career started on the Gujarati stage before she debuted on television in 1993. After small parts on television and a brief stint with cinema in Rangeela (1995), she gained wider recognition in 1997 for her role in the popular series Hasratein. This was followed by lead roles in the TV series Kabhie Kabhie (1997) and Raahein (1999). A supporting role in the crime film Satya (1998) won her positive notice and a Filmfare Critics Award, and she soon shifted her focus to film acting starting with a lead role in the Gujarati drama Dariya Chhoru (1999). Unwilling to compromise her artistic convictions, Shah was selective about her roles through the following decades. This resulted in intermittent film work, mostly in character parts and often to appreciation from critics. She appeared in the international co-production Monsoon Wedding (2001) and the mainstream comedy-drama Waqt: The Race Against Time (2005). In 2007, her portrayal of Kasturba Gandhi in the biographical drama Gandhi, My Father won her the Best Actress prize at the Tokyo International Film Festival, and she received the National Film Award for Best Supporting Actress for the drama film The Last Lear. Among her subsequent film roles, she played a leading part in Kucch Luv Jaisaa (2011) and was noted for her work in the social problem film Lakshmi (2014) and the ensemble drama Dil Dhadakne Do (2015). Shah's career surged in the late 2010s as she transitioned to leading roles. She won a Filmfare Short Film Award for her performance in Juice (2017) and followed with two Netflix projects: the romantic drama Once Again (2018) and the International Emmy Award-winning crime miniseries Delhi Crime (2019). Her performance as DCP Vartika Chaturvedi in the latter met with widespread acclaim. Shah wrote and directed two self-starring COVID-19-themed short films in 2020: Someday and Happy Birthday Mummyji, and led the segment "Ankahi" in the anthology film Ajeeb Daastaans (2021). Five 2022 projects, including the Disney+ Hotstar webseries Human, the feature dramas Jalsa and Darlings, as well as the second season of Delhi Crime, brought Shah further recognition. ## Early and personal life Shefali Shetty was born on 22 May 1973 in Mumbai. She is the only child of Mangalorean Sudhakar Shetty, a banker at Reserve Bank of India (RBI), and his Gujarati wife Shobha, a homeopathy doctor. Shah is fluent in Tulu, Hindi, English, Marathi and Gujarati. The family resided in Santa Cruz, Mumbai at the RBI quarters, where she attended Arya Vidya Mandir School. While she was inclined to the arts as a child, including singing and dancing (she is trained in Bharatanatyam), she did not find particular interest in acting. Her first stint with acting happened on Gujarati stage when she was 10; her school teacher's playwright husband asked Shah's mother if she would permit her daughter to play a character based on Damien Thorn from The Omen (1976). Shah played the part upon her mother's consent and would not act again until several years later. After her schooling, she enrolled at Mithibai College in Vile Parle, opting to study science, but spent most of her student days working in theatre. Shah was married to television actor Harsh Chhaya from 1994 to 2000. 
In December 2000, she married director Vipul Amrutlal Shah, with whom she has two sons, Aryaman and Maurya. In addition to acting, Shah is fond of painting and cooking. Finding painting therapeutic, she says it gives her the creative outlet she craves when not acting in films. She trained for six months at Last Ship, an artists' residency in Bandra, and in 2016 took a course at Metàfora, an art school in Barcelona, Spain. Working mostly with acrylic on canvas as well as charcoal and ink, Shah focuses on perspective art, namely "the marriage of perspective with architectural designs" of places she has visited. She cites Mark Rothko and Jackson Pollock as her sources of inspiration. One of her paintings was on display at Jehangir Art Gallery in Mumbai at an exhibition held by Art for Concern, where it was eventually sold, while a solo show at The Monalisa Kalagram in Pune in 2017 was, by her own admission, unsuccessful. Shah opened a restaurant named Jalsa in Ahmedabad, Gujarat, in 2021, which serves Indian and international cuisines and offers customers different cultural and recreational activities, from pottery and henna decoration to musical performances such as Garba. She directly supervises its cuisine, some of which is based on her home recipes, as well as its decor, having designed some of the interiors, including walls she hand-painted herself. The restaurant's second outlet was opened in Bangalore, and was positively reviewed by Lifestyle Asia. ## Career ### Early theatre and television work (1990–1996) Shah's acting career began with work in inter-collegiate plays in Gujarati during the early 1990s. Her work included roles in several stage dramas such as Ant Vagarni Antakshari and Doctor Tame Pan?. A 1995 piece by Rasa magazine reported that Shah had proved her ability to become one of the stars of Gujarati theatre. During one of the plays, she came to the attention of a team member of the TV serial Campus (1993), who suggested that she audition for a part in it. She was accepted following a screen test. This was followed by several other serials, including the popular Zee TV shows Tara and Banegi Apni Baat (both 1993–1997), as well as Naya Nukkad (1993–1994) on Doordarshan and Daraar (1994–1995) on Zee TV. The year 1995 marked Shah's first film appearance with a brief role in Ram Gopal Varma's Rangeela. A few days into shooting, she realised the part was different from what she had signed up for, and, feeling cheated, she walked off the set. Shah was reluctant to work in motion pictures after that, and the roles she was offered were mostly small character parts. She continued working in TV series, including Balaji Telefilms' Mano Ya Na Mano (1995–1999) and Doordarshan's Aarohan (1996–1997) and Sea Hawks (1997–1998). An anthology horror series, Mano Ya Na Mano starred Shah opposite Durga Jasraj in an episode titled "Kabzaa", directed by Homi Wadia, which was developed into a full-fledged serial called Kavach in 2016. Aarohan, starring and produced by Pallavi Joshi, tells the story of a woman who joins the Indian Navy. ### Breakthrough with Hasratein and Satya (1997–1999) In 1997, Shah replaced Seema Kapoor in the TV series Hasratein (1996–1999) after over 120 episodes. In her first lead role, Shah starred as Savi, a married woman involved in an extramarital affair with a married man. Based on the Marathi novel Adhantari by Jaywant Dalvi, the show was popular with audiences and attracted attention for its commentary on the institution of marriage.
India Today describes it as "one of the prime productions that changed the face of Indian television". The character of Savi, a mature woman with grown-up children, was significantly older than Shah. Given the age difference, she had to persuade director Ajay Sinha to cast her. Bhavya Sadhwani of IndiaTimes attributed the show's success with viewers mainly to the "impeccable acting skills" demonstrated by Shah in the part. The serial gained wider public recognition for Shah, and she called it a milestone in her career. Her performance earned her the Zee Woman of the Year award in 1997. Another lead role was given to her in Mahesh Bhatt's weekend soap Kabhie Kabhie (1997), which aired on StarPlus. In 1998, she was offered a small part in Ram Gopal Varma's crime thriller Satya, which revolves around the Mumbai underworld. Having been disappointed in her previous collaboration with Varma on Rangeela, she was hesitant to accept it, but found the project special and made sure she was thoroughly informed about it. In a seven-minute role, she played Pyaari Mhatre, the wife of a mafia gangster played by Manoj Bajpayee. Their roles were said to be modelled after Arun Gawli and his wife Asha. Shah said she instinctively recognised her part and knew exactly how to play it. Satya opened to commercial success and major critical acclaim, and Shah's performance in it was favourably reviewed. Anupama Chopra of India Today wrote that Shah and her co-actors "are so good that you can almost smell the Mumbai grime on their sweaty bodies". For her portrayal, Shah won the Screen Award for Best Supporting Actress. At the 44th Filmfare Awards, she was nominated for the Filmfare Award for Best Supporting Actress and was awarded the Critics Award for Best Actress. Despite the positive reaction to her work in Satya, Shah did not receive as many film offers as she expected. Following Hasratein, she starred in its successor on Zee TV's prime-time spot, the soap opera Raahein (1999). The show was met with approval from viewers and critics alike. She played Preeti, a woman caught between her love life and career ambitions. In contrast to Shah's previous roles, the character of Preeti was 22 years old. Shailaja Bajpai of The Indian Express commended Shah's acting talent but thought she was less suited to such a young part, concluding that she was "brilliantly miscast". During this period she was one of the co-hosts on the musical game show Antakshari opposite Annu Kapoor. Among other projects on television, she acted in several episodes of the anthology series Rishtey (1999–2001), including the well-received "Highway". In 1999, she was cast in a Gujarati film, Dariya Chhoru, made by her future husband Vipul Shah. A love story situated on the coast of Saurashtra between a poor man (Jamnadas Majethia) and a wealthy woman (Shah), the film was named Best Film at the Gujarat State Film Awards, where Shah won the Best Actress award. The film, which The Times of India said should cater to educated Gujarati viewers, was a box-office success. According to the book Routledge Handbook of Indian Cinemas, it was among the films that started a trend of larger productions in the Gujarati film industry. In the book Gujarat: A Panorama of the Heritage of Gujarat, the film was praised for its beautiful portrayal, and Shah and her colleagues were hailed as screen artistes who "could create fresh hopes among the film goers in Gujarat".
### Recognition for character roles (2000–2007) Shah's work in the 2000s started with a short appearance in Aditya Chopra's 2000 romance Mohabbatein. A year later, she was cast in Mira Nair's international co-production Monsoon Wedding, a comedy-drama which chronicles the reunion of a large Punjabi family for a wedding. Shah played Rhea Verma, an orphaned young woman, aspiring writer and survivor of child sexual abuse, a character she considered the most complex in the film. The film opened to considerable international acclaim, receiving the Golden Lion at the 58th Venice International Film Festival and nominations for Best Foreign Language Film at the BAFTA and Golden Globe Awards. Elvis Mitchell of The New York Times singled out Shah's part, and Saibal Chatterjee of Hindustan Times wrote that she "taps into the depths of a difficult character with amazing ease". The film was a significant box-office success, earning over \$33 million against its \$1.2 million budget. Shah worked again under her husband Vipul's direction in the family melodrama Waqt: The Race Against Time (2005), playing Amitabh Bachchan's wife and Akshay Kumar's mother. She was considered for the part at Bachchan's suggestion, despite her husband's hesitation. Her casting in the role of a middle-aged mother to Kumar, who in reality is five years her senior, attracted considerable media coverage. She defended her choice of the part, saying she admired the character's traits and found a particular challenge in the significant age difference. Her portrayal of Sumitra Thakur, a strict mother who encourages her husband to take extreme measures to discipline their irresponsible son, earned her a second Filmfare nomination for Best Supporting Actress. Derek Elley of Variety and Ziya Us Salam of The Hindu commended her subtle and composed acting, and Subhash K. Jha of The Times of India argued that "it's Shefali Shah as Amitabh Bachchan's wife whose expressive eyes conveying spousal and matriarchal pain that you come home with". She followed with a role in Aparna Sen's English-language drama 15 Park Avenue (2005). In 2007, Shah was lauded for her work in two films: Feroz Abbas Khan's biographical film Gandhi, My Father and Rituparno Ghosh's English-language film-within-a-film drama The Last Lear. An Indo-British co-production, Gandhi, My Father features Shah in the role of Kasturba Gandhi, who is torn by the lifelong conflict between her husband Mahatma Gandhi and son Hiralal (played by Darshan Jariwala and Akshaye Khanna, respectively). Portraying the character from Kasturba's early adulthood to old age, Shah lost weight to look the part. Khalid Mohamed of Hindustan Times called her performance "magnificent" and Roshmila Bhattacharya of Screen described her as "brilliant, her sparkling glances, eloquent silences and drooping shoulders effectively conveying the hopelessness and helplessness of a parent whose child has gone astray". She was awarded the Best Actress prize at the Tokyo International Film Festival and the Critics Award for Best Actor – Female at the 2008 Zee Cine Awards. The Last Lear revolves around a Shakespearean theatre actor (played by Amitabh Bachchan). Shah played his troubled and irritable caregiver and live-in partner, a role she considered her best yet, alongside Preity Zinta and Divya Dutta. The film premiered at the Toronto International Film Festival, where it was well received.
Rajeev Masand of IBN Live wrote of "the manner she goes from spiteful to soothing" throughout the film, and Sukanya Verma of Rediff.com took note of Shah's commanding presence. The film was named Best Feature Film in English at the 55th National Film Awards, where Shah won the Best Supporting Actress award for what was cited by the jury as an "aggressive portrayal of a Bengali housewife who in time becomes more tolerant of her aging husband's many eccentric guests". ### Intermittent work on stage and screen (2008–2016) Subhash Ghai's crime thriller Black & White (2008) stars Shah as Roma Mathur, a Bengali activist and the wife of an Urdu professor (Anil Kapoor). The film follows the couple's acquaintance with a disguised Islamic fundamentalist, played by Anurag Sinha, who is plotting a suicide attack at the Red Fort. It generated mixed reviews, as did Shah's performance, which Khalid Mohamed found to be "unusually hammy". Two years later, in view of the lack of substantial film work that would realise her acting potential, Shah's husband Vipul cast her in his Hindi stage production Bas Itna Sa Khwab, directed by Chandrakant Kulkarni. Based on Kulkarni's Marathi play Dhyanimani, it marked Shah's return to the stage after a decade and saw her in the role of a middle-class housewife opposite Kiran Karmarkar. The production travelled from Mumbai's Rangsharda to Ludhiana's Sanskritik Samagam and on to Dubai. She spoke of her acting experience on stage, recounting her full involvement with her character: "I have to literally break down every time, then collect the pieces and put them back together again." Authors Sunil Kant Munjal and S.K. Rai, in the book All the World is a Stage, lavished praise on her performance. After appearing as a psychiatrist in the thriller Karthik Calling Karthik in 2010, Shah was cast in the lead part of her husband's production Kucch Luv Jaisaa the following year. She played a young housewife who spends a romantic day with a criminal on the run from prison (Rahul Bose). To prepare for the part, Shah visited the Thane Jail and interacted with prisoners to attain a better understanding of her character's experience. The film opened to a lukewarm critical response, with critics Subhash K. Jha and Mihir Fadnavis observing that Shah struggles with material that was written with little conviction. Her efforts were better received by Mayank Shekhar, who found her "startlingly expressive" and commended her for exuding "the kind of vulnerability and warmth that's rare to match". After three years of absence from the screen, Shah returned as Jyoti, a brothel madam in Nagesh Kukunoor's 2014 social problem film Lakshmi, alongside Monali Thakur. Based on the true story of a teenager who is kidnapped and sold into a brothel in Hyderabad, Lakshmi was released to a positive critical reception for its harshly realistic depiction of human trafficking and child prostitution. Sudhish Kamath appreciated Shah's performance in a complex role. In 2015, Shah starred in Zoya Akhtar's comedy-drama Dil Dhadakne Do alongside Anil Kapoor as her husband, and Priyanka Chopra and Ranveer Singh as her children. The story is about a wealthy, dysfunctional family who embark on a cruise to celebrate the 30th wedding anniversary of the parents; Shah played Neelam Mehra, the passive-aggressive matriarch caught in a marriage of convenience and hiding her eating disorder.
Shah loved the script and the character but was initially apprehensive about accepting another role as a middle-aged woman and playing a mother to Chopra and Singh; she eventually relented on her husband's advice. Dil Dhadakne Do was one of the highest-grossing Hindi films of 2015. A scene where an emotionally collapsing Neelam is seen bingeing on a cake in front of the mirror was particularly noted by critics. Subhash K. Jha found Shah's performance the most effective of the ensemble cast, arguing she "brings to her character an unfussy pitch-perfection rarely seen in mainstream cinema". She received her third Best Supporting Actress Filmfare nomination for the film, and was awarded the Stardust Award for Best Supporting Actress as well as a Screen Award for Best Ensemble Cast along with her co-stars in the film. In Brothers (2015), Karan Malhotra's remake of the American sports drama Warrior (2011), Shah had a minor supporting role. Her character Maria Fernandes is presented in flashbacks as a woman who accepts the child her adulterous husband had out of wedlock. The film generated mixed-to-negative reviews; Vishal Menon of The Hindu thought she had a role which required copious crying, but The Hollywood Reporter found her performance heartbreaking. Shah voiced the character of Raksha in the Hindi version of the Disney live-action feature The Jungle Book (2016). She next played the fictional part of India's Minister of Home Affairs Leena Chowdhury in the action thriller Commando 2: The Black Money Trail. ### Critical acclaim in leading roles (2017–present) In 2017, Shah acted in Juice, a short film about gender inequality in middle-class Indian families. Directed by Neeraj Ghaywan, it stars Shah as Manju Singh, a woman who, after hours spent in the kitchen, acts in defiance of her inconsiderate husband. The film and Shah's performance received favourable reviews. Critics noted her ability to communicate emotions through gestures and expressions; Kriti Tulsiani wrote that Shah's "unfazed gazes convey more than words will ever say". The film won two Filmfare Short Film Awards at the 63rd Filmfare Awards: Best Film (fiction) and Best Actress for Shah. In the years that followed, she credited Juice as the first of several films that helped propel her career forward. In Once Again (2018), an Indo-German Netflix romance film, Shah was cast in the lead as a widowed middle-aged restaurateur who falls in love with an ageing film star played by Neeraj Kabi. Shah said she had long awaited a film of the sort, describing herself as "an incurable romantic". She received compliments for her performance, and her chemistry with Kabi drew positive notice. Deepa Gahlot of Financial Chronicle appreciated the film's subtlety and took note of Shah's expressive eyes revealing her inner state, an opinion shared by other critics. Shah's second collaboration with Netflix took place in the 2019 procedural miniseries Delhi Crime, which was written and directed by Richie Mehta. Based on the aftermath of the 2012 Delhi gang rape, the show stars Shah as Vartika Chaturvedi, a South Delhi Deputy Commissioner of Police (DCP) who is assigned to investigate a brutal gang rape in Delhi. The character was modelled after former Delhi DCP Chhaya Sharma. Shah found the part "emotionally, physically, mentally" consuming and would often interact with Sharma throughout filming to learn more about the character. The series opened to universally positive reviews from critics, and Shah's performance met with widespread acclaim.
Dorothy Rabinowitz of The Wall Street Journal commended Shah for "a movingly understated and complex performance" and Namrata Joshi of The Hindu wrote: "Shah is on top of her game, conflicted yet sure of herself, vulnerable but strong, swayed by emotions yet never giving in to them, bristling equally with anger, concern, disappointments and dejection." Delhi Crime was named Best Drama Series at the 48th International Emmy Awards, and won four Asian Academy Creative Awards, including Best Drama Series and Best Actress for Shah. She hailed the show as a turning point in her life, saying it reassured filmmakers about casting her in primary parts and heralded the busiest period of her career. In 2020, Shah experimented with writing and directing in two self-starring COVID-19-based short films, Someday and Happy Birthday Mummyji. In Someday, which marked her directorial debut, she played a frontline healthcare worker who returns home for a seven-day quarantine due to the pandemic and spends time interacting through a door with her elderly mother, who suffers from Alzheimer's disease. Shah conceived the story based on memories of her mother, who had turned caregiver to her grandmother, and shot the film with a five-member crew at her residence over a period of two days. The film premiered at the 51st USA Film Festival and was later screened at the 18th Indian Film Festival Stuttgart in Germany. In Happy Birthday Mummyji, she played Suchi, a housewife whose preparations for her mother-in-law's birthday party are halted by a sudden national curfew, leaving her home alone and determined to make the most of the rare opportunity to spend time on herself. Shah wrote the script drawing upon her own life experiences and believed Suchi "represents all the women you know". A single-character film, it opened to positive reviews and attracted some notice for a masturbation scene performed by Shah. The 2021 Netflix original anthology film Ajeeb Daastaans, comprising four short stories, featured Shah in the fourth segment "Ankahi", directed by Kayoze Irani. She played Natasha, an unhappily married woman who struggles with her teenaged daughter's hearing loss and falls in love with a hearing-impaired photographer, played by Manav Kaul. She studied sign language in preparation for the part and revealed that she had grown so emotionally invested in the story that it left her heartbroken when filming ended. "Ankahi" was well received by critics, with particular emphasis placed on Shah and Kaul's performances. In 2022, Shah starred in Human, a medical streaming television series. Directed by her husband for Disney+ Hotstar, the show explores the nexus between pharmaceutical companies and large private hospitals that conduct human trials for new drugs on lower-class citizens. She played Dr. Gauri Nath, a powerful and ruthlessly ambitious neurosurgeon with a traumatic childhood who owns Manthan, a multi-specialty hospital she founded. Shah found the negative character of Gauri to be unlike anyone she had ever known. Hindustan Times described Gauri as one of the best characters on Indian digital series yet, calling her an "incredibly disturbed sociopath" and "a vicious snake singularly committed to building her business". Critics reacted positively to Shah's turn, noting her composed demeanour and hushed tone in the part. Later in the year, Shah starred opposite Vidya Balan in the social thriller Jalsa, an Amazon Prime feature film. Her part is that of Rukhsana, a maid whose daughter becomes the victim of a hit-and-run accident.
The film opened to a positive response from critics, and Shah received rave reviews for her understated performance. She was named Best Actress at the annual Indian Film Festival of Melbourne. Anuj Kumar of The Hindu commended her performance and character: "At the cost of repeating oneself, the depth of Shefali's eyes and the emotions that they could hold continues to bewitch and baffle. Her Rukhsana is that vulnerable maid from the margins who makes an attempt to hold on to a life of dignity." In the black comedy Darlings (2022), produced for Netflix, Shah and Alia Bhatt star as a mother and daughter who embark on a revenge plan against the latter's abusive husband. Shubhra Gupta complimented Shah for her "powerful act", and Anna M. M. Vetticad wrote of Shah's effective blend of comedy and drama in the part. Darlings became the highest-viewed non-English Indian original on Netflix. August 2022 saw the release of the second season of Delhi Crime, based on the chapter "Moon Gazer" from retired police officer Neeraj Kumar's book Khaki Files. Addressing new themes such as class prejudice, the show opened to positive reviews, and Shah again received favourable comments for holding the show together with both power and vulnerability. According to Vogue's Taylor Antrim, Shah is "tremendous in the role" as she "seizes your attention" in her reprise of Vartika Chaturvedi, who is "intensely serious; fearsome to her subordinates, who call her 'Madam Sir'; and clearly burdened by her job". Shah next appeared in the medical comedy Doctor G, alongside Ayushmann Khurrana and Rakul Preet Singh, for which she was nominated for the Filmfare Award for Best Supporting Actress. ## Artistry and reception Shah has been described by critics and the media as one of India's finest actresses. Describing herself as an instinctive actor, she has confessed to not approaching acting as a craft but rather to becoming a person and living each character's struggle, which often proves taxing. She explained her technique: "Every role takes away a part of me. It's exhausting, it drains me completely, and then enriches me. It's a cycle, but I don't know any other way". Known for her understated acting style, Shah has been noted for her big, expressive eyes and her ability to emote through minimal facial expressions and gestures, and often through silence. Devansh Sharma described Shah's use of silence as "a leitmotif in all her performances", and Sneha Bengani commented: "Shah has always thrived in silences. Through them, she communicates with easy effortlessness what words almost always fail to." In view of her preference for minimalism, she has gained a reputation for asking directors to cut her lines and scenes. Shah explains that because cinema is a powerful visual medium that captures actors' faces, much spoken text is often not required and can be redundant. Due to her eagerness to be thoroughly versed in details about her scripts and parts, Shah often badgers her directors with questions during filming. Highly selective about her roles and unwilling to compromise her artistic integrity, Shah chooses parts by instinct and maintains that unless completely consumed by a project, she will not commit to it. She gives importance not to the length of a role but to "the mettle, the potency and the relevance" it has in the film. Some of the characters Shah has played throughout her career have been women older than herself.
Her first roles on television when she was in her early twenties, including Savi, the mother of a teenager in Hasratein, led several filmmakers to offer her parts of women far older than her actual age. She admitted that Hasratein had damaged her career in this regard. On one occasion, she had almost played the role of a mother to Amitabh Bachchan, who was twice her age, before she left the project. Although she was initially excited about the acting challenge in playing mature women, she decided to stop accepting such parts, especially after playing the middle-aged part of Bachchan's wife and Akshay Kumar's mother in Waqt (2005), because filmmakers sought to typecast her in similar parts. She explained that her choice of Gandhi, My Father (2007) was different, as she played Kasturba Gandhi from the character's early adulthood into her later years. Dil Dhadakne Do (2015) was another exception, as she was so impressed with the character that she could not refuse it. Shah was one of the leading actresses of Indian television before she left the medium, dissatisfied with its content. Following the positive reaction to her performance in Satya in 1998, she expected more film work to come her way, but the offers she received at the time comprised mostly small character parts. While initially bothered by the limited work available to her in Hindi films, Shah has over the years come to terms with the realisation that satisfactory roles would come to her every once in a while. This resulted in numerous gaps between her film appearances. The rise of digital streaming platforms, however, rejuvenated Shah's career, with parts not otherwise available in films and often written specially for her. Delhi Crime proved to be a major turning point in her career, as it brought an influx of film offers, mostly of leading roles which had seldom come her way before. Consequently, she embarked on the busiest period of her professional life, working on six projects throughout 2020 in the kind of films and roles she had longed for. She credited digital platforms with giving her the opportunity to invest more in her parts: "The web-series format gives me hours to experiment, explore and indulge and understand my character's nuances." The reception to Shah's performances has been positive since her initial television work. Her early screen persona on television was that of a woman who, according to Chatura Poojari, is "homely, chatty but with a sensible head firmly screwed onto her shoulders—a regular Indian woman who deals with life by wearing a velvet glove over an iron hand". Shah believes her middle-class background has helped her shape a personality which makes her characters relatable. A 1999 article by The Indian Express said that she "pulls off each and every character with absolute ease". Subhash K. Jha describes her as "an impossibly skilled actress" and, on another occasion, "an actress who forces you to watch her". Speaking of her eyes, Devansh Sharma wrote in a review of Once Again (2018), "Her loquacious eyes express rage with as much ease as they do love." Reviewing Jalsa (2022), Monika Rawal Kukreja was highly impressed with Shah's use of just her eyes and expressions to emote. Author and journalist Aparna Pednekar wrote, "Shah's au naturale performances come from an instinctive, savage space, with an abundance of layers simmering beneath a placid smile and soft-spoken personality".
20,208
Margaret Murray
1,170,953,165
Anglo-Indian Egyptologist (1863–1963)
[ "1863 births", "1963 deaths", "19th-century British archaeologists", "19th-century British women writers", "20th-century British archaeologists", "20th-century British women writers", "20th-century British writers", "20th-century Indian anthropologists", "20th-century Indian women writers", "20th-century Indian writers", "Academics of University College London", "Alumni of University College London", "British Egyptologists", "British anthropologists", "British centenarians", "British feminists", "British women academics", "British women anthropologists", "British women archaeologists", "British women folklorists", "British women historians", "Historians of witchcraft", "Indian centenarians", "Presidents of the Folklore Society", "Pseudohistorians", "Scientists from Kolkata", "Women centenarians", "Writers from Kolkata" ]
Margaret Alice Murray FSA Scot FRAI (13 July 1863 – 13 November 1963) was a British-Indian Egyptologist, archaeologist, anthropologist, historian, and folklorist who was born in India. The first woman to be appointed as a lecturer in archaeology in the United Kingdom, she worked at University College London (UCL) from 1898 to 1935. She served as president of the Folklore Society from 1953 to 1955, and published widely over the course of her career. Born to a wealthy middle-class English family in Calcutta, British India, Murray divided her youth between India, Britain, and Germany, training as both a nurse and a social worker. Moving to London, in 1894 she began studying Egyptology at UCL, developing a friendship with department head Flinders Petrie, who encouraged her early academic publications and appointed her junior lecturer in 1898. In 1902–03 she took part in Petrie's excavations at Abydos, Egypt, where she discovered the Osireion temple; the following season she investigated the Saqqara cemetery. Both established her reputation in Egyptology. She supplemented her UCL wage by giving public classes and lectures at the British Museum and Manchester Museum; it was at the latter in 1908 that she led the unwrapping of Khnum-nakht, one of the mummies recovered from the Tomb of the Two Brothers – the first time that a woman had publicly unwrapped a mummy. Recognising that British Egyptomania reflected the existence of a widespread public interest in Ancient Egypt, Murray wrote several books on Egyptology targeted at a general audience. Murray also became closely involved in the first-wave feminist movement, joining the Women's Social and Political Union and devoting much time to improving women's status at UCL. Unable to return to Egypt due to the First World War, she focused her research on the witch-cult hypothesis, the theory that the witch trials of Early Modern Christendom were an attempt to extinguish a surviving pre-Christian, pagan religion devoted to a Horned God. Although later academically discredited, the theory gained widespread attention and proved a significant influence on the emerging new religious movement of Wicca. From 1921 to 1931 Murray undertook excavations of prehistoric sites on Malta and Menorca and developed her interest in folkloristics. Awarded an honorary doctorate in 1927, she was appointed assistant professor in 1928 and retired from UCL in 1935. That year she visited Palestine to aid Petrie's excavation of Tall al-Ajjul, and in 1937 she led a small excavation at Petra in Jordan. Taking on the presidency of the Folklore Society in later life, she lectured at such institutions as the University of Cambridge and the City Literary Institute, and continued to publish in an independent capacity until her death. Murray's work in Egyptology and archaeology was widely acclaimed and earned her the nickname of "The Grand Old Woman of Egyptology", although after her death many of her contributions to the field were overshadowed by those of Petrie. Conversely, Murray's work in folkloristics and the history of witchcraft has been academically discredited and her methods in these areas heavily criticised. The influence of her witch-cult theory in both religion and literature has been examined by various scholars, and she herself has been dubbed the "Grandmother of Wicca". ## Early life ### Youth: 1863–93 Margaret Murray was born on 13 July 1863 in Calcutta, Bengal Presidency, then a major military city and the capital of British India.
She lived in the city with her family: parents James and Margaret Murray, an older sister named Mary, and her paternal grandmother and great-grandmother. James Murray, born in India of English descent, was a businessman and manager of the Serampore paper mills who was thrice elected President of the Calcutta Chamber of Commerce. His wife, Margaret (née Carr), had moved to India from Britain in 1857 to work as a missionary, preaching Christianity and educating Indian women. She continued with this work after marrying James and giving birth to her two daughters. Although most of their lives were spent in the European area of Calcutta, which was walled off from the Indian sectors of the city, Murray encountered members of Indian society through her family's employment of ten Indian servants and through childhood holidays to Mussoorie. The historian Amara Thornton has suggested that Murray's Indian childhood continued to exert an influence over her throughout her life, expressing the view that Murray could be seen as having a hybrid transnational identity that was both British and Indian. During her childhood, Murray never received a formal education, and in later life expressed pride in the fact that she had never had to sit an exam before entering university. In 1870, Margaret and her sister Mary were sent to Britain, moving in with their uncle John, a vicar, and his wife Harriet at their home in Lambourn, Berkshire. Although John provided them with a strongly Christian education and a belief in the inferiority of women, both of which Murray would reject, he awakened her interest in archaeology by taking her to see local monuments. In 1873, the girls' mother arrived in Europe and took them with her to Bonn in Germany, where they both became fluent in German. In 1875 they returned to Calcutta, staying there till 1877. They then moved with their parents back to England, where they settled in Sydenham, South London. There, they spent much time visiting The Crystal Palace, while their father worked at his firm's London office. In 1880, they returned to Calcutta, where Margaret remained for the next seven years. She became a nurse at the Calcutta General Hospital, which was run by the Sisters of the Anglican Sisterhood of Clewer, and there was involved with the hospital's attempts to deal with a cholera outbreak. In 1887, she returned to England, moving to Rugby, Warwickshire, where her uncle John, now widowed, had moved. Here she took up employment as a social worker dealing with local underprivileged people. When her father retired and moved to England, she moved into his house in Bushey Heath, Hertfordshire, living with him until his death in 1891. In 1893 she then travelled to Madras, Tamil Nadu, where her sister had moved with her new husband. ### Early years at University College London: 1894–1905 Encouraged by her mother and sister, Murray decided to enrol at the newly opened department of Egyptology at University College London (UCL) in Bloomsbury, Central London. Having been founded by an endowment from Amelia Edwards, one of the co-founders of the Egypt Exploration Fund (EEF), the department was run by the pioneering early archaeologist Sir William Flinders Petrie, and based in the Edwards Library of UCL's South Cloisters. Murray began her studies at UCL at age 30 in January 1894, as part of a class composed largely of other women and older men.
There, she took courses in the Ancient Egyptian and Coptic languages, which were taught by Francis Llewellyn Griffith and Walter Ewing Crum respectively. Murray soon got to know Petrie, becoming his copyist and illustrator and producing the drawings for the published report on his excavations at Qift (Koptos). In turn, he aided and encouraged her to write her first research paper, "The Descent of Property in the Early Periods of Egyptian History", which was published in the Proceedings of the Society for Biblical Archaeology in 1895. Becoming Petrie's de facto though unofficial assistant, Murray began to give some of the linguistic lessons in Griffith's absence. In 1898 she was appointed to the position of junior lecturer, responsible for teaching the linguistic courses at the Egyptology department; this made her the first female lecturer in archaeology in the United Kingdom. In this capacity, she spent two days a week at UCL, devoting the other days to caring for her ailing mother. As time went on, she came to teach courses on Ancient Egyptian history, religion, and language. Among Murray's students – to whom she referred as "the Gang" – were several who went on to produce noted contributions to Egyptology, including Reginald Engelbach, Georgina Aitken, Guy Brunton, and Myrtle Broome. She supplemented her UCL salary by teaching evening classes in Egyptology at the British Museum. At this point, Murray had no experience in field archaeology, and so during the 1902–03 field season, she travelled to Egypt to join Petrie's excavations at Abydos. Petrie and his wife, Hilda Petrie, had been excavating at the site since 1899, having taken over the archaeological investigation from the French Coptic scholar Émile Amélineau. Murray at first joined as site nurse, but was subsequently taught how to excavate by Petrie and given a senior position. This led to friction with some of the male excavators, who disliked the idea of taking orders from a woman. This experience, coupled with discussions with other female excavators (some of whom were active in the feminist movement), led Murray to adopt openly feminist viewpoints. While excavating at Abydos, Murray uncovered the Osireion, a temple devoted to the god Osiris which had been constructed by order of Pharaoh Seti I during the period of the New Kingdom. She published her site report as The Osireion at Abydos in 1904; in the report, she examined the inscriptions that had been discovered at the site to discern the purpose and use of the building. During the 1903–04 field season, Murray returned to Egypt, and at Petrie's instruction began her investigations at the Saqqara cemetery near Cairo, which dated from the period of the Old Kingdom. Murray did not have legal permission to excavate the site, and instead spent her time transcribing the inscriptions from ten of the tombs that had been excavated during the 1860s by Auguste Mariette. She published her findings in 1905 as Saqqara Mastabas I, although she would not publish translations of the inscriptions until 1937 as Saqqara Mastabas II. Both The Osireion at Abydos and Saqqara Mastabas I proved to be very influential in the Egyptological community, with Petrie recognising Murray's contribution to his own career.
Joining the Women's Social and Political Union, she was present at large marches like the Mud March of 1907 and the Women's Coronation Procession of June 1911. She concealed the militancy of her actions in order to retain the image of respectability within academia. Murray also pushed the professional boundaries for women throughout her own career, and mentored other women in archaeology and throughout academia. As women could not use the men's common room, she successfully campaigned for UCL to open a common room for women, and later ensured that a larger, better-equipped room was converted for the purpose; it was later renamed the Margaret Murray Room. At UCL, she became a friend of fellow female lecturer Winifred Smith, and together they campaigned to improve the status and recognition of women in the university, with Murray becoming particularly annoyed at female staff who were afraid of upsetting or offending the male university establishment with their demands. Feeling that students should get nutritious yet affordable lunches, for many years she sat on the UCL Refectory Committee. She took on an unofficial administrative role within the Egyptology Department, and was largely responsible for introduction of a formal certificate in Egyptian archaeology in 1910. Various museums around the United Kingdom invited Murray to advise them on their Egyptological collections, resulting in her cataloguing the Egyptian artefacts owned by the Dublin National Museum, the National Museum of Antiquities in Edinburgh, and the Society of Antiquaries of Scotland, being elected a Fellow of the latter in thanks. Petrie had established connections with the Egyptological wing of Manchester Museum in Manchester, and it was there that many of his finds had been housed. Murray thus often travelled to the museum to catalogue these artefacts, and during the 1906–07 school year regularly lectured there. In 1907, Petrie excavated the Tomb of the Two Brothers, a Middle Kingdom burial of two Egyptian priests, Nakht-ankh and Khnum-nakht, and it was decided that Murray would carry out the public unwrapping of the latter's mummified body. Taking place at the museum in May 1908, it represented the first time that a woman had led a public mummy unwrapping and was attended by over 500 onlookers, attracting press attention. Murray was particularly keen to emphasise the importance that the unwrapping would have for the scholarly understanding of the Middle Kingdom and its burial practices, and lashed out against members of the public who saw it as immoral; she declared that "every vestige of ancient remains must be carefully studied and recorded without sentimentality and without fear of the outcry of the ignorant". She subsequently published a book about her analysis of the two bodies, The Tomb of the Two Brothers, which remained a key publication on Middle Kingdom mummification practices into the 21st century. Murray was dedicated to public education, hoping to infuse Egyptomania with solid scholarship about Ancient Egypt, and to this end authored a series of books aimed at a general audience. In 1905 she published Elementary Egyptian Grammar which was followed in 1911 by Elementary Coptic (Sahidic) Grammar. In 1913, she published Ancient Egyptian Legends for John Murray's "The Wisdom of the East" series. She was particularly pleased with the increased public interest in Egyptology that followed Howard Carter's discovery of the tomb of Pharaoh Tutankhamun in 1922. 
From at least 1911 until his death in 1940, Murray was a close friend of the anthropologist Charles Gabriel Seligman of the London School of Economics, and together they co-authored a variety of papers on Egyptology that were aimed at an anthropological audience. Many of these dealt with subjects that Egyptological journals would not publish, such as the "Sa" sign for the uterus, and thus were published in Man, the journal of the Royal Anthropological Institute. It was at Seligman's recommendation that she was invited to become a member of the Institute in 1916. In 1914, Petrie launched the academic journal Ancient Egypt, published through his own British School of Archaeology in Egypt (BSAE), which was based at UCL. Given that he was often away from London excavating in Egypt, Murray was left to operate as de facto editor much of the time. She also published many research articles in the journal and authored many of its book reviews, particularly of the German-language publications which Petrie could not read. The outbreak of the First World War in 1914, in which the United Kingdom went to war against Germany and the Ottoman Empire, meant that Petrie and other staff members were unable to return to Egypt for excavation. Instead, Petrie and Murray spent much of the time reorganising the artefact collections that they had amassed over the past decades. To aid Britain's war effort, Murray enrolled as a volunteer nurse in the Voluntary Aid Detachment of the College Women's Union Society, and for several weeks was posted to Saint-Malo in France. After being taken ill herself, she was sent to recuperate in Glastonbury, Somerset, where she became interested in Glastonbury Abbey and the folklore surrounding it, which connected it to the legendary figure of King Arthur and to the idea that the Holy Grail had been brought there by Joseph of Arimathea. Pursuing this interest, she published the paper "Egyptian Elements in the Grail Romance" in the journal Ancient Egypt, although few agreed with her conclusions, and it was criticised by the likes of Jessie Weston for making unsubstantiated leaps with the evidence. ## Later life ### Witch-cult, Malta, and Menorca: 1921–35 Murray's interest in folklore led her to develop an interest in the witch trials of Early Modern Europe. In 1917, she published a paper in Folklore, the journal of the Folklore Society, in which she first articulated her version of the witch-cult theory, arguing that the witches persecuted in European history were actually followers of "a definite religion with beliefs, ritual, and organization as highly developed as that of any cult in the end". She followed this up with papers on the subject in the journals Man and the Scottish Historical Review. She articulated these views more fully in her 1921 book The Witch-Cult in Western Europe, published by Oxford University Press after receiving a positive peer review from Henry Balfour; it received both criticism and support on publication. Many reviews in academic journals were critical, with historians claiming that she had distorted and misinterpreted the contemporary records that she was using, but the book was nevertheless influential. As a result of her work in this area, she was invited to provide the entry on "witchcraft" for the fourteenth edition of the Encyclopædia Britannica in 1929. She used the opportunity to propagate her own witch-cult theory, failing to mention the alternate theories proposed by other academics.
Her entry would be included in the encyclopedia until 1969, becoming readily accessible to the public, and it was for this reason that her ideas on the subject had such a significant impact. It received a particularly enthusiastic reception by occultists such as Dion Fortune, Lewis Spence, Ralph Shirley, and J. W. Brodie Innes, perhaps because its claims regarding an ancient secret society chimed with similar claims common among various occult groups. Murray joined the Folklore Society in February 1927, and was elected to the society's council a month later, although she stood down in 1929. Murray reiterated her witch-cult theory in her 1933 book, The God of the Witches, which was aimed at a wider, non-academic audience. In this book, she cut out or toned down what she saw as the more unpleasant aspects of the witch-cult, such as animal and child sacrifice, and began describing the religion in more positive terms as "the Old Religion". At UCL, Murray was promoted to lecturer in 1921 and to senior lecturer in 1922. From 1921 to 1927, she led archaeological excavations on Malta, assisted by Edith Guest and Gertrude Caton Thompson. She excavated the Bronze Age megalithic monuments of Santa Sofia, Santa Maria tal-Bakkari, Għar Dalam, and Borġ in-Nadur, all of which were threatened by the construction of a new aerodrome. In this she was funded by the Percy Sladen Memorial Fund. Her resulting three-volume excavation report came to be seen as an important publication within the field of Maltese archaeology. During the excavations, she had taken an interest in the island's folklore, resulting in the 1932 publication of her book Maltese Folktales, much of which was a translation of earlier stories collected by Manuel Magri and her friend Liza Galea. In 1932 Murray returned to Malta to aid in the cataloguing of the Bronze Age pottery collection held in Malta Museum, resulting in another publication, Corpus of the Bronze Age Pottery of Malta. On the basis of her work in Malta, Louis Clarke, the curator of the Cambridge Museum of Ethnology and Anthropology, invited her to lead excavations on the island of Menorca from 1930 to 1931. With the aid of Guest, she excavated the talaiotic sites of Trepucó and Sa Torreta de Tramuntana, resulting in the publication of Cambridge Excavations in Minorca. Murray also continued to publish works on Egyptology for a general audience, such as Egyptian Sculpture (1930) and Egyptian Temples (1931), which received largely positive reviews. In the summer of 1925 she led a team of volunteers to excavate Homestead Moat in Whomerle Wood near to Stevenage, Hertfordshire; she did not publish an excavation report and did not mention the event in her autobiography, with her motives for carrying out the excavation remaining unclear. In 1924, UCL promoted Murray to the position of assistant professor, and in 1927 she was awarded an honorary doctorate for her career in Egyptology. That year, Murray was tasked with guiding Mary of Teck, the Queen consort, around the Egyptology department during the latter's visit to UCL. The pressures of teaching had eased by this point, allowing Murray to spend more time travelling internationally; in 1920 she returned to Egypt and in 1929 visited South Africa, where she attended the meeting of the British Association for the Advancement of Science, whose theme was the prehistory of southern Africa. 
In the early 1930s she travelled to the Soviet Union, where she visited museums in Leningrad, Moscow, Kharkiv, and Kyiv, and then in late 1935 she undertook a lecture tour of Norway, Sweden, Finland, and Estonia. Although she had reached legal retirement age in 1927, and thus could not be offered another five-year contract, Murray was reappointed on an annual basis each year until 1935. At this point, she retired, expressing the opinion that she was glad to leave UCL, for reasons that she did not make clear. In 1933, Petrie had retired from UCL and moved to Jerusalem in Mandatory Palestine with his wife; Murray therefore took over as editor of the Ancient Egypt journal, renaming it Ancient Egypt and the East to reflect its increasing research interest in the ancient societies that surrounded and interacted with Egypt. The journal folded in 1935, perhaps due to Murray's retirement. Murray then spent some time in Jerusalem, where she aided the Petries in their excavation at Tall al-Ajjul, a Bronze Age mound south of Gaza. ### Petra, Cambridge, and London: 1935–53 During Murray's 1935 trip to Palestine, she had taken the opportunity to visit Petra in neighbouring Jordan. Intrigued by the site, in March and April 1937 she returned in order to carry out a small excavation in several cave dwellings at the site, subsequently writing both an excavation report and a guidebook on Petra. Back in England, from 1934 to 1940, Murray aided the cataloguing of Egyptian antiquities at Girton College, Cambridge, and also gave lectures in Egyptology at the university until 1942. Her interest in folklore more broadly continued, and she wrote the introduction to Lincolnshire Folklore by Ethel Rudkin, in which she discussed how women were superior to men as folklorists. During the Second World War, Murray evaded the Blitz of London by moving to Cambridge, where she volunteered for a group (probably the Army Bureau of Current Affairs or The British Way and Purpose) that educated military personnel to prepare them for post-war life. Based in the city, she embarked on research into the town's Early Modern history, examining documents stored in local parish churches, Downing College, and Ely Cathedral; she never published her findings. In 1945, she briefly became involved in the "Who put Bella in the Wych Elm?" murder case. After the war ended she returned to London, settling into a bedsit room in Endsleigh Street, which was close to University College London (UCL) and the Institute of Archaeology (then an independent institution, now part of UCL); she continued her involvement with the former and made use of the latter's library. On most days she visited the British Museum in order to consult their library, and twice a week she taught adult education classes on Ancient Egyptian history and religion at the City Literary Institute; upon her retirement from this position she nominated her former pupil, Veronica Seton-Williams, to replace her. Murray's interest in popularising Egyptology among the wider public continued; in 1949 she published Ancient Egyptian Religious Poetry, her second work for John Murray's "The Wisdom of the East" series. That year she also published The Splendour That Was Egypt, in which she collated many of her UCL lectures. The book adopted a diffusionist perspective that argued that Egypt influenced Greco-Roman society and thus modern Western society.
This was seen as a compromise between Petrie's belief that other societies influenced the emergence of Egyptian civilisation and Grafton Elliot Smith's highly unorthodox and heavily criticised hyperdiffusionist view that Egypt was the source of all global civilisation. The book received a mixed reception from the archaeological community. ### Final years: 1953–63 In 1953, Murray was appointed to the presidency of the Folklore Society following the resignation of former president Allan Gomme. The Society had initially approached John Mavrogordato for the post, but he had declined, with Murray accepting the nomination several months later. Murray remained president for two terms, until 1955. In her 1954 presidential address, "England as a Field for Folklore Research", she lamented what she saw as the English people's disinterest in their own folklore in favour of that from other nations. For the autumn 1961 issue of Folklore, the society published a festschrift to Murray to commemorate her 98th birthday. The issue contained contributions from various scholars paying tribute to her – with papers dealing with archaeology, fairies, Near Eastern religious symbols, Greek folk songs – but notably not about witchcraft, potentially because no other folklorists were willing to defend her witch-cult theory. In May 1957, Murray had championed the archaeologist T. C. Lethbridge's controversial claims that he had discovered three pre-Christian chalk hill figures on Wandlebury Hill in the Gog Magog Hills, Cambridgeshire. Privately she expressed concern about the reality of the figures. Lethbridge subsequently authored a book championing her witch-cult theory in which he sought the cult's origins in pre-Christian culture. In 1960, she donated her collection of papers – including correspondences with a wide range of individuals across the country – to the Folklore Society Archive, where it is now known as "the Murray Collection". Crippled with arthritis, Murray had moved into a home in North Finchley, north London, where she was cared for by a retired couple who were trained nurses; from here she occasionally took taxis into central London to visit the UCL library. Amid failing health, in 1962 Murray moved into the Queen Victoria Memorial Hospital in Welwyn, Hertfordshire, where she could receive 24-hour care; she lived here for the final 18 months of her life. To mark her hundredth birthday, on 13 July 1963 a group of her friends, former students, and doctors gathered for a party at nearby Ayot St. Lawrence. Two days later, her doctor drove her to UCL for a second birthday party, again attended by many of her friends, colleagues, and former students; it was the last time that she visited the university. In Man, the journal of the Royal Anthropological Institute, it was noted that Murray was "the only Fellow of the Institute to [reach their centenary] within living memory, if not in its whole history". That year she published two books; one was The Genesis of Religion, in which she argued that humanity's first deities had been goddesses rather than male gods. The second was her autobiography, My First Hundred Years, which received predominantly positive reviews. She died on 13 November 1963, and her body was cremated. ## Murray's witch-cult hypotheses The later folklorists Caroline Oates and Juliette Wood have suggested that Murray was best known for her witch-cult theory, with biographer Margaret S. 
Drower expressing the view that it was her work on this subject which "perhaps more than any other, made her known to the general public". It has been claimed that Murray's was the "first feminist study of the witch trials", as well as being the first to have actually "empowered the witches" by giving the (largely female) accused both free will and a voice distinct from that of their interrogators. The theory was faulty, in part because all of her academic training was in Egyptology, with no background knowledge in European history, but also because she exhibited a "tendency to generalize wildly on the basis of very slender evidence". Oates and Wood, however, noted that Murray's interpretations of the evidence fit within wider perspectives on the past that existed at the time, stating that "Murray was far from isolated in her method of reading ancient ritual origins into later myths". In particular, her approach was influenced by the work of the anthropologist James Frazer, who had argued for the existence of a pervasive dying-and-resurrecting god myth, and she was also influenced by the interpretative approaches of E. O. James, Karl Pearson, Herbert Fleure, and Harold Peake. ### Argument In The Witch-Cult in Western Europe, Murray stated that she had restricted her research to Great Britain, although she made some recourse to sources from France, Flanders, and New England. She drew a division between what she termed "Operative Witchcraft", which referred to the performance of charms and spells with any purpose, and "Ritual Witchcraft", by which she meant "the ancient religion of Western Europe", a fertility-based faith that she also termed "the Dianic cult". She claimed that the cult had "very probably" once been devoted to the worship of both a male deity and a "Mother Goddess" but that "at the time when the cult is recorded the worship of the male deity appears to have superseded that of the female". In her argument, Murray claimed that the figure referred to as the Devil in the trial accounts was the witches' god, "manifest and incarnate", to whom the witches offered their prayers. She claimed that at the witches' meetings, the god would be personified, usually by a man or at times by a woman or an animal; when a human personified this entity, Murray claimed that they were usually dressed plainly, though they appeared in full costume for the witches' Sabbaths. Members joined the cult either as children or adults through what Murray called "admission ceremonies"; Murray asserted that applicants had to agree to join of their own free will, and agree to devote themselves to the service of their deity. She also claimed that in some cases, these individuals had to sign a covenant or were baptised into the faith. At the same time, she claimed that the religion was largely passed down hereditary lines. Murray described the religion as being divided into covens containing thirteen members, led by a coven officer who was often termed the "Devil" in the trial accounts, but who was accountable to a "Grand Master". According to Murray, the records of the coven were kept in a secret book, with the coven also disciplining its members, to the extent of executing those deemed traitors. Describing this witch-cult as "a joyous religion", she claimed that the two primary festivals that it celebrated were on May Eve and November Eve, although other dates of religious observation were 1 February and 1 August, the winter and summer solstices, and Easter.
She asserted that the gatherings that constituted a "General Meeting of all members of the religion" were known as Sabbaths, while the more private ritual meetings were known as Esbats. The Esbats, Murray claimed, were nocturnal rites that began at midnight, and were "primarily for business, whereas the Sabbath was purely religious". At the former, magical rites were performed for both malevolent and benevolent ends. She also asserted that the Sabbath ceremonies involved the witches paying homage to the deity, renewing their "vows of fidelity and obedience" to him, and providing him with accounts of all the magical actions that they had conducted since the previous Sabbath. Once this business had been concluded, admissions to the cult or marriages were conducted, ceremonies and fertility rites took place, and then the Sabbath ended with feasting and dancing. Deeming Ritual Witchcraft to be "a fertility cult", she asserted that many of its rites were designed to ensure fertility and rain-making. She claimed that there were four types of sacrifice performed by the witches: blood-sacrifice, in which the neophyte writes their name in blood; the sacrifice of animals; the sacrifice of a non-Christian child to procure magical powers; and the sacrifice of the witches' god by fire to ensure fertility. She interpreted accounts of witches shapeshifting into various animals as being representative of a rite in which the witches dressed as specific animals which they took to be sacred. She asserted that accounts of familiars were based on the witches' use of animals, which she divided into "divining familiars" used in divination and "domestic familiars" used in other magic rites. Murray asserted that a pre-Christian fertility-based religion had survived the Christianization process in Britain, although it came to be "practised only in certain places and among certain classes of the community". She believed that folkloric stories of fairies in Britain were based on a surviving race of dwarfs, who continued to live on the island up until the Early Modern period. She asserted that this race followed the same pagan religion as the witches, thus explaining the folkloric connection between the two. In the appendices to the book, she also alleged that Joan of Arc and Gilles de Rais were members of the witch-cult and were executed for it, a claim which has been refuted by historians, especially in the case of Joan of Arc. The later historian Ronald Hutton commented that The Witch-Cult in Western Europe "rested upon a small amount of archival research, with extensive use of printed trial records in 19th-century editions, plus early modern pamphlets and works of demonology". He also noted that the book's tone was generally "dry and clinical, and every assertion was meticulously footnoted to a source, with lavish quotation". It was not a bestseller; in its first thirty years, only 2,020 copies were sold. However, it led many people to treat Murray as an authority on the subject; in 1929, she was invited to provide the entry on "Witchcraft" for the Encyclopædia Britannica, and used it to present her interpretation of the subject as if it were universally accepted in scholarship. It remained in the encyclopedia until being replaced in 1969. Murray followed The Witch-Cult in Western Europe with The God of the Witches, published by the popular press Sampson Low in 1931; although similar in content, unlike her previous volume it was aimed at a mass market audience. 
The tone of the book also differed strongly from its predecessor, containing "emotionally inflated [language] and coloured with religious phraseology" and repeatedly referring to the witch-cult as "the Old Religion". In this book she also "cut out or toned down" many of the claims made in her previous volume which would have painted the cult in a bad light, such as those which discussed sex and the sacrifice of animals and children. In this book she began to refer to the witches' deity as the Horned God, and asserted that it was an entity who had been worshipped in Europe since the Palaeolithic. She further asserted that in the Bronze Age, the worship of the deity could be found throughout Europe, Asia, and parts of Africa, claiming that the depiction of various horned figures from these societies proved that. Among the evidence cited were the horned figures found at Mohenjo-Daro, which are often interpreted as depictions of Pashupati, as well as the deities Osiris and Amon in Egypt and the Minotaur of Minoan Crete. Within continental Europe, she claimed that the Horned God was represented by Pan in Greece, Cernunnos in Gaul, and in various Scandinavian rock carvings. Claiming that this divinity had been declared the Devil by the Christian authorities, she nevertheless asserted that his worship was testified in officially Christian societies right through to the Modern period, citing folkloric practices such as the Dorset Ooser and the Puck Fair as evidence of his veneration. In 1954, she published The Divine King in England, in which she greatly extended on the theory, taking influence from Frazer's The Golden Bough, an anthropological book that made the claim that societies all over the world sacrificed their kings to the deities of nature. In her book, she claimed that this practice had continued into medieval England, and that, for instance, the death of William II was really a ritual sacrifice. No academic took the book seriously, and it was ignored by many of her supporters. ### Academic reception #### Early support Upon initial publication, Murray's thesis gained a favourable reception from many readers, including some significant scholars, albeit none who were experts in the witch trials. Historians of Early Modern Britain like George Norman Clark and Christopher Hill incorporated her theories into their work, although the latter subsequently distanced himself from the theory. For the 1961 reprint of The Witch-Cult in Western Europe, the Medieval historian Steven Runciman provided a foreword in which he accepted that some of Murray's "minor details may be open to criticism", but in which he was otherwise supportive of her thesis. Her theories were recapitulated by Arno Runeberg in his 1947 book Witches, Demons and Fertility Magic as well as Pennethorne Hughes in his 1952 book Witches. As a result, the Canadian historian Elliot Rose, writing in 1962, claimed that the Murrayite interpretations of the witch trials "seem to hold, at the time of writing, an almost undisputed sway at the higher intellectual levels", being widely accepted among "educated people". Rose suggested that the reason that Murray's theory gained such support was partly because of her "imposing credentials" as a member of staff at UCL, a position that lent her theory greater legitimacy in the eyes of many readers. He further suggested that the Murrayite view was attractive to many as it confirmed "the general picture of pre-Christian Europe a reader of Frazer or [Robert] Graves would be familiar with". 
Similarly, Hutton suggested that the cause of the Murrayite theory's popularity was that it "appealed to so many of the emotional impulses of the age", including "the notion of the English countryside as a timeless place full of ancient secrets", the literary popularity of Pan, the widespread belief that the majority of Britons had remained pagan long after the process of Christianisation, and the idea that folk customs represented pagan survivals. At the same time, Hutton suggested, it seemed more plausible to many than the previously dominant rationalist idea that the witch trials were the result of mass delusion. Related to this, the folklorist Jacqueline Simpson suggested that part of the Murrayite theory's appeal was that it appeared to give a "sensible, demystifying, liberating approach to a longstanding but sterile argument" between the rationalists who denied that there had been any witches and those, like Montague Summers, who insisted that there had been a real Satanic conspiracy against Christendom in the Early Modern period replete with witches with supernatural powers. "How refreshing", noted the historian Hilda Ellis Davidson, "and exciting her first book was at that period. A new approach, and such a surprising one." #### Early criticism Murray's theories never received support from experts in the Early Modern witch trials, and from her early publications onward many of her ideas were challenged by those who highlighted her "factual errors and methodological failings". Indeed, the majority of scholarly reviews of her work produced during the 1920s and 1930s were largely critical. George L. Burr reviewed both of her initial books on the witch-cult for the American Historical Review. He stated that she was not acquainted with the "careful general histories by modern scholars" and criticised her for assuming that the trial accounts accurately reflected the accused witches' genuine experiences of witchcraft, regardless of whether those confessions had been obtained through torture and coercion. He also charged her with selectively using the evidence to serve her interpretation, for instance by omitting any supernatural or miraculous events that appear in the trial accounts. W. R. Halliday was highly critical in his review for Folklore, as was E. M. Loeb in his review for American Anthropologist. Soon after, one of the foremost specialists of the trial records, L'Estrange Ewen, brought out a series of books which rejected Murray's interpretation. Rose suggested that Murray's books on the witch-cult "contain an incredible number of minor errors of fact or of calculation and several inconsistencies of reasoning". He accepted that her case "could, perhaps, still be proved by somebody else, though I very much doubt it". Highlighting that there is a gap of about a thousand years between the Christianisation of Britain and the start of the witch trials there, he argued that there is no evidence for the existence of the witch-cult anywhere in the intervening period. He further criticised Murray for treating pre-Christian Britain as a socially and culturally monolithic entity, whereas in reality, it contained a diverse array of societies and religious beliefs. He also challenged Murray's claim that the majority of Britons in the Middle Ages remained pagan as "a view grounded on ignorance alone". 
Murray did not respond directly to the criticisms of her work, but reacted to her critics in a hostile manner; in later life she asserted that she eventually ceased reading reviews of her work, and believed that her critics were simply acting out of their own Christian prejudices against non-Christian religion. Simpson noted that despite these critical reviews, within the field of British folkloristics, Murray's theories were permitted "to pass unapproved but unchallenged, either out of politeness or because nobody was really interested enough to research the topic". As evidence, she noted that no substantial research articles on the subject of witchcraft were published in Folklore between Murray's in 1917 and Rossell Hope Robbins's in 1963. She also highlighted that when regional studies of British folklore were published in this period by folklorists like Theo Brown, Ruth Tongue, or Enid Porter, none adopted the Murrayite framework for interpreting witchcraft beliefs, thus evidencing her claim that Murray's theories were widely ignored by scholars of folkloristics. #### Academic rejection Murray's work was increasingly criticised following her death in 1963, with the definitive academic rejection of the Murrayite witch-cult theory occurring during the 1970s. During these decades, a variety of scholars across Europe and North America – such as Alan Macfarlane, Erik Midelfort, William Monter, Robert Muchembled, Gerhard Schormann, Bente Alver and Bengt Ankarloo – published in-depth studies of the archival records from the witch trials, leaving no doubt that those tried for witchcraft were not practitioners of a surviving pre-Christian religion. In 1971, the English historian Keith Thomas stated that on the basis of this research, there was "very little evidence to suggest that the accused witches were either devil-worshippers or members of a pagan fertility cult". He stated that Murray's conclusions were "almost totally groundless" because she ignored the systematic study of the trial accounts provided by Ewen and instead used sources very selectively to argue her point. In 1975, the historian Norman Cohn commented that Murray's "knowledge of European history, even of English history, was superficial and her grasp of historical method was non-existent", adding that her ideas were "firmly set in an exaggerated and distorted version of the Frazerian mould". That same year, the historian of religion Mircea Eliade described Murray's work as "hopelessly inadequate", containing "numberless and appalling errors". In 1996, the feminist historian Diane Purkiss stated that although Murray's thesis was "intrinsically improbable" and commanded "little or no allegiance within the modern academy", she felt that male scholars like Thomas, Cohn, and Macfarlane had unfairly adopted an androcentric approach by which they contrasted their own, male and methodologically sound interpretation against Murray's "feminised belief" about the witch-cult. Hutton stated that Murray had treated her source material with "reckless abandon", in that she had taken "vivid details of alleged witch practices" from "sources scattered across a great extent of space and time" and then declared them to be normative of the cult as a whole. Simpson outlined how Murray had been highly selective in her use of evidence, particularly by ignoring and/or rationalising any accounts of supernatural or miraculous events in the trial records, thereby distorting the events that she was describing. 
Thus, Simpson pointed out, Murray rationalised claims that the cloven-hoofed Devil appeared at the witches' Sabbath by stating that he was a man with a special kind of shoe, and similarly asserted that witches' claims to have flown through the air on broomsticks were actually based on their practice of either hopping along on broomsticks or smearing hallucinogenic salves onto themselves. Concurring with this assessment, the historian Jeffrey Burton Russell, writing with the independent author Brooks Alexander, stated that "Murray's use of sources, in general, is appalling". The pair went on to claim that "today, scholars are agreed that Murray was more than just wrong – she was completely and embarrassingly wrong on nearly all of her basic premises". The Italian historian Carlo Ginzburg has been cited as being willing to give "some slight support" to Murray's theory. Ginzburg stated that although her thesis had been "formulated in a wholly uncritical way" and contained "serious defects", it did contain "a kernel of truth". He stated his opinion that she was right in claiming that European witchcraft had "roots in an ancient fertility cult", something that he argued was vindicated by his work researching the benandanti, an agrarian visionary tradition recorded in the Friuli district of Northeastern Italy during the 16th and 17th centuries. Several historians and folklorists have pointed out that Ginzburg's arguments are very different to Murray's: whereas Murray argued for the existence of a pre-Christian witches' cult whose members physically met during the witches' Sabbaths, Ginzburg argued that some of the European visionary traditions that were conflated with witchcraft in the Early Modern period had their origins in pre-Christian fertility religions. Moreover, other historians have expressed criticism of Ginzburg's interpretation of the benandanti; Cohn stated that there was "nothing whatsoever" in the source material to justify the idea that the benandanti were the "survival of an age-old fertility cult". Echoing these views, Hutton commented that Ginzburg's claim that the benandanti's visionary traditions were a survival from pre-Christian practices was an idea resting on "imperfect material and conceptual foundations". He added that Ginzburg's "assumption" that "what was being dreamed about in the sixteenth century had in fact been acted out in religious ceremonies" dating to "pagan times", was entirely "an inference of his own" and not one supported by the documentary evidence. ## Personal life On researching the history of UCL's Egyptology department, the historian Rosalind M. Janssen stated that Murray was "remembered with gratitude and immense affection by all her former students. A wise and witty teacher, two generations of Egyptologists have forever been in her debt." Alongside teaching them, Murray was known to socialise with her UCL students outside of class hours. The archaeologist Ralph Merrifield, who knew Murray through the Folklore Society, described her as a "diminutive and kindly scholar, who radiated intelligence and strength of character into extreme old age". Davidson, who also knew Murray through the Society, noted that at their meetings "she would sit near the front, a bent and seemingly guileless old lady dozing peacefully, and then in the middle of a discussion would suddenly intervene with a relevant and penetrating comment which showed that she had missed not one word of the argument". 
The later folklorist Juliette Wood noted that many members of the Folklore Society "remember her fondly", adding that Murray had been "especially keen to encourage younger researchers, even those who disagreed with her ideas". One of Murray's friends in the Society, E. O. James, described her as a "mine of information and a perpetual inspiration ever ready to impart her vast and varied stores of specialised knowledge without reserve, or, be it said, much if any regard for the generally accepted opinions and conclusions of the experts!" Davidson described her as being "not at all assertive [...] [she] never thrust her ideas on anyone. [In relation to her witch-cult theory,] she behaved in fact rather like someone who was a fully convinced member of some unusual religious sect, or perhaps, of the Freemasons, but never on any account got into arguments about it in public." The archaeologist Glyn Daniel observed that Murray remained mentally alert into her old age, commenting that "her vigour and forthrightness and ruthless energy never deserted her". Murray never married, instead devoting her life to her work, and for this reason, Hutton drew comparisons between her and two other prominent female British scholars of the period, Jane Harrison and Jessie Weston. Murray's biographer Kathleen L. Sheppard stated that she was deeply committed to public outreach, particularly when it came to Egyptology, and that as such she "wanted to change the means by which the public obtained knowledge about Egypt's history: she wished to throw open the doors to the scientific laboratory and invite the public in". She considered travel to be one of her favourite activities, although due to restraints on her time and finances she was unable to do this regularly; her salary remained small and the revenue from her books was meagre. Raised a devout Christian by her mother, Murray had initially become a Sunday School teacher to preach the faith, but after entering the academic profession she rejected religion, gaining a reputation among other members of the Folklore Society as a noted sceptic and a rationalist. She was openly critical of organised religion, although continued to maintain a personal belief in a God of some sort, relating in her autobiography that she believed in "an unseen over-ruling Power", "which science calls Nature and religion calls God". She was also a believer and a practitioner of magic, performing curses against those she felt deserved it; in one case she cursed a fellow academic, Jaroslav Černý, when she felt that his promotion to the position of Professor of Egyptology over her friend Walter Bryan Emery was unworthy. Her curse entailed mixing up ingredients in a frying pan, and was undertaken in the presence of two colleagues. In another instance, she was said to have created a wax image of Kaiser Wilhelm II and then melted it during the First World War. Ruth Whitehouse argues that, given Murray's lack of mention of such incidents in her autobiography and generally rational approach, a "spirit of mischief" as opposed to "a real belief in the efficacy of the spells" may have motivated her practice of magic. ## Legacy ### In academia Hutton noted that Murray was one of the earliest women to "make a serious impact upon the world of professional scholarship", and the archaeologist Niall Finneran described her as "one of the greatest characters of post-war British archaeology". 
Upon her death, Daniel referred to her as "the Grand Old Woman of Egyptology", with Hutton noting that Egyptology represented "the core of her academic career". In 2014, Thornton referred to her as "one of Britain's most famous Egyptologists". However, according to the archaeologist Ruth Whitehouse, Murray's contributions to archaeology and Egyptology were often overlooked as her work was overshadowed by that of Petrie, to the extent that she was often thought of primarily as one of Petrie's assistants rather than as a scholar in her own right. By her retirement she had come to be highly regarded within the discipline, although, according to Whitehouse, Murray's reputation declined following her death, something that Whitehouse attributed to the rejection of her witch-cult theory and the general erasure of women archaeologists from the discipline's male-dominated history. In his obituary for Murray in Folklore, James noted that her death was "an event of unusual interest and importance in the annals of the Folk-Lore Society in particular as well as in the wider sphere in which her influence was felt in so many directions and disciplines". However, later academic folklorists, such as Simpson and Wood, have cited Murray and her witch-cult theory as an embarrassment to their field, and to the Folklore Society specifically. Simpson suggested that Murray's position as President of the Society was a causal factor in the mistrustful attitude that many historians held toward folkloristics as an academic discipline, as they erroneously came to believe that all folklorists endorsed Murray's ideas. Similarly, Catherine Noble stated that "Murray caused considerable damage to the study of witchcraft". In 1935, UCL introduced the Margaret Murray Prize, awarded to the student who is deemed to have produced the best dissertation in Egyptology; it continued to be presented annually into the 21st century. In 1969, UCL named one of their common rooms in her honour, but it was converted into an office in 1989. In June 1983, Queen Elizabeth The Queen Mother visited the room and there was gifted a copy of Murray's My First Hundred Years. UCL also hold two busts of Murray, one kept in the Petrie Museum and the other in the library of the UCL Institute of Archaeology. This sculpture was commissioned by one of her students, Violet MacDermot, and produced by the artist Stephen Rickard. UCL also possess a watercolour painting of Murray by Winifred Brunton; formerly exhibited in the Petrie Gallery, it was later placed into the Art Collection stores. In 2013, on the 150th anniversary of Murray's birth and the 50th of her death, the UCL Institute of Archaeology's Ruth Whitehouse described Murray as "a remarkable woman" whose life was "well worth celebrating, both in the archaeological world at large and especially in UCL". The historian of archaeology Rosalind M. Janssen titled her study of Egyptology at UCL The First Hundred Years "as a tribute" to Murray. Murray's friend Margaret Stefana Drower authored a short biography of her, which was included as a chapter in the 2004 edited volume on Breaking Ground: Pioneering Women Archaeologists. In 2013, Lexington Books published The Life of Margaret Alice Murray: A Woman's Work in Archaeology, a biography of Murray authored by Kathleen L. Sheppard, then an assistant professor at Missouri University of Science and Technology; the book was based upon Sheppard's doctoral dissertation produced at the University of Oklahoma. 
Although characterising it as being "written in a clear and engaging manner", one reviewer noted that Sheppard's book focuses on Murray the "scientist" and as such neglects to discuss Murray's involvement in magical practices and her relationship with Wicca. ### In Wicca Murray's witch-cult theories provided the blueprint for the contemporary Pagan religion of Wicca, with Murray being referred to as the "Grandmother of Wicca". The Pagan studies scholar Ethan Doyle White stated that it was the theory which "formed the historical narrative around which Wicca built itself", for on its emergence in England during the 1940s and 1950s, Wicca claimed to be the survival of this witch-cult. Wicca's theological structure, revolving around a Horned God and Mother Goddess, was adopted from Murray's ideas about the ancient witch-cult, and Wiccan groups were named covens and their meetings termed esbats, both words that Murray had popularised. As with Murray's witch-cult, Wicca's practitioners entered via an initiation ceremony; Murray's claims that witches wrote down their spells in a book may have been an influence on Wicca's Book of Shadows. Wicca's early system of seasonal festivities was also based on Murray's framework. Noting that there is no evidence of Wicca existing before the publication of Murray's books, Merrifield commented that for those in 20th century Britain who wished to form their own witches' covens, "Murray may have seemed the ideal fairy godmother, and her theory became the pumpkin coach that could transport them into the realm of fantasy for which they longed". The historian Philip Heselton suggested that the New Forest coven – the oldest alleged Wiccan group – was founded circa 1935 by esotericists who were aware of Murray's theory and who may have believed themselves to be reincarnated witch-cult members. It was Gerald Gardner, who claimed to be an initiate of the New Forest coven, who established the tradition of Gardnerian Wicca and popularised the religion; according to Simpson, Gardner was the only member of the Folklore Society to "wholeheartedly" accept Murray's witch-cult hypothesis. The two knew each other, with Murray writing the foreword to Gardner's 1954 book Witchcraft Today, although in that foreword she did not explicitly specify whether she believed Gardner's claim that he had discovered a survival of her witch-cult. In 2005, Noble suggested that "Murray's name might be all but forgotten today if it were not for Gerald Gardner". Murray's witch-cult theories were likely also a core influence on the non-Gardnerian Wiccan traditions that were established in Britain and Australia between 1930 and 1970 by the likes of Bob Clay-Egerton, Robert Cochrane, Charles Cardell, and Rosaleen Norton. The prominent Wiccan Doreen Valiente eagerly searched for what she believed were other surviving remnants of the Murrayite witch-cult around Britain. Valiente remained committed to a belief in Murray's witch-cult after its academic rejection, and she described Murray as "a remarkable woman". In San Francisco during the late 1960s, Murray's writings were among the sources used by Aidan A. Kelly in the creation of his Wiccan tradition, the New Reformed Orthodox Order of the Golden Dawn. In Los Angeles during the early 1970s, they were used by Zsuzsanna Budapest when she was establishing her feminist-oriented tradition of Dianic Wicca. 
The Murrayite witch-cult theory also provided the basis for the ideas espoused in Witchcraft and the Gay Counterculture, a 1978 book written by the American gay liberation activist Arthur Evans. Members of the Wiccan community gradually became aware of academia's rejection of the witch-cult theory. Accordingly, belief in its literal truth declined during the 1980s and 1990s, with many Wiccans instead coming to view it as a myth that conveyed metaphorical or symbolic truths. Others insisted that the historical origins of the religion did not matter and that instead Wicca was legitimated by the spiritual experiences it gave to its participants. In response, Hutton authored The Triumph of the Moon, a historical study exploring Wicca's early development; on publication in 1999 the book exerted a strong impact on the British Pagan community, further eroding belief in the Murrayite theory among Wiccans. Conversely, other practitioners clung on to the theory, treating it as an important article of faith and rejecting post-Murrayite scholarship on European witchcraft. Several prominent practitioners continued to insist that Wicca was a religion with origins stretching back to the Palaeolithic, but others rejected the validity of historical scholarship and emphasised intuition and emotion as the arbiter of truth. A few "counter-revisionist" Wiccans – among them Donald H. Frew, Jani Farrell-Roberts, and Ben Whitmore – published critiques in which they attacked post-Murrayite scholarship on matters of detail, but none defended Murray's original hypothesis completely. ### In literature Simpson noted that the publication of the Murray thesis in the Encyclopædia Britannica made it accessible to "journalists, film-makers, popular novelists and thriller writers", who adopted it "enthusiastically". It influenced the work of Aldous Huxley and Robert Graves. Murray's ideas shaped the depiction of paganism in the work of historical novelist Rosemary Sutcliff. It was also an influence on the American horror author H. P. Lovecraft, who cited The Witch-Cult in Western Europe in his writings about the fictional cult of Cthulhu. The author Sylvia Townsend Warner cited Murray's work on the witch-cult as an influence on her 1926 novel Lolly Willowes, and sent a copy of her book to Murray in appreciation, with the two meeting for lunch shortly after. There was nevertheless some difference in their depictions of the witch-cult; whereas Murray had depicted an organised pre-Christian cult, Warner depicted a vague family tradition that was explicitly Satanic. In 1927, Warner lectured on the subject of witchcraft, exhibiting a strong influence from Murray's work. Analysing the relationship between Murray and Warner, the English literature scholar Mimi Winick characterised both as being "engaged in imagining new possibilities for women in modernity". The fantasy novel Lammas Night is based on the same idea of a sacrificial role for the royal family that Murray had advanced in The Divine King in England. ## See also - Johann Jakob Bachofen - Howard Carter - James Frazer - René Girard - Robert Graves - Flinders Petrie
7,336,201
Maurice Wilder-Neligan
1,107,546,852
World War I Australian Army officer
[ "1882 births", "1923 deaths", "Australian Army officers", "Australian Companions of the Distinguished Service Order", "Australian Companions of the Order of St Michael and St George", "Australian military personnel of World War I", "Australian recipients of the Distinguished Conduct Medal", "Military personnel from Tavistock", "People educated at Bedford School", "People educated at Ipswich School", "Recipients of the Croix de Guerre 1914–1918 (France)", "Royal Horse Artillery soldiers" ]
Lieutenant Colonel Maurice Wilder-Neligan, (4 October 1882 – 10 January 1923), born Maurice Neligan, was an Australian soldier who commanded the South Australian-raised 10th Battalion during the latter stages of World War I. Raised and educated in the United Kingdom, he was briefly a soldier with the Royal Horse Artillery in London, after which he travelled to Australia where he worked in Queensland. He enlisted as a private in the Australian Imperial Force (AIF) on 20 August 1914 at Townsville, under the name Maurice Wilder, giving Auckland, New Zealand, as his place of birth. A sergeant in the 9th Battalion by the time of the Gallipoli landings of April 1915, he was awarded the Distinguished Conduct Medal, the second highest award for acts of gallantry by other ranks. He was quickly commissioned, reaching the rank of temporary captain before the end of the Gallipoli campaign. During his time at Gallipoli he was wounded once, and formally changed his name to Wilder-Neligan. Arriving on the Western Front with the substantive rank of captain, he led a "most brilliant" raid on German trenches near Fleurbaix, and although severely wounded in the head, stuck to his command until the operation was successfully completed. For his actions he was appointed a Companion of the Distinguished Service Order (DSO), the second highest award for gallantry by officers. When he returned from hospital, he was promoted to major, and was in temporary command of his battalion during the Second Battle of Bullecourt in May 1917. In July, he was promoted to lieutenant colonel and appointed to command the 10th Battalion. He led that unit during the Battle of the Menin Road Ridge in September and was appointed a Companion of the Order of St Michael and St George in June 1918. Perhaps his greatest achievement was the capture of Merris in July, for which he was awarded a bar to his DSO, again for gallantry. He continued to skilfully lead his battalion throughout the Hundred Days Offensive and up to the Armistice of 11 November. During the war, in addition to decorations already mentioned, he was awarded the French Croix de guerre and was mentioned in despatches five times. After the war, he worked as a district officer in the Australian-administered Territory of New Guinea, where he died at the age of 40, probably of complications from his war wounds. He was buried on Garua Island, New Britain. Considered by many to be rather eccentric, he was also a successful tactician, a skilful organiser, and highly regarded for his treatment of the soldiers under his command. ## Early life Born Maurice Neligan in Tavistock, Devon, England, on 4 October 1882, he was a son of Canon John West Neligan and his wife Charlotte, née Putland. His elder brother, the Right Reverend Moore Neligan, was the Anglican Bishop of Auckland, New Zealand, from 1903 to 1910. Maurice attended Queen Elizabeth's Grammar School, Ipswich, and Bedford Grammar School. On 18 February 1905, he was married to a divorcee, Frances Jane Wyatt, in London. In 1908, he was brought before the courts in bankruptcy proceedings, owing some £5,500. During the hearing he stated that he had been at sea during the period 1898–1902, and had not been working since he returned, although he had visited Ceylon late the previous year looking for work. He and his wife had one daughter. He was enlisted in the Royal Horse Artillery in September 1910, having lowered his age and given Auckland as his place of birth. 
He served as a soldier for a year before leaving his wife and child at their Park Lane home in London and travelling to Sydney, Australia. In the remaining years before the outbreak of World War I he worked as a clerk at a sugar mill weighbridge in Proserpine, Queensland, and also lived at Kelly's Club Hotel in Brandon, where he formed a close connection to the publican's family. ## World War I Neligan was enlisted in the Australian Imperial Force (AIF) on 20 August 1914 at Townsville, Queensland, under the name Maurice Wilder, again giving Auckland as his place of birth. The AIF was established as Australia's expeditionary force to fight in the war, as the Citizens' Forces were restricted to home defence per the Defence Act (1903). He had first attempted to enlist under his true name and age, and had told the clerk he was married with a child. When told that younger unmarried men were volunteering in great numbers, and that his services would not be required, he merely joined a queue in front of a different clerk, gave the name Wilder and claimed he was a bachelor. With the rank of private he was allotted to the Queensland-raised 9th Battalion of the 3rd Brigade, which was part of the 1st Division, and was given the regimental number 974. Within three weeks he had been promoted to lance corporal, and by late September was a corporal. The battalion embarked for overseas the following month and sailed via Albany, Western Australia, to Egypt on the Themistocles, arriving in early December. On 1 January 1915, Wilder was promoted to sergeant, and he was posted as the battalion orderly room sergeant. ### Gallipoli campaign After completing training, the 3rd Brigade was designated as the covering force for the landing at Anzac Cove, Gallipoli, on 25 April 1915, and so was the first brigade ashore about 4:30 am. By 9:00 am, Wilder was performing the role of battalion adjutant, assisting the acting commanding officer to organise and direct the unit. The day after the landing, Wilder's actions resulted in the award of the Distinguished Conduct Medal, the second highest award for acts of gallantry by other ranks. The citation read: > For conspicuous gallantry on 26 April 1915, near Gaba Tepe. Assisted by another non-commissioned officer, who was subsequently killed, he carried a wounded man into a place of safety under very heavy fire. Later on he was instrumental in collecting stragglers, who he led back into the firing-line. Owing to officer casualties, he was commissioned as a second lieutenant three days after the landing. Along with the rest of the 3rd Brigade, the 9th Battalion was closely engaged in establishing and defending the Anzac beachhead during the ensuing months, rotating between the various posts and trenches. During the heavy Turkish counter-attack of 19 May, Wilder showed "capacity to command at a difficult moment". On the night of 27 May, he commanded a raid by 63 soldiers of the battalion on a Turkish position south of the Anzac perimeter near Gaba Tepe. The destroyer HMS Rattlesnake used its searchlight to pinpoint the trench to be attacked, then pounded it with twenty rounds of high explosive and shrapnel from its guns. Wilder then led his force quickly up the hill towards the Turkish position, and using only the bayonet, they killed six, captured one, and returned to the Australian lines without suffering casualties. According to his biographer, Alec Hill, the success of the raid was ensured by his meticulous planning. 
He was wounded in early June and evacuated to Egypt, but discharged himself from hospital and returned to his unit by early August, when he was promoted to lieutenant. In the same month, he was mentioned in despatches for the first time. In mid-September he was formally appointed as unit adjutant, swiftly followed by a temporary promotion to captain. He formally changed his name to Wilder-Neligan in October, and remained at Anzac until November, after which the battalion was evacuated to Egypt. During its time in Egypt, the 9th Battalion was stationed for a period on the front line in the desert near the Suez Canal. On one occasion, an Ottoman patrol was seen, and the adventurous Wilder-Neligan asked to be allowed to take out a patrol on camels to cut them off. With certain restrictions, this was permitted, but no contact with the enemy occurred. In March 1916, he relinquished his post of adjutant, and was substantively promoted to captain before the battalion left Alexandria for France and the Western Front at the end of the month. ### Western Front #### 9th Battalion Soon after his unit reached the trenches in France, he planned a major raid near Fleurbaix, carefully preparing his men before launching it on the night of 1/2 July. This was a "silent" raid, being undertaken without a preparatory bombardment of the targeted German trenches. Wilder-Neligan split his 148 troops into three groups, and directed them to enter the German trenches about 180 metres (200 yd) apart to avoid German enfilading fire. He trained his raiding party and conducted rehearsals in the days leading up to the operation. On the night of the raid, Wilder-Neligan used machine gun fire to cover the sound of his troops advancing across no man's land. When they were close, he called down a protective barrage behind the German trenches, with machine guns sweeping the enemy flanks. The raiders then rushed forward, entered the enemy trenches and engaged in heavy fighting. During the final rush, Wilder-Neligan came across a German observation post in advance of the enemy trenches. He killed two of the three enemy soldiers occupying it, but the third threw a hand grenade that wounded him severely in the shoulder and in the head, fracturing his skull. Despite his wounds, he continued to direct his men until they had all returned safely. During the raid, the Queenslanders killed 14 Germans, wounded 40, and captured 25 more along with a machine gun, for the loss of seven dead and 26 wounded. The Australian Official War Historian, Charles Bean, described this action as "perhaps the most brilliant raid that Australians undertook" at the time. The success of the operation was recognised by Wilder-Neligan's appointment as a Companion of the Distinguished Service Order, the second highest award for acts of gallantry by officers. The citation read: > For conspicuous gallantry when commanding a raid in force. His careful training and fine leading were responsible for the successes attained. Fifty-three of the enemy were killed and prisoners taken, besides a machine gun, many rifles, and much equipment. Though wounded in the head he stuck to his command. Evacuated to the United Kingdom for treatment, he did not return to his unit until October, when he was promoted to major. In November he was again mentioned in despatches for "distinguished and gallant services and devotion to duty in the field". The 9th Battalion continued to take its turn at the front line through the worst European winter in 40 years. 
On 24 February, Wilder-Neligan was training his troops for another raid in force, this time on the German position known as "The Maze", south of the village of Le Barque, when it was discovered that the Germans were withdrawing from the front line. Along with the rest of the 3rd Brigade, the 9th Battalion was soon advancing towards the village, striking isolated pockets of resistance. After this, the battalion followed up the Germans as they withdrew towards the multi-layered Hindenburg Line of fortifications. In mid-April, parts of the 9th Battalion were in the thick of the fighting against a major German counter-attack at Lagnicourt. Wilder-Neligan was in temporary command of his battalion during the Second Battle of Bullecourt in May 1917, part of the Battle of Arras. For the attack on the German trenches known as O.G. 1 and O.G. 2, the 9th Battalion was placed under the command of the 1st Brigade. Before the attack, he arranged for his men to create dumps of barbed wire, stakes, hand grenades and other equipment and tools near to the battalion assembly point for the attack. He ensured that telephone lines were laid as far forward as possible, then briefed his company commanders and supporting units. Fierce bomb fights occurred in both objective trenches as soon as the 9th Battalion attacked, but the Queenslanders gradually pushed the Germans back. A heavy German barrage then fell on the captured trenches. During the battle, the unit suffered 160 casualties, mainly from bombs. For his actions at Bullecourt, Wilder-Neligan was recommended for the Belgian Order of the Crown, although there is no record of an appointment to the order. For other brief periods in the first half of 1917 he was acting commanding officer of the 9th Battalion. On 23 June 1917, he was temporarily placed in command of his battalion's South Australian-raised sister unit of the 3rd Brigade, the 10th Battalion. After briefly returning to his old unit on 6 July, he took command of the 10th Battalion with the rank of lieutenant colonel on 15 July. This was considered a surprising promotion, as at the time he was junior to forty majors then in the AIF. #### 10th Battalion Wilder-Neligan's first fight with his new battalion was the Battle of the Menin Road Ridge, which commenced on 20 September as part of the Battle of Passchendaele. In the initial planning for this offensive, the 10th Battalion was committed to the third phase of the attack by the 3rd Brigade. For this attack, Wilder-Neligan had split his command into two. The assault would be carried out by two specially trained "storm" companies, with his other two companies responsible for carrying forward ammunition and equipment and "mopping-up" pockets of resistance. During the move to the start line, the two carrying and "mopping-up" companies of the battalion were caught in a heavy and persistent German barrage, suffered significant casualties and became disorganised. Wilder-Neligan sent back his adjutant to re-organise them, and they came through the German shell fire to their assembly positions. Although initially intended as the third wave of the brigade attack, the second wave 12th Battalion had moved to the left as a result of the German barrage, and the 10th effectively joined the second wave on its right. The 10th had also moved forward to avoid the heavy barrage, and had become intermingled with the first wave formed by the 11th Battalion. 
When a machine gun in a pillbox held up the leading troops, Wilder-Neligan sent one of his "storm" platoons to outflank the German position. When the platoon commander was killed, the South Australians "went mad" according to Wilder-Neligan, killing rather than capturing most of the German garrison. When the first objective was reached, Wilder-Neligan had his battalion fill the gap in the line to the right of the 12th Battalion. As soon as the two carrying and "mopping-up" companies reached the second objective, Wilder-Neligan left them there to dig in. During a pause in the battle while his two "storm" companies were waiting to go forward to the third objective, Wilder-Neligan had specially procured copies of the Daily Mirror and Daily Mail newspapers distributed for his men to read. With the help of the heavy creeping barrage, the third objective was quickly taken. During the battle, the unit suffered 207 casualties. On the day following the battle, he was billed by a London newspaper as "The Eccentric Colonel", due to his distribution of newspapers to his waiting troops. Wilder-Neligan was on leave from 25 September to 8 October. The day after he returned, the battalion conducted a raid in force of German positions in Celtic Wood near Passchendaele, part of the Battle of Poelcappelle. The operation was a disaster, largely due to the poor artillery support provided. Of the 85 troops involved, most were killed or wounded. Wilder-Neligan was mentioned in despatches for the third time in November. The recommendation cited his "clever organisation, untiring energy, zeal and exemplary bravery" between February and September 1917, and highlighted his leadership at Bullecourt. The 10th Battalion rotated through front line, reserve and rest areas throughout the winter of 1917/1918. Wilder-Neligan sprained his ankle on 11 January, and was away from his unit for a week. He was away for another week in late January. On 11 February he went on leave, but did not return until 20 May, as he was temporarily commanding his old battalion from 30 March until that date. Wilder-Neligan was mentioned in despatches for the fourth time on 7 April. While he was commanding the 9th Battalion, it participated in the 3rd Brigade's abortive attempt to capture Méteren on 24 April. On 30 May, the 10th Battalion conducted a successful operation to capture several German posts forward of the line near Merris, suffering 64 casualties. After achieving its objectives, the battalion fought off two German counter-attacks. An innovation employed by Wilder-Neligan during this operation was a battery of rifle grenadiers formed from cooks and other headquarters staff, who provided close support to the assaulting troops. The battalion was congratulated by the army corps, division and brigade commanders for its work. In June, Wilder-Neligan was appointed a Companion of the Order of St Michael and St George for his work as commanding officer of the 10th Battalion from September 1917 to February 1918. On 28 June, he was sick with influenza but still on duty when an opportunity for "peaceful penetration" of the German trenches arose. Directed to conduct a demonstration in front of Merris, where the 10th Battalion held the front line, Wilder-Neligan took advantage of the accompanying barrage to encourage his two line companies to capture advanced German posts. He reinforced their initial success by ordering that both companies continue their advance, and simultaneously brought up most of his support and reserve companies to assist. 
The advance was covered by a smoke barrage laid down by trench mortars and rifle grenades. Wilder-Neligan's close coordination of an artillery barrage behind the German posts helped the battalion push even further forward. By the end of the day, the 10th Battalion had captured 500 yards (460 m) of German line, along with 35 prisoners, six machine guns and two trench mortars, for the loss of about 50 casualties. His job done, Wilder-Neligan then reported sick. This time, the battalion received congratulations from the army and army corps commanders. Wilder-Neligan returned to duty on 7 July. On 22–23 July, the 10th Battalion was back in the line opposite Merris. Characteristically, on his own initiative, Wilder-Neligan pushed strong patrols forward on his flanks in an effort to envelop the village. He had achieved some success, and believed that the capture of the village was imminent when a heavy German barrage descended on the battalion outpost line. The new divisional commander, Major General William Glasgow, now became aware of Wilder-Neligan's operation, and, as Wilder-Neligan could not guarantee communication with his flanking patrols due to the barrage, Glasgow ordered the 10th Battalion to withdraw. In his report on the operation, Wilder-Neligan noted that between 60 and 70 Germans had been killed and four prisoners taken, for the loss of two killed and seven wounded. Regarding this operation, he reported that, "the ultimate withdrawal in obedience to the order of the Divisional Commander in no way mitigated its success". A week later, he was given the opportunity to prove the wisdom of his plan to capture Merris. On the night of 29 July, under cover of a precisely planned creeping barrage that worked its way through Merris and 1,000 yards (910 m) beyond it, he sent two companies, totalling about 180 men, on converging lines of attack from the northeast and southwest of the village. After an hour of carefully coordinated artillery, machine gun and trench mortar support, he sent his headquarters platoon into the shattered village to "mop-up" any remaining resistance. At its conclusion, the operation had captured the village, surrounding it with a series of strongly held posts, and had captured 188 Germans for the loss of 35 casualties, only four of whom were killed. Hill states that the capture of Merris was "perhaps [Wilder-Neligan's] greatest achievement". The inspector general of training for the British Expeditionary Force described the capture of Merris as "the best show ever done by a battalion in France". Wilder-Neligan was awarded a bar to his Distinguished Service Order for this "innovative and daring operation". The citation read: > For conspicuous gallantry during a night attack on a village. Owing to his skill and courage, the plan of enveloping the village was successfully carried out, resulting in the capture of 200 prisoners and 30 machine guns. The attacking force suffered less than 20 casualties. In August, Wilder-Neligan led his battalion during the early fighting of the Hundred Days Offensive, which began on 8 August 1918 with the Battle of Amiens. During the fighting for Lihons on 10 August, he brought his battalion up into close support using an unconventional method. He moved 250 yards (230 m) forward of his unit, carrying a signalling lamp on his back, using it to transmit orders to his unit to halt or advance. By this method he brought his battalion into a position from which it could support his old unit, the 9th, with only one casualty. 
From his advanced position, he could see that the flank of the 9th had been held up, and he sent his strongest company to assist in the capture of Crépey Wood. On the following day, Wilder-Neligan was placed in command of a force consisting of his own battalion and the 12th Battalion for the capture of Lihons itself. Despite heavy mist, he showed his clear grasp of the tactical situation when he realised that German counter-attacks were taking advantage of a gap between the forward troops, and quickly cleared it using supporting troops. During 10–14 August, the 10th Battalion suffered 123 casualties. The 10th Battalion was back in action on 22–23 August as the Allied advance continued north of Proyart. It was in a supporting role, protecting the flank of the 1st Brigade. Wilder-Neligan visited the neighbouring battalion, and upon learning of some difficulties due to German positions in a wood, immediately deployed two companies to clear the area, and due to his initiative the advance was able to continue. The 10th Battalion continued in the advance towards the Hindenburg outpost line over the next few days before being relieved for a short period of training and rest. On 18 September, the battalion saw its last fighting of the war during the capture of the Hindenburg outpost line south of the village of Villeret. In heavy fighting, the unit captured the second and third objectives, and was then relieved and went into a period of training and rest. By this point, the battalion had been reduced to a strength of 517 men. On 10 October, Wilder-Neligan was awarded the French Croix de guerre. He was mentioned in despatches for the fifth and final time on 8 November. After the Armistice of 11 November, Wilder-Neligan remained with his battalion until 1 January 1919. He embarked for Australia in July, arriving in Brisbane in September. For his service during the war, in addition to the decorations already mentioned, Wilder-Neligan was awarded the 1914–15 Star, British War Medal and Victory Medal. Wilder-Neligan's clear tactical acumen was accompanied by relentless striving to ensure the needs of his men were met. According to Hill, "above all he was an organiser, some said the best in the AIF". Bean noted that he was "a restless and adventurous spirit", "an impetuous, daredevil officer, but free of the carelessness with which those qualities are often associated", "a gay, wild young Englishman, clever soldier, and inevitably a leader wherever he was", and a "mercurial commander". He had many eccentric habits, and often embarrassed his officers through his actions. He would supervise battalion drill from horseback, armed with a megaphone, with which he would berate the officers incessantly, causing much confusion. On one occasion he even chased the officers off the parade square to show his displeasure with their efforts. He was known by the nicknames "Mad Wilder", "Wily Wilder", and "Mad Neligan". Nevertheless, he was admired and trusted by his men. He was the most highly decorated officer to command the 10th Battalion during the war. ## New Guinea and death In accordance with normal repatriation procedures, Wilder-Neligan's commission in the AIF was terminated in October 1919, following his return to Australia. He was involved in the formation of a soldiers' political party in Queensland, travelling the country and delivering speeches from the back of a truck. On 1 January 1920 he was appointed as a lieutenant colonel in the part-time army, the Citizens' Forces. 
In late March 1920 he transferred to the Australian Naval and Military Expeditionary Force, which was then occupying German New Guinea. This appointment was at the rank of lieutenant. He did not immediately travel to New Guinea, living on the north coast of Queensland for two months, and visiting Brisbane in early May to receive his French Croix de guerre from the former AIF commander Field Marshal Sir William Birdwood during the latter's first visit to Australia. Wilder-Neligan travelled to New Guinea later that year to take up his appointment. Wilder-Neligan's initial role was as a deputy district officer for the garrison of Rabaul on the island of Neupommern (later renamed New Britain). In May 1921, when the military administration of the former German colony was handed over to an Australian civil one mandated by the League of Nations, he was transferred to the administration of the newly created Territory of New Guinea as district officer for Talasea in west New Britain. Early in January 1923, the Administrator, Evan Wisdom, summoned him to Rabaul to answer allegations of financial malpractice that had been made against him by a former German planter. It appears that he resigned and sailed for Rabaul. Going ashore to rest for a few days at the village of Ekerapi before continuing his journey, he died during the night of 9/10 January. Survived by his wife and daughter, he died intestate and with debts. A coronial inquiry conducted by the acting district officer in Talasea did not find a cause of death but concluded that there were no suspicious circumstances. It is believed that he died of complications from his war wounds. The men of the 10th Battalion AIF Club contacted his widow to ask that his remains be reinterred in the AIF Cemetery, Adelaide, South Australia, and offered their assistance, but she declined, choosing to have him remain buried on Garua Island in New Guinea. On 23 April 1927, a photograph of Wilder-Neligan's grave was published on the front page of the Adelaide newspaper, The Mail, along with a summary of his exploits during the war.
18,449,273
Maya civilization
1,172,360,651
Mesoamerican former civilization
[ "1697 disestablishments in North America", "2nd-millennium BC establishments", "Former countries in North America", "Former monarchies of North America", "History of Belize", "History of Chiapas", "History of El Salvador", "History of Guatemala", "History of Honduras", "History of the Yucatán Peninsula", "Maya civilization" ]
The Maya civilization (/ˈmaɪə/) was a Mesoamerican civilization that existed from antiquity to the early modern period. It is known by its ancient temples and glyphs (script). The Maya script is the most sophisticated and highly developed writing system in the pre-Columbian Americas. The civilization is also noted for its art, architecture, mathematics, calendar, and astronomical system. The Maya civilization developed in the Maya Region, an area that today comprises southeastern Mexico, all of Guatemala and Belize, and the western portions of Honduras and El Salvador. It includes the northern lowlands of the Yucatán Peninsula and the Guatemalan Highlands of the Sierra Madre, the Mexican state of Chiapas, southern Guatemala, El Salvador, and the southern lowlands of the Pacific littoral plain. Today, their descendants, known collectively as the Maya, number well over 6 million individuals, speak more than twenty-eight surviving Mayan languages, and reside in nearly the same area as their ancestors. The Archaic period, before 2000 BC, saw the first developments in agriculture and the earliest villages. The Preclassic period (c. 2000 BC to 250 AD) saw the establishment of the first complex societies in the Maya region, and the cultivation of the staple crops of the Maya diet, including maize, beans, squashes, and chili peppers. The first Maya cities developed around 750 BC, and by 500 BC these cities possessed monumental architecture, including large temples with elaborate stucco façades. Hieroglyphic writing was being used in the Maya region by the 3rd century BC. In the Late Preclassic a number of large cities developed in the Petén Basin, and the city of Kaminaljuyu rose to prominence in the Guatemalan Highlands. Beginning around 250 AD, the Classic period is largely defined as when the Maya were raising sculpted monuments with Long Count dates. This period saw the Maya civilization develop many city-states linked by a complex trade network. In the Maya Lowlands two great rivals, the cities of Tikal and Calakmul, became powerful. The Classic period also saw the intrusive intervention of the central Mexican city of Teotihuacan in Maya dynastic politics. In the 9th century, there was a widespread political collapse in the central Maya region, resulting in civil wars, the abandonment of cities, and a northward shift of population. The Postclassic period saw the rise of Chichen Itza in the north, and the expansion of the aggressive Kʼicheʼ kingdom in the Guatemalan Highlands. In the 16th century, the Spanish Empire colonised the Mesoamerican region, and a lengthy series of campaigns saw the fall of Nojpetén, the last Maya city, in 1697. Rule during the Classic period centred on the concept of the "divine king", who was thought to act as a mediator between mortals and the supernatural realm. Kingship was usually (but not exclusively) patrilineal, and power normally passed to the eldest son. A prospective king was expected to be a successful war leader as well as a ruler. Closed patronage systems were the dominant force in Maya politics, although how patronage affected the political makeup of a kingdom varied from city-state to city-state. By the Late Classic period, the aristocracy had grown in size, reducing the previously exclusive power of the king. The Maya developed sophisticated art forms using both perishable and non-perishable materials, including wood, jade, obsidian, ceramics, sculpted stone monuments, stucco, and finely painted murals. Maya cities tended to expand organically. 
The city centers comprised ceremonial and administrative complexes, surrounded by an irregularly shaped sprawl of residential districts. Different parts of a city were often linked by causeways. Architecturally, city buildings included palaces, pyramid-temples, ceremonial ballcourts, and structures specially aligned for astronomical observation. The Maya elite were literate, and developed a complex system of hieroglyphic writing. Theirs was the most advanced writing system in the pre-Columbian Americas. The Maya recorded their history and ritual knowledge in screenfold books, of which only three uncontested examples remain, the rest having been destroyed by the Spanish. In addition, a great many examples of Maya texts can be found on stelae and ceramics. The Maya developed a highly complex series of interlocking ritual calendars, and employed mathematics that included one of the earliest known instances of the explicit zero in human history. As a part of their religion, the Maya practised human sacrifice. ## Etymology "Maya" is a modern term used to refer collectively to the various peoples that inhabited this area. They did not call themselves "Maya" and did not have a sense of common identity or political unity. ## Geography The Maya civilization occupied a wide territory that included southeastern Mexico and northern Central America. This area included the entire Yucatán Peninsula and all of the territory now in the modern countries of Guatemala and Belize, as well as the western portions of Honduras and El Salvador. Most of the peninsula is formed by a vast plain with few hills or mountains and a generally low coastline. The territory of the Maya covered a third of Mesoamerica, and the Maya were engaged in a dynamic relationship with neighbouring cultures that included the Olmecs, Mixtecs, Teotihuacan, and Aztecs. During the Early Classic period, the Maya cities of Tikal and Kaminaljuyu were key Maya foci in a network that extended into the highlands of central Mexico; there was a strong Maya presence at the Tetitla compound of Teotihuacan. The Maya city of Chichen Itza and the distant Toltec capital of Tula had an especially close relationship. The Petén region consists of densely forested low-lying limestone plain; a chain of fourteen lakes runs across the central drainage basin of Petén. To the south the plain gradually rises towards the Guatemalan Highlands. The dense Maya forest covers northern Petén and Belize, most of Quintana Roo, southern Campeche, and a portion of the south of Yucatán state. Farther north, the vegetation turns to lower forest consisting of dense scrub. The littoral zone of Soconusco lies to the south of the Sierra Madre de Chiapas, and consists of a narrow coastal plain and the foothills of the Sierra Madre. The Maya highlands extend eastwards from Chiapas into Guatemala, reaching their highest in the Sierra de los Cuchumatanes. Their major pre-Columbian population centres were in the largest highland valleys, such as the Valley of Guatemala and the Quetzaltenango Valley. In the southern highlands, a belt of volcanic cones runs parallel to the Pacific coast. The highlands extend northwards into Verapaz, and gradually descend to the east. ## History The history of Maya civilization is divided into three principal periods: the Preclassic, Classic, and Postclassic. These were preceded by the Archaic Period, during which the first settled villages and early developments in agriculture emerged. 
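The chronology sketched here hangs on Long Count dates, the day counts that the lowland Maya carved on their monuments and that are used throughout this section to bracket the Classic period. As a purely illustrative aside, not drawn from the article's sources, the following minimal Python sketch shows how such a date converts to a European calendar date; it assumes the standard Long Count place values (kʼin = 1 day, winal = 20, tun = 360, kʼatun = 7,200, bʼakʼtun = 144,000) and the widely used Goodman–Martínez–Thompson correlation constant of 584,283 (a 584,285 variant is also in use), and it converts to the proleptic Gregorian calendar.

```python
# Minimal sketch: convert a Maya Long Count date to a proleptic Gregorian date.
# Assumes the GMT correlation constant 584283; 584285 is a common variant.

PLACE_VALUES = (144000, 7200, 360, 20, 1)   # bʼakʼtun, kʼatun, tun, winal, kʼin
GMT_CORRELATION = 584283                    # Julian Day Number of Long Count 0.0.0.0.0

def long_count_to_days(date):
    """Days elapsed since the Long Count era base, e.g. (10, 4, 0, 0, 0)."""
    return sum(value * digit for value, digit in zip(PLACE_VALUES, date))

def jdn_to_gregorian(jdn):
    """Julian Day Number -> (year, month, day) in the proleptic Gregorian calendar."""
    a = jdn + 32044
    b = (4 * a + 3) // 146097
    c = a - 146097 * b // 4
    d = (4 * c + 3) // 1461
    e = c - 1461 * d // 4
    m = (5 * e + 2) // 153
    day = e - (153 * m + 2) // 5 + 1
    month = m + 3 - 12 * (m // 10)
    year = 100 * b + d - 4800 + m // 10
    return year, month, day

# The final Long Count date mentioned later in this article, 10.4.0.0.0 from Toniná:
print(jdn_to_gregorian(GMT_CORRELATION + long_count_to_days((10, 4, 0, 0, 0))))
# -> (909, 1, 18): January of AD 909 under these assumptions
```

Because the tun is 18 winal rather than 20 (keeping it close to the length of the solar year), the Long Count is a modified base-20 positional system, which is why the conversion reduces to a simple weighted sum of the five digits.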
Modern scholars regard these periods as arbitrary divisions of Maya chronology, rather than indicative of cultural evolution or decline. Definitions of the start and end dates of period spans can vary by as much as a century, depending on the author. ### Preclassic period (c. 2000 BC – 250 AD) The Maya developed their first civilization in the Preclassic period. Scholars continue to discuss when this era of Maya civilization began. Maya occupation at Cuello (modern Belize) has been carbon dated to around 2600 BC. Settlements were established around 1800 BC in the Soconusco region of the Pacific coast, and the Maya were already cultivating the staple crops of maize, beans, squash, and chili pepper. This period was characterised by sedentary communities and the introduction of pottery and fired clay figurines. During the Middle Preclassic Period, small villages began to grow to form cities. Nakbe in the Petén department of Guatemala is the earliest well-documented city in the Maya lowlands, where large structures have been dated to around 750 BC. The northern lowlands of Yucatán were widely settled by the Middle Preclassic. By approximately 400 BC, early Maya rulers were raising stelae. A developed script was already being used in Petén by the 3rd century BC. In the Late Preclassic Period, the enormous city of El Mirador grew to cover approximately 16 square kilometres (6.2 sq mi). Although not as large, Tikal was already a significant city by around 350 BC. In the highlands, Kaminaljuyu emerged as a principal centre in the Late Preclassic. Takalik Abaj and Chocolá were two of the most important cities on the Pacific coastal plain, and Komchen grew to become an important site in northern Yucatán. The Late Preclassic cultural florescence collapsed in the 1st century AD and many of the great Maya cities of the epoch were abandoned; the cause of this collapse is unknown. ### Classic period (c. 250–900 AD) The Classic period is largely defined as the period during which the lowland Maya raised dated monuments using the Long Count calendar. This period marked the peak of large-scale construction and urbanism, the recording of monumental inscriptions, and demonstrated significant intellectual and artistic development, particularly in the southern lowland regions. The Classic period Maya political landscape has been likened to that of Renaissance Italy or Classical Greece, with multiple city-states engaged in a complex network of alliances and enmities. The largest cities had 50,000 to 120,000 people and were linked to networks of subsidiary sites. During the Early Classic, cities throughout the Maya region were influenced by the great metropolis of Teotihuacan in the distant Valley of Mexico. In AD 378, Teotihuacan decisively intervened at Tikal and other nearby cities, deposed their rulers, and installed a new Teotihuacan-backed dynasty. This intervention was led by Siyaj Kʼakʼ ("Born of Fire"), who arrived at Tikal in early 378. The king of Tikal, Chak Tok Ichʼaak I, died on the same day, suggesting a violent takeover. A year later, Siyaj Kʼakʼ oversaw the installation of a new king, Yax Nuun Ahiin I. This led to a period of political dominance when Tikal became the most powerful city in the central lowlands. Tikal's great rival was Calakmul, another powerful city in the Petén Basin. 
Tikal and Calakmul both developed extensive systems of allies and vassals; lesser cities that entered one of these networks gained prestige from their association with the top-tier city, and maintained peaceful relations with members of the network. Tikal and Calakmul engaged in the manoeuvring of their alliance networks against each other. At various points during the Classic period, one or other of these powers would gain a strategic victory over its great rival, resulting in respective periods of florescence and decline. In 629, Bʼalaj Chan Kʼawiil, a son of the Tikal king Kʼinich Muwaan Jol II, was sent to found a new city at Dos Pilas, in the Petexbatún region, apparently as an outpost to extend Tikal's power beyond the reach of Calakmul. For the next two decades he fought loyally for his brother and overlord at Tikal. In 648, king Yuknoom Chʼeen II of Calakmul captured Bʼalaj Chan Kʼawiil. Yuknoom Chʼeen II then reinstated Bʼalaj Chan Kʼawiil upon the throne of Dos Pilas as his vassal. He thereafter served as a loyal ally of Calakmul. In the southeast, Copán was the most important city. Its Classic-period dynasty was founded in 426 by Kʼinich Yax Kʼukʼ Moʼ. The new king had strong ties with central Petén and Teotihuacan. Copán reached the height of its cultural and artistic development during the rule of Uaxaclajuun Ubʼaah Kʼawiil, from 695 to 738. His reign ended catastrophically when he was captured by his vassal, king Kʼakʼ Tiliw Chan Yopaat of Quiriguá. The captured lord of Copán was taken back to Quiriguá and was decapitated in a public ritual. It is likely that this coup was backed by Calakmul, in order to weaken a powerful ally of Tikal. Palenque and Yaxchilan were the most powerful cities in the Usumacinta region. In the highlands, Kaminaljuyu in the Valley of Guatemala was already a sprawling city by 300. In the north of the Maya area, Coba was the most important capital. #### Classic Maya collapse During the 9th century AD, the central Maya region suffered major political collapse, marked by the abandonment of cities, the ending of dynasties, and a northward shift in activity. No universally accepted theory explains this collapse, but it likely had a combination of causes, including endemic internecine warfare, overpopulation resulting in severe environmental degradation, and drought. During this period, known as the Terminal Classic, the northern cities of Chichen Itza and Uxmal showed increased activity. Major cities in the northern Yucatán Peninsula were inhabited long after the cities of the southern lowlands ceased to raise monuments. Classic Maya social organization was based on the ritual authority of the ruler, rather than central control of trade and food distribution. This model was poorly structured to respond to changes, because the ruler's actions were limited by tradition to such activities as construction, ritual, and warfare. This only served to exacerbate systemic problems. By the 9th and 10th centuries, this resulted in the collapse of this system of rulership. In the northern Yucatán, individual rule was replaced by a ruling council formed from elite lineages. In the southern Yucatán and central Petén, kingdoms declined; in western Petén and some other areas, the changes were catastrophic and resulted in the rapid depopulation of cities. Within a couple of generations, large swathes of the central Maya area were all but abandoned. Both the capitals and their secondary centres were generally abandoned within a period of 50 to 100 years.
One by one, cities stopped sculpting dated monuments; the last Long Count date was inscribed at Toniná in 909. Stelae were no longer raised, and squatters moved into abandoned royal palaces. Mesoamerican trade routes shifted and bypassed Petén. ### Postclassic period (c. 950–1539 AD) Although much reduced, a significant Maya presence remained into the Postclassic period after the abandonment of the major Classic period cities; the population was particularly concentrated near permanent water sources. Unlike during previous cycles of contraction, abandoned lands were not quickly resettled in the Postclassic. Activity shifted to the northern lowlands and the Maya Highlands; this may have involved migration from the southern lowlands, because many Postclassic Maya groups had migration myths. Chichen Itza and its Puuc neighbours declined dramatically in the 11th century, and this may represent the final episode of Classic Period collapse. After the decline of Chichen Itza, the Maya region lacked a dominant power until the rise of the city of Mayapan in the 12th century. New cities arose near the Caribbean and Gulf coasts, and new trade networks were formed. The Postclassic Period was marked by changes from the preceding Classic Period. The once-great city of Kaminaljuyu in the Valley of Guatemala was abandoned after continuous occupation of almost 2,000 years. Across the highlands and neighbouring Pacific coast, long-occupied cities in exposed locations were relocated, apparently due to a proliferation of warfare. Cities came to occupy more-easily defended hilltop locations surrounded by deep ravines, with ditch-and-wall defences sometimes supplementing the natural terrain. One of the most important cities in the Guatemalan Highlands at this time was Qʼumarkaj, the capital of the aggressive Kʼicheʼ kingdom. The government of Maya states, from the Yucatán to the Guatemalan highlands, was often organised as joint rule by a council. However, in practice one member of the council could act as a supreme ruler, while the other members served him as advisors. Mayapan was abandoned around 1448, after a period of political, social and environmental turbulence that in many ways echoed the Classic period collapse in the southern Maya region. The abandonment of the city was followed by a period of prolonged warfare, disease and natural disasters in the Yucatán Peninsula, which ended only shortly before Spanish contact in 1511. Even without a dominant regional capital, the early Spanish explorers reported wealthy coastal cities and thriving marketplaces. During the Late Postclassic, the Yucatán Peninsula was divided into a number of independent provinces that shared a common culture but varied in internal sociopolitical organization. On the eve of the Spanish conquest, the highlands of Guatemala were dominated by several powerful Maya states. The Kʼicheʼ had carved out a small empire covering a large part of the western Guatemalan Highlands and the neighbouring Pacific coastal plain. However, in the decades before the Spanish conquest, the Kaqchikel kingdom had been steadily eroding the kingdom of the Kʼicheʼ. ### Contact period and Spanish conquest (1511–1697 AD) In 1511, a Spanish caravel was wrecked in the Caribbean, and about a dozen survivors made landfall on the coast of Yucatán. They were seized by a Maya lord, and most were sacrificed, although two escaped.
From 1517 to 1519, three separate Spanish expeditions explored the Yucatán coast, and engaged in a number of battles with the Maya inhabitants. After the Aztec capital Tenochtitlan fell to the Spanish in 1521, Hernán Cortés despatched Pedro de Alvarado to Guatemala with 180 cavalry, 300 infantry, 4 cannons, and thousands of allied warriors from central Mexico; they arrived in Soconusco in 1523. The Kʼicheʼ capital, Qʼumarkaj, fell to Alvarado in 1524. Shortly afterwards, the Spanish were invited as allies into Iximche, the capital city of the Kaqchikel Maya. Good relations did not last, due to excessive Spanish demands for gold as tribute, and the city was abandoned a few months later. This was followed by the fall of Zaculeu, the Mam Maya capital, in 1525. Francisco de Montejo and his son, Francisco de Montejo the Younger, launched a long series of campaigns against the polities of the Yucatán Peninsula in 1527, and finally completed the conquest of the northern portion of the peninsula in 1546. This left only the Maya kingdoms of the Petén Basin independent. In 1697, Martín de Ursúa launched an assault on the Itza capital Nojpetén and the last independent Maya city fell to the Spanish. ### Persistence of Maya culture The Spanish conquest stripped away most of the defining features of Maya civilization. However, many Maya villages remained remote from Spanish colonial authority, and for the most part continued to manage their own affairs. Maya communities and the nuclear family maintained their traditional day-to-day life. The basic Mesoamerican diet of maize and beans continued, although agricultural output was improved by the introduction of steel tools. Traditional crafts such as weaving, ceramics, and basketry continued to be practised. Community markets and trade in local products continued long after the conquest. At times, the colonial administration encouraged the traditional economy in order to extract tribute in the form of ceramics or cotton textiles, although these were usually made to European specifications. Maya beliefs and language proved resistant to change, despite vigorous efforts by Catholic missionaries. The 260-day tzolkʼin ritual calendar continues in use in modern Maya communities in the highlands of Guatemala and Chiapas, and millions of Mayan-language speakers inhabit the territory in which their ancestors developed their civilization. ### Investigation of Maya civilization The agents of the Catholic Church wrote detailed accounts of the Maya, in support of their efforts at Christianization, and absorption of the Maya into the Spanish Empire. This was followed by various Spanish priests and colonial officials who left descriptions of ruins they visited in Yucatán and Central America. In 1839, American traveller and writer John Lloyd Stephens set out to visit a number of Maya sites with English architect and draftsman Frederick Catherwood. Their illustrated accounts of the ruins sparked strong popular interest, and brought the Maya to world attention. The later 19th century saw the recording and recovery of ethnohistoric accounts of the Maya, and the first steps in deciphering Maya hieroglyphs. The final two decades of the 19th century saw the birth of modern scientific archaeology in the Maya region, with the meticulous work of Alfred Maudslay and Teoberto Maler. By the early 20th century, the Peabody Museum was sponsoring excavations at Copán and in the Yucatán Peninsula. 
In the first two decades of the 20th century, advances were made in deciphering the Maya calendar, and identifying deities, dates, and religious concepts. Since the 1930s, archaeological exploration increased dramatically, with large-scale excavations across the Maya region. In the 1960s, Mayanist J. Eric S. Thompson promoted the ideas that Maya cities were essentially vacant ceremonial centres serving a dispersed population in the forest, and that the Maya civilization was governed by peaceful astronomer-priests. These ideas began to collapse with major advances in the decipherment of the script in the late 20th century, pioneered by Heinrich Berlin, Tatiana Proskouriakoff, and Yuri Knorozov. With breakthroughs in understanding of Maya script since the 1950s, the texts revealed the warlike activities of the Classic Maya kings, undermining the view of the Maya as peaceful. ## Politics Unlike the Aztecs and the Inca, the Maya political system never integrated the entire Maya cultural area into a single state or empire. Rather, throughout its history, the Maya area contained a varying mix of political complexity that included both states and chiefdoms. These polities fluctuated greatly in their relationships with each other and were engaged in a complex web of rivalries, periods of dominance or submission, vassalage, and alliances. At times, different polities achieved regional dominance, such as Calakmul, Caracol, Mayapan, and Tikal. The first reliably evidenced polities formed in the Maya lowlands in the 9th century BC. During the Late Preclassic, the Maya political system coalesced into a theopolitical form, where elite ideology justified the ruler's authority, and was reinforced by public display, ritual, and religion. The divine king was the centre of political power, exercising ultimate control over administrative, economic, judicial, and military functions. The divine authority invested within the ruler was such that the king was able to mobilize both the aristocracy and commoners in executing huge infrastructure projects, apparently with no police force or standing army. Some polities engaged in a strategy of increasing administration, and filling administrative posts with loyal supporters rather than blood relatives. Within a polity, mid-ranking population centres would have played a key role in managing resources and internal conflict. The Maya political landscape was highly complex and Maya elites engaged in political intrigue to gain economic and social advantage over neighbours. In the Late Classic, some cities established a long period of dominance over other large cities, such as the dominance of Caracol over Naranjo for half a century. In other cases, loose alliance networks were formed around a dominant city. Border settlements, usually located about halfway between neighbouring capitals, often switched allegiance over the course of their history, and at times acted independently. Dominant capitals exacted tribute in the form of luxury items from subjugated population centres. Political power was reinforced by military power, and the capture and humiliation of enemy warriors played an important part in elite culture. An overriding sense of pride and honour among the warrior aristocracy could lead to extended feuds and vendettas, which caused political instability and the fragmentation of polities. ## Society From the Early Preclassic, Maya society was sharply divided between the elite and commoners. 
As population increased over time, various sectors of society became increasingly specialised, and political organization increasingly complex. By the Late Classic, when populations had grown enormously and hundreds of cities were connected in a complex web of political hierarchies, the wealthy segment of society multiplied. A middle class may have developed that included artisans, low-ranking priests and officials, merchants, and soldiers. Commoners included farmers, servants, labourers, and slaves. According to indigenous histories, land was held communally by noble houses or clans. Such clans held that the land was the property of the ancestors, and ties between the land and the ancestors were reinforced by the burial of the dead within residential compounds. ### King and court Classic Maya rule was centred in a royal culture that was displayed in all areas of Classic Maya art. The king was the supreme ruler and held a semi-divine status that made him the mediator between the mortal realm and that of the gods. From very early times, kings were specifically identified with the young maize god, whose gift of maize was the basis of Mesoamerican civilization. Maya royal succession was patrilineal, and royal power only passed to queens when doing otherwise would result in the extinction of the dynasty. Typically, power was passed to the eldest son. A young prince was called a chʼok ("youth"), although this word later came to refer to nobility in general. The royal heir was called bʼaah chʼok ("head youth"). Various points in the prince's childhood were marked by ritual; the most important was a bloodletting ceremony at age five or six. Although being of the royal bloodline was of utmost importance, the heir also had to be a successful war leader, as demonstrated by the taking of captives. The enthronement of a new king was a highly elaborate ceremony, involving a series of separate acts that included enthronement upon a jaguar-skin cushion, human sacrifice, and receiving the symbols of royal power, such as a headband bearing a jade representation of the so-called "jester god", an elaborate headdress adorned with quetzal feathers, and a sceptre representing the god Kʼawiil. Maya political administration, based around the royal court, was not bureaucratic in nature. Government was hierarchical, and official posts were sponsored by higher-ranking members of the aristocracy; officials tended to be promoted to higher levels of office over their lives. Officials are referred to as being "owned" by their sponsor, and this relationship continued even after the death of the sponsor. The Maya royal court was a vibrant and dynamic political institution. There was no universal structure for the Maya royal court; instead, each polity formed a royal court that was suited to its own individual context. A number of royal and noble titles have been identified by epigraphers translating Classic Maya inscriptions. Ajaw is usually translated as "lord" or "king". In the Early Classic, an ajaw was the ruler of a city. Later, with increasing social complexity, the ajaw was a member of the ruling class and a major city could have more than one, each ruling over different districts. Paramount rulers distinguished themselves from the extended nobility by prefixing the word kʼuhul to their ajaw title. A kʼuhul ajaw was "divine lord", originally confined to the kings of the most prestigious and ancient royal lines.
Kalomte was a royal title, whose exact meaning is not yet deciphered, but it was held only by the most powerful kings of the strongest dynasties. It indicated an overlord, or high king, and was only in use during the Classic period. By the Late Classic, the absolute power of the kʼuhul ajaw had weakened, and the political system had diversified to include a wider aristocracy, which by this time may well have expanded disproportionately. A sajal was ranked below the ajaw, and indicated a subservient lord. A sajal would be lord of a second- or third-tier site, answering to an ajaw, who may himself have been subservient to a kalomte. A sajal would often be a war captain or regional governor, and inscriptions often link the sajal title to warfare; they are often mentioned as the holders of war captives. Sajal meant "feared one". The titles of ah tzʼihb and ah chʼul hun are both related to scribes. The ah tzʼihb was a royal scribe, usually a member of the royal family; the ah chʼul hun was the Keeper of the Holy Books, a title that is closely associated with the ajaw title, indicating that an ajaw always held the ah chʼul hun title simultaneously. Other courtly titles, the functions of which are not well understood, were yajaw kʼahkʼ ("Lord of Fire"), tiʼhuun and tiʼsakhuun. These last two may be variations on the same title, and Marc Zender has suggested that the holder of this title may have been the spokesman for the ruler. Courtly titles are overwhelmingly male-oriented, and on those relatively rare occasions where they are applied to a woman, they appear to be used as honorifics for female royalty. Titled elites were often associated with particular structures in the hieroglyphic inscriptions of Classic period cities, indicating that such office holders either owned that structure, or that the structure was an important focus for their activities. A lakam, or standard-bearer, was possibly the only non-elite post-holder in the royal court. The lakam was only found in larger sites, and appears to have been responsible for the taxation of local districts. Different factions may have existed in the royal court. The kʼuhul ajaw and his household would have formed the central power-base, but other important groups were the priesthood, the warrior aristocracy, and other aristocratic courtiers. Where ruling councils existed, as at Chichen Itza and Copán, these may have formed an additional faction. Rivalry between different factions would have led to dynamic political institutions as compromises and disagreements were played out. In such a setting, public performance was vital. Such performances included ritual dances, presentation of war captives, offerings of tribute, human sacrifice, and religious ritual. ### Commoners Commoners are estimated to have comprised over 90% of the population, but relatively little is known about them. Their houses were generally constructed from perishable materials, and their remains have left little trace in the archaeological record. Some commoner dwellings were raised on low platforms, and these can be identified, but an unknown quantity of commoner houses were not. Such low-status dwellings can only be detected by extensive remote-sensing surveys of apparently empty terrain. The range of commoners was broad; it consisted of everyone not of noble birth, and therefore included everyone from the poorest farmers to wealthy craftsmen and commoners appointed to bureaucratic positions.
Commoners engaged in essential production activities, including that of products destined for use by the elite, such as cotton and cacao, as well as subsistence crops for their own use, and utilitarian items such as ceramics and stone tools. Commoners took part in warfare, and could advance socially by proving themselves as outstanding warriors. Commoners paid taxes to the elite in the form of staple goods such as maize, flour and game. It is likely that hard-working commoners who displayed exceptional skills and initiative could become influential members of Maya society. ## Warfare Warfare was prevalent in the Maya world. Military campaigns were launched for a variety of reasons, including the control of trade routes and tribute, raids to take captives, scaling up to the complete destruction of an enemy state. Little is known about Maya military organization, logistics, or training. Warfare is depicted in Maya art from the Classic period, and wars and victories are mentioned in hieroglyphic inscriptions. Unfortunately, the inscriptions do not provide information upon the causes of war, or the form it took. In the 8th–9th centuries, intensive warfare resulted in the collapse of the kingdoms of the Petexbatún region of western Petén. The rapid abandonment of Aguateca by its inhabitants has provided a rare opportunity to examine the remains of Maya weaponry in situ. Aguateca was stormed by unknown enemies around 810 AD, who overcame its formidable defences and burned the royal palace. The elite inhabitants of the city either fled or were captured, and never returned to collect their abandoned property. The inhabitants of the periphery abandoned the site soon after. This is an example of intensive warfare carried out by an enemy in order to eliminate a Maya state, rather than subjugate it. Research at Aguateca indicated that Classic period warriors were primarily members of the elite. From as early as the Preclassic period, the ruler of a Maya polity was expected to be a distinguished war leader, and was depicted with trophy heads hanging from his belt. In the Classic period, such trophy heads no longer appeared on the king's belt, but Classic period kings are frequently depicted standing over humiliated war captives. Right up to the end of the Postclassic period, Maya kings led as war captains. Maya inscriptions from the Classic show that a defeated king could be captured, tortured, and sacrificed. The Spanish recorded that Maya leaders kept track of troop movements in painted books. The outcome of a successful military campaign could vary in its impact on the defeated polity. In some cases, entire cities were sacked, and never resettled, as at Aguateca. In other instances, the victors would seize the defeated rulers, their families, and patron gods. The captured nobles and their families could be imprisoned, or sacrificed. At the least severe end of the scale, the defeated polity would be obliged to pay tribute to the victor. ### Warriors During the Contact period, certain military positions were held by members of the aristocracy, and were passed on by patrilineal succession. It is likely that the specialised knowledge inherent in the particular military role was taught to the successor, including strategy, ritual, and war dances. Maya armies of the Contact period were highly disciplined, and warriors participated in regular training exercises and drills; every able-bodied adult male was available for military service. 
Maya states did not maintain standing armies; warriors were mustered by local officials who reported back to appointed war leaders. There were also units of full-time mercenaries who followed permanent leaders. Most warriors were not full-time, however, and were primarily farmers; the needs of their crops usually came before warfare. Maya warfare was not so much aimed at destruction of the enemy as the seizure of captives and plunder. There is some evidence from the Classic period that women provided supporting roles in war, but they did not act as military officers with the exception of those rare ruling queens. By the Postclassic, the native chronicles suggest that women occasionally fought in battle. ### Weapons The atlatl (spear-thrower) was introduced to the Maya region by Teotihuacan in the Early Classic. This was a 0.5-metre-long (1.6 ft) stick with a notched end to hold a dart or javelin. The stick was used to launch the missile with more force and accuracy than simply hurling it with the arm. Evidence in the form of stone blade points recovered from Aguateca indicates that darts and spears were the primary weapons of the Classic Maya warrior. Commoners used blowguns in war, which also served as their hunting weapon. The bow and arrow was used by the ancient Maya for both war and hunting. Although present in the Maya region during the Classic period, its use as a weapon of war was not favoured; it did not become a common weapon until the Postclassic. The Contact period Maya also used two-handed swords crafted from strong wood with the blade fashioned from inset obsidian, similar to the Aztec macuahuitl. Maya warriors wore body armour in the form of quilted cotton that had been soaked in salt water to toughen it; the resulting armour compared favourably to the steel armour worn by the Spanish when they conquered the region. Warriors bore wooden or animal hide shields decorated with feathers and animal skins. ## Trade Trade was a key component of Maya society and of the development of the Maya civilization. The cities that grew to become the most important usually controlled access to vital trade goods, or portage routes. Cities such as Kaminaljuyu and Qʼumarkaj in the Guatemalan Highlands, and Chalchuapa in El Salvador, variously controlled access to the sources of obsidian at different points in Maya history. The Maya were major producers of cotton, which was used to make the textiles to be traded throughout Mesoamerica. The most important cities in the northern Yucatán Peninsula controlled access to the sources of salt. In the Postclassic, the Maya engaged in a flourishing slave trade with wider Mesoamerica. The Maya engaged in long-distance trade across the Maya region, and across greater Mesoamerica and beyond. As an illustration, an Early Classic Maya merchant quarter has been identified at the distant metropolis of Teotihuacan, in central Mexico. Within Mesoamerica beyond the Maya area, trade routes particularly focused on central Mexico and the Gulf coast. In the Postclassic, Chichen Itza was at the hub of an extensive trade network that imported gold discs from Colombia and Panama, and turquoise from Los Cerrillos, New Mexico. Long-distance trade of both luxury and utilitarian goods was probably controlled by the royal family. Prestige goods obtained by trade were used both for consumption by the city's ruler, and as luxury gifts to consolidate the loyalty of vassals and allies.
Trade routes not only supplied physical goods, they facilitated the movement of people and ideas throughout Mesoamerica. Shifts in trade routes occurred with the rise and fall of important cities in the Maya region, and have been identified in every major reorganization of the Maya civilization, such as the rise of Preclassic Maya civilization, the transition to the Classic, and the Terminal Classic collapse. Even the Spanish Conquest did not immediately terminate all Maya trading activity; for example, the Contact period Manche Chʼol traded the prestige crops of cacao, annatto and vanilla into colonial Verapaz. ### Merchants Little is known of Maya merchants, although they are depicted on Maya ceramics in elaborate noble dress, so at least some were members of the elite. During the Contact period, Maya nobility took part in long-distance trading expeditions. The majority of traders were middle class, but were largely engaged in local and regional trade rather than the prestigious long-distance trading that was the preserve of the elite. The travelling of merchants into dangerous foreign territory was likened to a passage through the underworld; the patron deities of merchants were two underworld gods carrying backpacks. When merchants travelled, they painted themselves black, like their patron gods, and went heavily armed. The Maya had no pack animals, so all trade goods were carried on the backs of porters when going overland; if the trade route followed a river or the coast, then goods were transported in canoes. A substantial Maya trading canoe made from a large hollowed-out tree trunk was encountered off Honduras on Christopher Columbus's fourth voyage. The canoe was 2.5 metres (8.2 ft) broad and was powered by 25 rowers. Trade goods carried included cacao, obsidian, ceramics, textiles, and copper bells and axes. Cacao was used as currency (although not exclusively), and its value was such that counterfeiting occurred by removing the flesh from the pod, and stuffing it with dirt or avocado rind. ### Marketplaces Marketplaces are difficult to identify archaeologically. However, the Spanish reported a thriving market economy when they arrived in the region. At some Classic period cities, archaeologists have tentatively identified formal arcade-style masonry architecture and parallel alignments of scattered stones as the permanent foundations of market stalls. A 2007 study compared soils from a modern Guatemalan market to a proposed ancient market at Chunchucmil; unusually high levels of zinc and phosphorus at both sites indicated similar food production and vegetable sales activity. The calculated density of market stalls at Chunchucmil strongly suggests that a thriving market economy already existed in the Early Classic. Archaeologists have tentatively identified marketplaces at an increasing number of Maya cities by means of a combination of archaeology and soil analysis. When the Spanish arrived, Postclassic cities in the highlands had markets in permanent plazas, with officials on hand to settle disputes, enforce rules, and collect taxes. ## Art Maya art is essentially the art of the royal court. It is almost exclusively concerned with the Maya elite and their world. Maya art was crafted from both perishable and non-perishable materials, and served to link the Maya to their ancestors. Although surviving Maya art represents only a small proportion of the art that the Maya created, it represents a wider variety of subjects than any other art tradition in the Americas. 
Maya art has many regional styles, and is unique in the ancient Americas in bearing narrative text. The finest surviving Maya art dates to the Late Classic period. The Maya exhibited a preference for the colour green or blue-green, and used the same word for the colours blue and green. Correspondingly, they placed high value on apple-green jade, and other greenstones, associating them with the sun-god Kʼinich Ajau. They sculpted jade into artefacts ranging from fine tesserae and beads to carved heads weighing 4.42 kilograms (9.7 lb). The Maya nobility practised dental modification, and some lords wore encrusted jade in their teeth. Mosaic funerary masks could also be fashioned from jade, such as that of Kʼinich Janaabʼ Pakal, king of Palenque. Maya stone sculpture emerged into the archaeological record as a fully developed tradition, suggesting that it may have evolved from a tradition of sculpting wood. Because of the biodegradability of wood, the corpus of Maya woodwork has almost entirely disappeared. The few wooden artefacts that have survived include three-dimensional sculptures, and hieroglyphic panels. Maya stone stelae are widespread in city sites, often paired with low, circular stones referred to as altars in the literature. Stone sculpture also took other forms, such as the limestone relief panels at Palenque and Piedras Negras. At Yaxchilan, Dos Pilas, Copán, and other sites, stone stairways were decorated with sculpture. The hieroglyphic stairway at Copán comprises the longest surviving Maya hieroglyphic text, and consists of 2,200 individual glyphs. The largest Maya sculptures consisted of architectural façades crafted from stucco. The rough form was laid out on a plain plaster base coating on the wall, and the three-dimensional form was built up using small stones. Finally, this was coated with stucco and moulded into the finished form; human body forms were first modelled in stucco, with their costumes added afterwards. The final stucco sculpture was then brightly painted. Giant stucco masks were used to adorn temple façades by the Late Preclassic, and such decoration continued into the Classic period. The Maya had a long tradition of mural painting; rich polychrome murals have been excavated at San Bartolo, dating to between 300 and 200 BC. Walls were coated with plaster, and polychrome designs were painted onto the smooth finish. The majority of such murals have not survived, but Early Classic tombs painted in cream, red, and black have been excavated at Caracol, Río Azul, and Tikal. Among the best-preserved murals are a full-size series of Late Classic paintings at Bonampak. Flint, chert, and obsidian all served utilitarian purposes in Maya culture, but many pieces were finely crafted into forms that were never intended to be used as tools. Eccentric flints are among the finest lithic artefacts produced by the ancient Maya. They were technically very challenging to produce, requiring considerable skill on the part of the artisan. Large obsidian eccentrics can measure over 30 centimetres (12 in) in length. Their actual form varies considerably but they generally depict human, animal and geometric forms associated with Maya religion. Eccentric flints show a great variety of forms, such as crescents, crosses, snakes, and scorpions. The largest and most elaborate examples display multiple human heads, with minor heads sometimes branching off from a larger one.
Maya textiles are very poorly represented in the archaeological record, although by comparison with other pre-Columbian cultures, such as the Aztecs and the Andean region, it is likely that they were high-value items. Scraps of textile have been recovered, but the best evidence for textile art is where textiles are represented in other media, such as painted murals or ceramics. Such secondary representations show the elite of the Maya court adorned with sumptuous cloths; generally these would have been cotton, but jaguar pelts and deer hides are also shown. Ceramics are the most commonly surviving type of Maya art. The Maya had no knowledge of the potter's wheel, and Maya vessels were built up by coiling rolled strips of clay into the desired form. Maya pottery was not glazed, although it often had a fine finish produced by burnishing. Maya ceramics were painted with clay slips blended with minerals and coloured clays. Ancient Maya firing techniques have yet to be replicated. A quantity of extremely fine ceramic figurines has been excavated from Late Classic tombs on Jaina Island, in northern Yucatán. They stand from 10 to 25 centimetres (3.9 to 9.8 in) high and were hand-modelled, with exquisite detail. The Ik-style polychrome ceramic corpus, including finely painted plates and cylindrical vessels, originated in Late Classic Motul de San José. It includes a set of features such as hieroglyphs painted in a pink or pale red colour and scenes with dancers wearing masks. One of the most distinctive features is the realistic representation of subjects as they appeared in life. The subject matter of the vessels includes courtly life from the Petén region in the 8th century AD, such as diplomatic meetings, feasting, bloodletting, scenes of warriors and the sacrifice of prisoners of war. Bone, both human and animal, was also sculpted; human bones may have been trophies, or relics of ancestors. The Maya valued Spondylus shells, and worked them to remove the white exterior and spines, to reveal the fine orange interior. Around the 10th century AD, metallurgy arrived in Mesoamerica from South America, and the Maya began to make small objects in gold, silver and copper. The Maya generally hammered sheet metal into objects such as beads, bells, and discs. In the last centuries before the Spanish Conquest, the Maya began to use the lost-wax method to cast small metal pieces. One poorly studied area of Maya folk art is graffiti. Such graffiti, which was not part of the planned decoration, was incised into the stucco of interior walls, floors, and benches, in a wide variety of buildings, including temples, residences, and storerooms. Graffiti has been recorded at 51 Maya sites, particularly clustered in the Petén Basin and southern Campeche, and the Chenes region of northwestern Yucatán. At Tikal, where a great quantity of graffiti has been recorded, the subject matter includes drawings of temples, people, deities, animals, banners, litters, and thrones. Graffiti was often inscribed haphazardly, with drawings overlapping each other, and displays a mix of crude, untrained art and examples by artists familiar with Classic-period artistic conventions. ## Architecture The Maya produced a vast array of structures, and have left an extensive architectural legacy. Maya architecture also incorporates various art forms and hieroglyphic texts. Masonry architecture built by the Maya evidences craft specialization in Maya society, centralised organization and the political means to mobilize a large workforce.
A large elite residence at Copán is estimated to have required 10,686 man-days to build, compared with 67 man-days for a commoner's hut. It is further estimated that 65% of the labour required to build the noble residence was used in the quarrying, transporting, and finishing of the stone used in construction, and 24% of the labour was required for the manufacture and application of limestone-based plaster. Altogether, it is estimated that two to three months were required for the construction of the residence for this single noble at Copán, using between 80 and 130 full-time labourers. A Classic-period city like Tikal was spread over 20 square kilometres (7.7 sq mi), with an urban core covering 6 square kilometres (2.3 sq mi). The labour required to build such a city was immense, running into many millions of man-days. The most massive structures ever erected by the Maya were built during the Preclassic period. Craft specialization would have required dedicated stonemasons and plasterers by the Late Preclassic, as well as planners and architects. ### Urban design Maya cities were not formally planned, and were subject to irregular expansion, with the haphazard addition of palaces, temples and other buildings. Most Maya cities tended to grow outwards from the core, and upwards as new structures were superimposed upon preceding architecture. Maya cities usually had a ceremonial and administrative centre surrounded by a vast irregular sprawl of residential complexes. The centres of all Maya cities featured sacred precincts, sometimes separated from nearby residential areas by walls. These precincts contained pyramid temples and other monumental architecture dedicated to elite activities, such as basal platforms that supported administrative or elite residential complexes. Sculpted monuments were raised to record the deeds of the ruling dynasty. City centres also featured plazas, sacred ballcourts and buildings used for marketplaces and schools. Frequently, causeways linked the centre to outlying areas of the city. Some of these classes of architecture formed lesser groups in the outlying areas of the city, which served as sacred centres for non-royal lineages. The areas adjacent to these sacred compounds included residential complexes housing wealthy lineages. The largest and richest of these elite compounds sometimes possessed sculpture and art of craftsmanship equal to that of royal art. The ceremonial centre of the Maya city was where the ruling elite lived, and where the administrative functions of the city were performed, together with religious ceremonies. It was also where the inhabitants of the city gathered for public activities. Elite residential complexes occupied the best land around the city centre, while commoners had their residences dispersed further away from the ceremonial centre. Residential units were built on top of stone platforms to raise them above the level of the rainy season floodwaters. ### Building materials and methods The Maya built their cities with Neolithic technology; their structures were made from both perishable materials and from stone. The exact type of stone used in masonry construction varied according to locally available resources, and this also affected the building style. Across a broad swathe of the Maya area, limestone was immediately available. The local limestone is relatively soft when freshly cut, but hardens with exposure.
There was great variety in the quality of limestone, with good-quality stone available in the Usumacinta region; in the northern Yucatán, the limestone used in construction was of relatively poor quality. Volcanic tuff was used at Copán, and nearby Quiriguá employed sandstone. In Comalcalco, where suitable stone was not available locally, fired bricks were employed. Limestone was burned at high temperatures in order to manufacture cement, plaster, and stucco. Lime-based cement was used to seal stonework in place, and stone blocks were fashioned using rope-and-water abrasion, and with obsidian tools. The Maya did not employ a functional wheel, so all loads were transported on litters, barges, or rolled on logs. Heavy loads were lifted with rope, but probably without employing pulleys. Wood was used for beams, and for lintels, even in masonry structures. Throughout Maya history, common huts and some temples continued to be built from wooden poles and thatch. Adobe was also applied; this consisted of mud strengthened with straw and was applied as a coating over the woven-stick walls of huts, even after the development of masonry structures. In the southern Maya area, adobe was employed in monumental architecture when no suitable stone was locally available. ### Principal construction types The great cities of the Maya civilization were composed of pyramid temples, palaces, ballcourts, sacbeob (causeways), patios and plazas. Some cities also possessed extensive hydraulic systems or defensive walls. The exteriors of most buildings were painted, either in one or multiple colours, or with imagery. Many buildings were adorned with sculpture or painted stucco reliefs. #### Palaces and acropoleis These complexes were usually located in the site core, beside a principal plaza. Maya palaces consisted of a platform supporting a multiroom range structure. The term acropolis, in a Maya context, refers to a complex of structures built upon platforms of varying height. Palaces and acropoleis were essentially elite residential compounds. They generally extended horizontally as opposed to the towering Maya pyramids, and often had restricted access. Some structures in Maya acropoleis supported roof combs. Rooms often had stone benches for sleeping, and holes indicate where curtains once hung. Large palaces, such as at Palenque, could be fitted with a water supply, and sweat baths were often found within the complex, or nearby. During the Early Classic, rulers were sometimes buried underneath the acropolis complex. Some rooms in palaces were true throne rooms; in the royal palace of Palenque there were a number of throne rooms that were used for important events, including the inauguration of new kings. Palaces are usually arranged around one or more courtyards, with their façades facing inwards; some examples are adorned with sculpture. Some palaces possess associated hieroglyphic descriptions that identify them as the royal residences of named rulers. There is abundant evidence that palaces were far more than simple elite residences, and that a range of courtly activities took place in them, including audiences, formal receptions, and important rituals. #### Pyramids and temples Temples were sometimes referred to in hieroglyphic texts as kʼuh nah, meaning "god's house". Temples were raised on platforms, most often upon a pyramid. The earliest temples were probably thatched huts built upon low platforms. 
By the Late Preclassic period, their walls were of stone, and the development of the corbel arch allowed stone roofs to replace thatch. By the Classic period, temple roofs were being topped with roof combs that extended the height of the temple and served as a foundation for monumental art. Temple shrines contained one to three rooms, and were dedicated to important deities. Such a deity might be one of the patron gods of the city, or a deified ancestor. In general, freestanding pyramids were shrines honouring powerful ancestors. #### E-Groups and observatories The Maya were keen observers of the sun, stars, and planets. E-Groups were a particular arrangement of temples that were relatively common in the Maya region; they take their names from Group E at Uaxactun. They consisted of three small structures facing a fourth structure, and were used to mark the solstices and equinoxes. The earliest examples date to the Preclassic period. The Lost World complex at Tikal started out as an E-Group built towards the end of the Middle Preclassic. Due to its nature, the basic layout of an E-Group was constant. A structure was built on the west side of a plaza; it was usually a radial pyramid with stairways facing the cardinal directions. It faced east across the plaza to three small temples on the far side. From the west pyramid, the sun was seen to rise over these temples on the solstices and equinoxes. E-Groups were raised across the central and southern Maya area for over a millennium; not all were properly aligned as observatories, and their function may have been symbolic. As well as E-Groups, the Maya built other structures dedicated to observing the movements of celestial bodies. Many Maya buildings were aligned with astronomical bodies, including the planet Venus, and various constellations. The Caracol structure at Chichen Itza was a circular multi-level edifice, with a conical superstructure. It has slit windows that marked the movements of Venus. At Copán, a pair of stelae were raised to mark the position of the setting sun at the equinoxes. #### Triadic pyramids Triadic pyramids first appeared in the Preclassic. They consisted of a dominant structure flanked by two smaller inward-facing buildings, all mounted upon a single basal platform. The largest known triadic pyramid was built at El Mirador in the Petén Basin; it covers an area six times as large as that covered by Temple IV, the largest pyramid at Tikal. The three superstructures all have stairways leading up from the central plaza on top of the basal platform. No securely established forerunners of Triadic Groups are known, but they may have developed from the eastern range building of E-Group complexes. The triadic form was the predominant architectural form in the Petén region during the Late Preclassic. Examples of triadic pyramids are known from as many as 88 archaeological sites. At Nakbe, there are at least a dozen examples of triadic complexes and the four largest structures in the city are triadic in nature. At El Mirador there are probably as many as 36 triadic structures. Examples of the triadic form are even known from Dzibilchaltun in the far north of the Yucatán Peninsula, and Qʼumarkaj in the Highlands of Guatemala. The triadic pyramid remained a popular architectural form for centuries after the first examples were built; it continued in use into the Classic Period, with later examples being found at Uaxactun, Caracol, Seibal, Nakum, Tikal and Palenque. 
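The E-Group alignment described above can be illustrated with a little spherical astronomy. The sketch below is illustrative only and is not drawn from the article's sources: it assumes a flat horizon, the modern solar declination limits of about ±23.4°, and a latitude of roughly 17.4° N (about that of Uaxactun, whose Group E gave the complex its name), and it ignores atmospheric refraction and horizon elevation.

```python
# Minimal sketch of the solar geometry behind an E-Group: at Petén latitudes the
# rising sun swings roughly 25 degrees either side of due east over the year, so
# three temples on the eastern side of a plaza can bracket the solstice extremes
# and the equinoctial midpoint. Flat horizon; refraction and horizon height ignored.
import math

def sunrise_azimuth(latitude_deg, declination_deg):
    """Azimuth of sunrise, in degrees east of true north, for a level horizon."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

LATITUDE = 17.4  # approximate latitude of Uaxactun (an assumption for illustration)

for label, declination in [("June solstice", 23.44),
                           ("Equinoxes", 0.0),
                           ("December solstice", -23.44)]:
    print(f"{label:17} sunrise azimuth = {sunrise_azimuth(LATITUDE, declination):5.1f} deg")
# June solstice     sunrise azimuth =  65.4 deg
# Equinoxes         sunrise azimuth =  90.0 deg
# December solstice sunrise azimuth = 114.6 deg
```

The annual swing of roughly 49° in sunrise azimuth is what allows three temples spaced along the east side of a plaza to mark the two solstices and, midway between them, the equinoxes.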
The Qʼumarkaj example is the only one that has been dated to the Postclassic Period. The triple-temple form of the triadic pyramid appears to be related to Maya mythology. #### Ballcourts The ballcourt is a distinctive pan-Mesoamerican form of architecture. Although the majority of Maya ballcourts date to the Classic period, the earliest examples appeared around 1000 BC in northwestern Yucatán, during the Middle Preclassic. By the time of Spanish contact, ballcourts were only in use in the Guatemalan Highlands, at cities such as Qʼumarkaj and Iximche. Throughout Maya history, ballcourts maintained a characteristic form consisting of an ɪ shape, with a central playing area terminating in two transverse end zones. The central playing area usually measures between 20 and 30 metres (66 and 98 ft) long, and is flanked by two lateral structures that stood up to 3 or 4 metres (9.8 or 13.1 ft) high. The lateral platforms often supported structures that may have held privileged spectators. The Great Ballcourt at Chichen Itza is the largest in Mesoamerica, measuring 83 metres (272 ft) long by 30 metres (98 ft) wide, with walls standing 8.2 metres (27 ft) high. ### Regional architectural styles Although Maya cities shared many common features, there was considerable variation in architectural style. Such styles were influenced by locally available construction materials, climate, topography, and local preferences. In the Late Classic, these local differences developed into distinctive regional architectural styles. #### Central Petén The central Petén style of architecture is modelled after the great city of Tikal. The style is characterised by tall pyramids supporting a summit shrine adorned with a roof comb, and accessed by a single doorway. Additional features are the use of stela-altar pairings, and the decoration of architectural façades, lintels, and roof combs with relief sculptures of rulers and gods. One of the finest examples of Central Petén style architecture is Tikal Temple I. Examples of sites in the Central Petén style include Altun Ha, Calakmul, Holmul, Ixkun, Nakum, Naranjo, and Yaxhá. #### Puuc The exemplar of Puuc-style architecture is Uxmal. The style developed in the Puuc Hills of northwestern Yucatán; during the Terminal Classic it spread beyond this core region across the northern Yucatán Peninsula. Puuc sites replaced rubble cores with lime cement, resulting in stronger walls, and also strengthened their corbel arches; this allowed Puuc-style cities to build freestanding entrance archways. The upper façades of buildings were decorated with precut stones mosaic-fashion, erected as facing over the core, forming elaborate compositions of long-nosed deities such as the rain god Chaac and the Principal Bird Deity. The motifs also included geometric patterns, lattices and spools, possibly influenced by styles from highland Oaxaca, outside the Maya area. In contrast, the lower façades were left undecorated. Roof combs were relatively uncommon at Puuc sites. #### Chenes The Chenes style is very similar to the Puuc style, but predates the use of the mosaic façades of the Puuc region. It featured fully adorned façades on both the upper and lower sections of structures. Some doorways were surrounded by mosaic masks of monsters representing mountain or sky deities, identifying the doorways as entrances to the supernatural realm. Some buildings contained interior stairways that accessed different levels. 
The Chenes style is most commonly encountered in the southern portion of the Yucatán Peninsula, although individual buildings in the style can be found elsewhere in the peninsula. Examples of Chenes sites include Dzibilnocac, Hochob, Santa Rosa Xtampak, and Tabasqueño.

#### Río Bec

The Río Bec style forms a sub-region of the Chenes style, and also features elements of the Central Petén style, such as prominent roof combs. Its palaces are distinctive for their false-tower decorations, lacking interior rooms, with steep, almost vertical, stairways and false doors. These towers were adorned with deity masks, and were built to impress the viewer, rather than serve any practical function. Such false towers are only found in the Río Bec region. Río Bec sites include Chicanná, Hormiguero, and Xpuhil.

#### Usumacinta

The Usumacinta style developed in the hilly terrain of the Usumacinta drainage. Cities took advantage of the hillsides to support their major architecture, as at Palenque and Yaxchilan. Sites modified corbel vaulting to allow thinner walls and multiple access doors to temples. As in Petén, roof combs adorned principal structures. Palaces had multiple entrances that used post-and-lintel construction rather than corbel vaulting. Many sites erected stelae, but Palenque instead developed finely sculpted panelling to decorate its buildings.

## Language

Before 2000 BC, the Maya spoke a single language, dubbed Proto-Mayan by linguists. Linguistic analysis of reconstructed Proto-Mayan vocabulary suggests that the original Proto-Mayan homeland was in the western or northern Guatemalan Highlands, although the evidence is not conclusive. Proto-Mayan diverged during the Preclassic period to form the major Mayan language groups that make up the family, including Huastecan, Greater Kʼicheʼan, Greater Qʼanjobalan, Mamean, Tzʼeltalan-Chʼolan, and Yucatecan. These groups diverged further during the pre-Columbian era to form over 30 languages that have survived into modern times. The language of almost all Classic Maya texts over the entire Maya area has been identified as Chʼolan; a Late Preclassic text from Kaminaljuyu, in the highlands, also appears to be in, or related to, Chʼolan. The use of Chʼolan as the language of Maya texts does not necessarily indicate that it was the language commonly used by the local populace – it may have been equivalent to Medieval Latin as a ritual or prestige language. Classic Chʼolan may have been the prestige language of the Classic Maya elite, used in inter-polity communication such as diplomacy and trade. By the Postclassic period, Yucatec was also being written in Maya codices alongside Chʼolan.

## Writing and literacy

The Maya writing system is one of the outstanding achievements of the pre-Columbian inhabitants of the Americas. It was the most sophisticated and highly developed writing system of more than a dozen systems that developed in Mesoamerica. The earliest inscriptions in an identifiably Maya script date back to 300–200 BC, in the Petén Basin. However, this was preceded by several other Mesoamerican writing systems, such as the Epi-Olmec and Zapotec scripts. Early Maya script had appeared on the Pacific coast of Guatemala by the late 1st century AD, or early 2nd century. Similarities between the Isthmian script and Early Maya script of the Pacific coast suggest that the two systems developed in tandem. By about AD 250, the Maya script had become a more formalised and consistent writing system.
The Catholic Church and colonial officials, notably Bishop Diego de Landa, destroyed Maya texts wherever they found them, and with them the knowledge of Maya writing, but by chance four uncontested pre-Columbian books dated to the Postclassic period have been preserved. These are known as the Madrid Codex, the Dresden Codex, the Paris Codex and the Maya Codex of Mexico (previously known as the Grolier Codex, which was of disputed authenticity until 2018). Archaeology conducted at Maya sites often reveals other fragments, rectangular lumps of plaster and paint chips that are all that remain of codices; these tantalizing remains are, however, too severely damaged for any inscriptions to have survived, most of the organic material having decayed. In reference to the few extant Maya writings, Michael D. Coe stated:

> [O]ur knowledge of ancient Maya thought must represent only a tiny fraction of the whole picture, for of the thousands of books in which the full extent of their learning and ritual was recorded, only four have survived to modern times (as though all that posterity knew of ourselves were to be based upon three prayer books and 'Pilgrim's Progress').

Most surviving pre-Columbian Maya writing dates to the Classic period and is contained in stone inscriptions from Maya sites, such as stelae, or on ceramic vessels. Other media include the aforementioned codices, stucco façades, frescoes, wooden lintels, cave walls, and portable artefacts crafted from a variety of materials, including bone, shell, obsidian, and jade.

### Writing system

The Maya writing system (often called hieroglyphs from a superficial resemblance to Ancient Egyptian writing) is a logosyllabic writing system, combining a syllabary of phonetic signs representing syllables with logograms representing entire words. Among the writing systems of the Pre-Columbian New World, Maya script most closely represents the spoken language. At any one time, no more than around 500 glyphs were in use, some 200 of which (including variations) were phonetic. The Maya script was in use up to the arrival of the Europeans, its use peaking during the Classic Period. In excess of 10,000 individual texts have been recovered, mostly inscribed on stone monuments, lintels, stelae and ceramics. The Maya also produced texts painted on a form of paper manufactured from processed tree-bark, now generally known by its Nahuatl-language name amatl, which was used to produce codices. The skill and knowledge of Maya writing persisted among segments of the population right up to the Spanish conquest. The knowledge was subsequently lost, as a result of the impact of the conquest on Maya society. The decipherment and recovery of the knowledge of Maya writing has been a long and laborious process. Some elements were first deciphered in the late 19th and early 20th century, mostly the parts having to do with numbers, the Maya calendar, and astronomy. Major breakthroughs were made from the 1950s to 1970s, and accelerated rapidly thereafter. By the end of the 20th century, scholars were able to read the majority of Maya texts, and ongoing work continues to further illuminate the content.

### Logosyllabic script

The basic unit of Maya logosyllabic text is the glyph block, which transcribes a word or phrase. The block is composed of one or more individual glyphs attached to each other to form the glyph block, with individual glyph blocks generally being separated by a space. Glyph blocks are usually arranged in a grid pattern.
For ease of reference, epigraphers label the columns of glyph blocks alphabetically from left to right, and the rows numerically from top to bottom. Thus, any glyph block in a piece of text can be identified; C4 would be the third block counting from the left, and the fourth block counting downwards. If a monument or artefact has more than one inscription, column labels are not repeated; rather, they continue in the alphabetic series. If there are more than 26 columns, the labelling continues as A', B', etc. Numeric row labels restart from 1 for each discrete unit of text. Although Mayan text may be laid out in varying manners, generally it is arranged into double columns of glyph blocks. The reading order of text starts at the top left (block A1), continues to the second block in the double-column (B1), then drops down a row and starts again from the left half of the double column (A2), and thus continues in zig-zag fashion. Once the bottom is reached, the inscription continues from the top left of the next double column (C1). Where an inscription ends in a single (unpaired) column, this final column is usually read straight downwards. Individual glyph blocks may be composed of a number of elements. These consist of the main sign, and any affixes. Main signs represent the major element of the block, and may be a noun, verb, adverb, adjective, or phonetic sign. Some main signs are abstract, some are pictures of the object they represent, and others are "head variants", personifications of the word they represent. Affixes are smaller rectangular elements, usually attached to a main sign, although a block may be composed entirely of affixes. Affixes may represent a wide variety of speech elements, including nouns, verbs, verbal suffixes, prepositions, and pronouns. Small sections of a main sign could be used to represent the whole main sign. Maya scribes were highly inventive in their usage and adaptation of glyph elements.
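The block labelling and double-column reading order described above are regular enough to capture in a few lines of code. The following Python sketch is purely illustrative, with function names of my own choosing rather than anything drawn from epigraphic software; it generates the conventional labels for a grid of glyph blocks and lists them in reading order.

```python
# Conventional reading order of Maya glyph blocks, as described above:
# columns are labelled alphabetically (A, B, C, ...; then A', B', ... past 26),
# rows numerically (1, 2, 3, ...).  Text is read in zig-zag fashion down each
# pair of columns (A1, B1, A2, B2, ...), then continues at the top of the next
# pair (C1, D1, ...); a final unpaired column is read straight down.
# Illustrative sketch only, not taken from any epigraphic software.

from string import ascii_uppercase


def column_label(index: int) -> str:
    """Return A, B, ..., Z, A', B', ... for a zero-based column index."""
    primes, letter = divmod(index, 26)
    return ascii_uppercase[letter] + "'" * primes


def reading_order(columns: int, rows: int) -> list[str]:
    """List glyph-block labels for a grid in conventional reading order."""
    order = []
    for left in range(0, columns, 2):
        pair = range(left, min(left + 2, columns))  # a double column, or a final single one
        for row in range(1, rows + 1):
            for col in pair:
                order.append(f"{column_label(col)}{row}")
    return order


# A text four columns wide and three rows deep:
print(reading_order(4, 3))
# ['A1', 'B1', 'A2', 'B2', 'A3', 'B3', 'C1', 'D1', 'C2', 'D2', 'C3', 'D3']
```

Under this labelling scheme a reference such as C4 is simply the third column, fourth row, independent of where the zig-zag happens to place it in the reading sequence.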
### Writing tools

Although the archaeological record does not provide examples of brushes or pens, analysis of ink strokes on the Postclassic codices suggests that ink was applied with a brush with a tip fashioned from pliable hair. A Classic period sculpture from Copán, Honduras, depicts a scribe with an inkpot fashioned from a conch shell. Excavations at Aguateca uncovered a number of scribal artefacts from the residences of elite-status scribes, including palettes and mortars and pestles.

### Scribes and literacy

Commoners were illiterate; scribes were drawn from the elite. It is not known if all members of the aristocracy could read and write, although at least some women could, since there are representations of female scribes in Maya art. Maya scribes were called aj tzʼib, meaning "one who writes or paints". There were probably scribal schools where members of the aristocracy were taught to write. Scribal activity is identifiable in the archaeological record; Jasaw Chan Kʼawiil I, king of Tikal, was interred with his paint pot. Some junior members of the Copán royal dynasty have also been found buried with their writing implements. A palace at Copán has been identified as that of a noble lineage of scribes; it is decorated with sculpture that includes figures holding ink pots. Although not much is known about Maya scribes, some did sign their work, both on ceramics and on stone sculpture. Usually, only a single scribe signed a ceramic vessel, but multiple sculptors are known to have recorded their names on stone sculpture; eight sculptors signed one stela at Piedras Negras. However, most works remained unsigned by their artists.

## Mathematics

In common with the other Mesoamerican civilizations, the Maya used a base-20 (vigesimal) system. The bar-and-dot counting system that is the base of Maya numerals was in use in Mesoamerica by 1000 BC; the Maya adopted it by the Late Preclassic, and added the symbol for zero. This may have been the earliest known occurrence of the idea of an explicit zero worldwide, although it may have been later than the Babylonian system. The earliest explicit use of zero occurred on monuments dated to 357 AD. In its earliest uses, the zero served as a placeholder, indicating an absence of a particular calendrical count. This later developed into a numeral that was used to perform calculation, and was used in hieroglyphic texts for more than a thousand years, until the writing system was extinguished by the Spanish. The basic number system consists of a dot to represent one, and a bar to represent five. By the Postclassic period a shell symbol represented zero; during the Classic period other glyphs were used. The Maya numerals from 0 to 19 used repetitions of these symbols. The value of a numeral was determined by its position; as a numeral shifted upwards, its basic value was multiplied by twenty. In this way, the lowest symbol would represent units, the next symbol up would represent multiples of twenty, and the symbol above that would represent multiples of 400, and so on. For example, the number 884 would be written with four dots on the lowest level, four dots on the next level up, and two dots on the next level after that, to give 4×1 + 4×20 + 2×400 = 884. Using this system, the Maya were able to record huge numbers. Simple addition could be performed by summing the dots and bars in two columns to give the result in a third column.
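The positional principle just described is easy to verify with a short calculation. The Python sketch below is an illustration with names of my own choosing, not part of any standard library for Maya numerals; it decomposes a whole number into pure base-20 place values and reproduces the 884 example, deliberately ignoring the bar-and-dot composition of each digit and the modified third place of 18 × 20 used in calendrical counts.

```python
# Decompose a whole number into pure vigesimal (base-20) place values,
# mirroring the positional principle of Maya numerals described above.
# Each level holds a digit from 0 to 19; the levels are worth 1, 20, 400, 8,000, ...
# (Calendrical Long Count notation instead makes the third place 18 x 20 = 360;
# that variant is not modelled here.)


def to_vigesimal(n: int) -> list[int]:
    """Return base-20 digits from the lowest level upward."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, digit = divmod(n, 20)
        digits.append(digit)
    return digits


def as_sum(n: int) -> str:
    """Spell out the place-value decomposition, e.g. '4x1 + 4x20 + 2x400 = 884'."""
    terms = [f"{d}x{20 ** level}" for level, d in enumerate(to_vigesimal(n))]
    return " + ".join(terms) + f" = {n}"


print(to_vigesimal(884))   # [4, 4, 2]: four units, four twenties, two four-hundreds
print(as_sum(884))         # 4x1 + 4x20 + 2x400 = 884
```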
## Calendar

The Maya calendrical system, in common with other Mesoamerican calendars, had its origins in the Preclassic period. However, it was the Maya who developed the calendar to its maximum sophistication, recording lunar and solar cycles, eclipses and movements of planets with great accuracy. In some cases, the Maya calculations were more accurate than equivalent calculations in the Old World; for example, the Maya solar year was calculated to greater accuracy than the Julian year. The Maya calendar was intrinsically tied to Maya ritual, and it was central to Maya religious practices. The calendar combined a non-repeating Long Count with three interlocking cycles, each measuring a progressively larger period. These were the 260-day tzolkʼin, the 365-day haabʼ, and the 52-year Calendar Round, resulting from the combination of the tzolkʼin with the haabʼ. There were also additional calendric cycles, such as an 819-day cycle associated with the four quadrants of Maya cosmology, governed by four different aspects of the god Kʼawiil. The basic unit in the Maya calendar was one day, or kʼin, and 20 kʼin were grouped to form a winal. The next unit, instead of being multiplied by 20, as called for by the vigesimal system, was multiplied by 18 in order to provide a rough approximation of the solar year (hence producing 360 days). This 360-day year was called a tun. Each succeeding level of multiplication followed the vigesimal system.

The 260-day tzolkʼin provided the basic cycle of Maya ceremony, and the foundations of Maya prophecy. No astronomical basis for this count has been proved, and it may be that the 260-day count is based on the human gestation period. This is reinforced by the use of the tzolkʼin to record dates of birth, and provide corresponding prophecy. The 260-day cycle repeated a series of 20 day names, with a number from 1 to 13 prefixed to indicate where in the cycle a particular day occurred.

The 365-day haabʼ was produced by a cycle of eighteen named 20-day winals, completed by the addition of a 5-day period called the wayeb. The wayeb was considered to be a dangerous time, when the barriers between the mortal and supernatural realms were broken, allowing malignant deities to cross over and interfere in human concerns. In a similar way to the tzolkʼin, the named winal would be prefixed by a number (from 0 to 19); in the case of the shorter wayeb period, the prefix numbers ran from 0 to 4. Since each day in the tzolkʼin had a name and number (e.g. 8 Ajaw), this would interlock with the haabʼ, producing an additional number and name, to give any day a more complete designation, for example 8 Ajaw 13 Keh. Such a day name could only recur once every 52 years, and this period is referred to by Mayanists as the Calendar Round. In most Mesoamerican cultures, the Calendar Round was the largest unit for measuring time.

As with any non-repeating calendar, the Maya measured time from a fixed start point. The Maya set the beginning of their calendar as the end of a previous cycle of bakʼtuns, equivalent to a day in 3114 BC. This was believed by the Maya to be the day of the creation of the world in its current form. The Maya used the Long Count Calendar to fix any given day of the Calendar Round within their current great Piktun cycle, consisting of 20 bakʼtuns. There was some variation in the calendar; texts at Palenque, for example, demonstrate that the piktun cycle that ended in 3114 BC had only 13 bakʼtuns, but others used a cycle of 13 + 20 bakʼtuns in the current piktun. Additionally, there may have been some regional variation in how these exceptional cycles were managed. A full Long Count date consisted of an introductory glyph followed by five glyphs counting off the number of bakʼtuns, kʼatuns, tuns, winals, and kʼins since the start of the current creation. This would be followed by the tzolkʼin portion of the Calendar Round date, and after a number of intervening glyphs, the Long Count date would end with the haabʼ portion of the Calendar Round date.

### Correlation of the Long Count calendar

Although the Calendar Round is still in use today, the Maya started using an abbreviated Short Count during the Late Classic period. The Short Count is a count of 13 kʼatuns. The Book of Chilam Balam of Chumayel contains the only colonial reference to classic Long Count dates. The most generally accepted correlation is the Goodman-Martínez-Thompson, or GMT, correlation. This equates the Long Count date 11.16.0.0.0 13 Ajaw 8 Xul with the Gregorian date of 12 November 1539. Epigraphers Simon Martin and Nikolai Grube argue for a two-day shift from the standard GMT correlation. The Spinden Correlation would shift the Long Count dates back by 260 years; it also accords with the documentary evidence, and is better suited to the archaeology of the Yucatán Peninsula, but presents problems with the rest of the Maya region. The George Vaillant Correlation would shift all Maya dates 260 years later, and would greatly shorten the Postclassic period. Radiocarbon dating of dated wooden lintels at Tikal supports the GMT correlation.
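Because the Long Count is a simple positional count of days, a correlation such as the GMT can be checked with a few lines of arithmetic. The Python sketch below is illustrative only: the function names are my own, and it assumes the commonly cited GMT correlation constant, which places the era base 0.0.0.0.0 at Julian Day Number 584,283; that constant is not given in the text above and should be treated here as an assumption.

```python
# Convert a Long Count date into days elapsed since the era base (0.0.0.0.0)
# and, assuming the GMT correlation constant of 584,283, into a proleptic
# Gregorian date.  Illustrative sketch only.

from datetime import date

# Place values: bak'tun, k'atun, tun, winal, k'in
PLACE_VALUES = (144_000, 7_200, 360, 20, 1)
GMT_CORRELATION = 584_283      # Julian Day Number assumed for 0.0.0.0.0
JDN_TO_ORDINAL = 1_721_425     # offset between a Julian Day Number and date.toordinal()


def long_count_days(baktun: int, katun: int, tun: int, winal: int, kin: int) -> int:
    """Days elapsed since the Long Count era base."""
    return sum(v * place for v, place in zip((baktun, katun, tun, winal, kin), PLACE_VALUES))


def long_count_to_gregorian(baktun: int, katun: int, tun: int, winal: int, kin: int) -> date:
    jdn = long_count_days(baktun, katun, tun, winal, kin) + GMT_CORRELATION
    return date.fromordinal(jdn - JDN_TO_ORDINAL)


print(long_count_to_gregorian(11, 16, 0, 0, 0))   # 1539-11-12, the date cited for the GMT correlation
print(long_count_to_gregorian(13, 0, 0, 0, 0))    # 2012-12-21, the widely publicised bak'tun ending
```

The place values also make clear why the tun is 360 days: the third position is multiplied by 18 rather than 20, exactly as described above.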
## Astronomy

> The famous astrologer John Dee used an Aztec obsidian mirror to see into the future. We may look down our noses at his ideas, but one may be sure that in outlook he was far closer to a Maya priest astronomer than is an astronomer of our century.

The Maya made meticulous observations of celestial bodies. This information was used for divination, so Maya astronomy was essentially for astrological purposes. Although Maya astronomy was mainly used by the priesthood to comprehend past cycles of time, and project them into the future to produce prophecy, it also had some practical applications, such as providing aid in crop planting and harvesting. The priesthood refined observations and recorded eclipses of the sun and moon, and movements of Venus and the stars; these were measured against dated events in the past, on the assumption that similar events would occur in the future when the same astronomical conditions prevailed. Illustrations in the codices show that priests made astronomical observations using the naked eye, assisted by crossed sticks as a sighting device. Analysis of the few remaining Postclassic codices has revealed that, at the time of European contact, the Maya had recorded eclipse tables, calendars, and astronomical knowledge that was more accurate than comparable contemporary knowledge in Europe. The Maya measured the 584-day Venus cycle with an error of just two hours. Five cycles of Venus equated to eight 365-day haabʼ calendrical cycles, and this period was recorded in the codices. The Maya also followed the movements of Jupiter, Mars and Mercury. When Venus rose as the Morning Star, this was associated with the rebirth of the Maya Hero Twins. For the Maya, the heliacal rising of Venus was associated with destruction and upheaval. Venus was closely associated with warfare, and the hieroglyph meaning "war" incorporated the glyph-element symbolizing the planet. Sight-lines through the windows of the Caracol building at Chichen Itza align with the northernmost and southernmost extremes of Venus' path. Maya rulers launched military campaigns to coincide with the heliacal or cosmical rising of Venus, and would also sacrifice important captives to coincide with such conjunctions.

Solar and lunar eclipses were considered to be especially dangerous events that could bring catastrophe upon the world. In the Dresden Codex, a solar eclipse is represented by a serpent devouring the kʼin ("day") hieroglyph. Eclipses were interpreted as the sun or moon being bitten, and lunar tables were recorded in order that the Maya might be able to predict them, and perform the appropriate ceremonies to ward off disaster.

## Religion and mythology

In common with the rest of Mesoamerica, the Maya believed in a supernatural realm inhabited by an array of powerful deities who needed to be placated with ceremonial offerings and ritual practices. At the core of Maya religious practice was the worship of deceased ancestors, who would intercede for their living descendants in dealings with the supernatural realm. The earliest intermediaries between humans and the supernatural were shamans. Maya ritual included the use of hallucinogens for chilan, oracular priests. Visions for the chilan were likely facilitated by consumption of water lilies, which are hallucinogenic in high doses. As the Maya civilization developed, the ruling elite codified the Maya world view into religious cults that justified their right to rule. In the Late Preclassic, this process culminated in the institution of the divine king, the kʼuhul ajaw, endowed with ultimate political and religious power.
The Maya viewed the cosmos as highly structured. There were thirteen levels in the heavens and nine in the underworld, with the mortal world in between. Each level had four cardinal directions associated with a different colour; north was white, east was red, south was yellow, and west was black. Major deities had aspects associated with these directions and colours. Maya households interred their dead underneath the floors, with offerings appropriate to the social status of the family. There the dead could act as protective ancestors. Maya lineages were patrilineal, so the worship of a prominent male ancestor would be emphasised, often with a household shrine. As Maya society developed, and the elite became more powerful, Maya royalty developed their household shrines into the great pyramids that held the tombs of their ancestors. Belief in supernatural forces pervaded Maya life, from the simplest day-to-day activities such as cooking, to trade, politics, and elite activities. Maya deities governed all aspects of the world, both visible and invisible. The Maya priesthood was a closed group, drawing its members from the established elite; by the Early Classic they were recording increasingly complex ritual information in their hieroglyphic books, including astronomical observations, calendrical cycles, history and mythology. The priests performed public ceremonies that incorporated feasting, bloodletting, incense burning, music, ritual dance, and, on certain occasions, human sacrifice. During the Classic period, the Maya ruler was the high priest, and the direct conduit between mortals and the gods. It is highly likely that, among commoners, shamanism continued in parallel to state religion. By the Postclassic, religious emphasis had changed; there was an increase in worship of the images of deities, and more frequent recourse to human sacrifice. Archaeologists painstakingly reconstruct these ritual practices and beliefs using several techniques. One important, though incomplete, resource is physical evidence, such as dedicatory caches and other ritual deposits, shrines, and burials with their associated funerary offerings. Maya art, architecture, and writing are another resource, and these can be combined with ethnographic sources, including records of Maya religious practices made by the Spanish during the conquest. ### Human sacrifice Blood was viewed as a potent source of nourishment for the Maya deities, and the sacrifice of a living creature was a powerful blood offering. By extension, the sacrifice of a human life was the ultimate offering of blood to the gods, and the most important Maya rituals culminated in human sacrifice. Generally only high status prisoners of war were sacrificed, with lower status captives being used for labour. Important rituals such as the dedication of major building projects or the enthronement of a new ruler required a human offering. The sacrifice of an enemy king was the most prized, and such a sacrifice involved decapitation of the captive ruler, perhaps in a ritual reenactment of the decapitation of the Maya maize god by the death gods. In AD 738, the vassal king Kʼakʼ Tiliw Chan Yopaat of Quiriguá captured his overlord, Uaxaclajuun Ubʼaah Kʼawiil of Copán and a few days later ritually decapitated him. Sacrifice by decapitation is depicted in Classic period Maya art, and sometimes took place after the victim was tortured, being variously beaten, scalped, burnt or disembowelled. 
Another myth associated with decapitation was that of the Hero Twins recounted in the Popol Vuh: playing a ballgame against the gods of the underworld, the heroes achieved victory, but one of each pair of twins was decapitated by their opponents. During the Postclassic period, the most common form of human sacrifice was heart extraction, influenced by the rites of the Aztecs in the Valley of Mexico; this usually took place in the courtyard of a temple, or upon the summit of the pyramid. In one ritual, the corpse would be skinned by assistant priests, except for the hands and feet, and the officiating priest would then dress himself in the skin of the sacrificial victim and perform a ritual dance symbolizing the rebirth of life. Archaeological investigations indicate that heart sacrifice was practised as early as the Classic period.

### Deities

The Maya world was populated by a great variety of deities, supernatural entities and sacred forces. The Maya had such a broad interpretation of the sacred that identifying distinct deities with specific functions is inaccurate. The Maya interpretation of deities was closely tied to the calendar, astronomy, and their cosmology. The importance of a deity, its characteristics, and its associations varied according to the movement of celestial bodies. The priestly interpretation of astronomical records and books was therefore crucial, since the priest would understand which deity required ritual propitiation, when the correct ceremonies should be performed, and what would be an appropriate offering. Each deity had four manifestations, associated with the cardinal directions, each identified with a different colour. They also had a dual day-night/life-death aspect. Itzamna was the creator god, but he also embodied the cosmos, and was simultaneously a sun god; Kʼinich Ahau, the day sun, was one of his aspects. Maya kings frequently identified themselves with Kʼinich Ahau. Itzamna also had a night sun aspect, the Night Jaguar, representing the sun in its journey through the underworld. The four Pawatuns supported the corners of the mortal realm; in the heavens, the Bacabs performed the same function. As well as their four main aspects, the Bacabs had dozens of other aspects that are not well understood. The four Chaacs were storm gods, controlling thunder, lightning, and the rains. The nine lords of the night each governed one of the underworld realms. Other important deities included the moon goddess, the maize god, and the Hero Twins.

The Popol Vuh was written in the Latin script in early colonial times, and was probably transcribed from a hieroglyphic book by an unknown Kʼicheʼ Maya nobleman. It is one of the most outstanding works of indigenous literature in the Americas. The Popol Vuh recounts the mythical creation of the world, the legend of the Hero Twins, and the history of the Postclassic Kʼicheʼ kingdom. Deities recorded in the Popol Vuh include Hun Hunahpu, believed by some to be the Kʼicheʼ maize god, and a triad of deities led by the Kʼicheʼ patron Tohil, and also including the moon goddess Awilix, and the mountain god Jacawitz. In common with other Mesoamerican cultures, the Maya worshipped feathered serpent deities. Such worship was rare during the Classic period, but by the Postclassic the feathered serpent had spread to both the Yucatán Peninsula and the Guatemalan Highlands. In Yucatán, the feathered serpent deity was Kukulkan; among the Kʼicheʼ it was Qʼuqʼumatz.
Kukulkan had his origins in the Classic period War Serpent, Waxaklahun Ubah Kan, and has also been identified as the Postclassic version of the Vision Serpent of Classic Maya art. Although the cult of Kukulkan had its origins in these earlier Maya traditions, the worship of Kukulkan was heavily influenced by the Quetzalcoatl cult of central Mexico. Likewise, Qʼuqʼumatz had a composite origin, combining the attributes of Mexican Quetzalcoatl with aspects of the Classic period Itzamna.

## Agriculture

The ancient Maya had diverse and sophisticated methods of food production. It was once believed that shifting cultivation (swidden) agriculture provided most of their food, but it is now thought that permanent raised fields, terracing, intensive gardening, forest gardens, and managed fallows were also crucial to supporting the large populations of the Classic period in some areas. Indeed, evidence of these different agricultural systems persists today: raised fields connected by canals can be seen on aerial photographs. Contemporary rainforest species composition has significantly higher abundance of species of economic value to the ancient Maya in areas that were densely populated in pre-Columbian times, and pollen records in lake sediments suggest that maize, manioc, sunflower seeds, cotton, and other crops have been cultivated in association with deforestation in Mesoamerica since at least 2500 BC. The basic staples of the Maya diet were maize, beans, and squashes. These were supplemented with a wide variety of other plants either cultivated in gardens or gathered in the forest. At Joya de Cerén, a volcanic eruption preserved a record of foodstuffs stored in Maya homes; among them were chilies and tomatoes. Cotton seeds were in the process of being ground, perhaps to produce cooking oil. In addition to basic foodstuffs, the Maya also cultivated prestige crops such as cotton, cacao and vanilla. Cacao was especially prized by the elite, who consumed chocolate beverages. Cotton was spun, dyed, and woven into valuable textiles in order to be traded. The Maya had few domestic animals; dogs were domesticated by 3000 BC, and the Muscovy duck by the Late Postclassic. Ocellated turkeys were unsuitable for domestication, but were rounded up in the wild and penned for fattening. All of these were used as food animals; dogs were additionally used for hunting. It is possible that deer were also penned and fattened.

## Maya sites

There are hundreds of Maya sites spread across five countries: Belize, El Salvador, Guatemala, Honduras and Mexico. The six sites with particularly outstanding architecture or sculpture are Chichen Itza, Palenque, Uxmal, and Yaxchilan in Mexico, Tikal in Guatemala and Copán in Honduras. Other important, but difficult to reach, sites include Calakmul and El Mirador. The principal sites in the Puuc region, after Uxmal, are Kabah, Labna, and Sayil. In the east of the Yucatán Peninsula are Coba and the small site of Tulum. The Río Bec sites at the base of the peninsula include Becan, Chicanná, Kohunlich, and Xpuhil. The most noteworthy sites in Chiapas, other than Palenque and Yaxchilan, are Bonampak and Toniná. In the Guatemalan Highlands are Iximche, Kaminaljuyu, Mixco Viejo, and Qʼumarkaj (also known as Utatlán). In the northern Petén lowlands of Guatemala there are many sites, though apart from Tikal access is generally difficult. Some of the Petén sites are Dos Pilas, Seibal, and Uaxactún. Important sites in Belize include Altun Ha, Caracol, and Xunantunich.
## Museum collections There are many museums across the world with Maya artefacts in their collections. The Foundation for the Advancement of Mesoamerican Studies lists over 250 museums in its Maya Museum database, and the European Association of Mayanists lists just under 50 museums in Europe alone. ## See also - Entheogenics and the Maya - Huastec civilization - Index of Mexico-related articles - Songs of Dzitbalche - Maya peoples - Maya music
649,289
Interstate 696
1,169,240,962
Interstate Highway in Oakland and Macomb counties in Michigan, United States
[ "Auxiliary Interstate Highways", "Interstate 96", "Interstate Highways in Michigan", "Transportation in Macomb County, Michigan", "Transportation in Oakland County, Michigan" ]
Interstate 696 (I-696) is an east–west auxiliary Interstate Highway in the Metro Detroit region of the US state of Michigan. The state trunkline highway is also known as the Walter P. Reuther Freeway, named for the prominent auto industry union head by the Michigan Legislature in 1971. I-696 is a bypass route, detouring around the city of Detroit through the city's northern suburbs in Oakland and Macomb counties. It starts by branching off I-96 and I-275 at its western terminus in Farmington Hills, and runs through suburbs including Southfield, Royal Oak and Warren before merging into I-94 at St. Clair Shores on the east end. It has eight lanes for most of its length and is approximately 10 miles (16 km) north of downtown Detroit. I-696 connects to other freeways such as I-75 (Chrysler Freeway) and M-10 (Lodge Freeway). Local residents sometimes refer to I-696 as "The Autobahn of Detroit". Planning for the freeway started in the 1950s. Michigan state officials proposed the designation I-98, but this was not approved. Construction started on the first segment in 1961, and the Lodge Freeway was designated Business Spur Interstate 696 (BS I-696) the following year. The western third of the freeway opened in 1963, and the eastern third was completed in January 1979. The central segment was the subject of much controversy during the 1960s and 1970s. Various municipalities along this stretch argued over the routing of the freeway such that the governor locked several officials into a room overnight until they would agree to a routing. Later, various groups used federal environmental regulations to force changes to the freeway. The Orthodox Jewish community in Oak Park was concerned about pedestrian access across the freeway; I-696 was built with a set of parks on overpasses to accommodate their needs. The Detroit Zoo and the City of Detroit also fought components of the freeway design. These concessions delayed the completion of I-696 until December 15, 1989. Since completion, the speed limit was raised from 55 to 70 miles per hour (89 to 113 km/h). In addition, some interchanges were reconfigured in 2006. ## Route description I-696, which has been called "Detroit's Autobahn" by some residents, reflecting a reputation for fast drivers, begins in the west in the city of Novi as a left exit branching off I-96. This ramp is a portion of the I-96/I-696/I-275/M-5 interchange that spans the north–south, Novi–Farmington Hills city line linking together five converging freeways. The freeway curves southeasterly and then northeasterly through the complex as it runs eastward through the adjacent residential subdivisions. I-696 passes south of 12 Mile Road in the Mile Road System through Farmington Hills, passing south of Harrison High School and north of Mercy High School. After crossing into Southfield, I-696 passes through the Mixing Bowl, another complex interchange that spans over two miles (3.2 km) near the American Center involving M-10 (Lodge Freeway and Northwestern Highway) and US Highway 24 (US 24, Telegraph Road) between two partial interchanges with Franklin Road on the west and Lahser Road on the east. The carriageways for I-696 run in the median of M-10 from northwest to southeast. 
East of this interchange, cargo restrictions have been enacted for the next 10-mile-long (16 km) segment of I-696; no commercial vehicles may carry flammable or explosive loads; the segment passes below grade and between retaining walls that are 20–25 feet (6.1–7.6 m) tall, which would hinder evacuation in the event of a fire. During construction in April 1989, vandals set a fire under one of the plazas, and officials were concerned about the intensity of the fire and the potential for a "horizontal towering inferno" along the freeway section once opened to traffic. After passing through the Mixing Bowl, I-696 follows 11 Mile Road, which forms a pair of service drives for the main freeway. The Interstate passes through the city of Lathrup Village before turning southward and then easterly on an S-shaped path to run along 10 Mile Road. This segment of freeway is known for its extensive use of retaining walls; three large landscaped plazas form short tunnels for freeway traffic near the Greenfield Road exit. The freeway passes next to the Jewish Community Center of Metropolitan Detroit as it passes under the third pedestrian plaza. The Interstate then picks up 10 Mile Road, which forms a pair of service drives, as the Reuther runs along the border between the cities of Oak Park and Huntington Woods. I-696 follows the southern edge of the Detroit Zoo. Immediately east of the zoo, the Interstate intersects M-1 (Woodward Avenue), and crosses a line of the Canadian National Railway that also carries Amtrak passenger service between Detroit and Pontiac. East of the rail crossing, I-696 has a four-level stack interchange with I-75 over the quadripoint for Royal Oak, Madison Heights, Hazel Park and Ferndale. This interchange marks the eastern end of the cargo restrictions. I-696 jogs to the northeast near the Hazel Park Raceway, leaving 10 Mile Road. Crossing into Warren in Macomb County at the Dequindre Road interchange, the freeway begins to follow 11 Mile Road again. Near the Detroit Arsenal Tank Plant, I-696 has another stack interchange for Mound Road; through the junction, the freeway makes a slight bend to the south. The freeway continues east through the northern edge of Center Line, crossing a line of Conrail Shared Assets and heading back into Warren. The Interstate crosses into Roseville near the M-97 (Groesbeck Highway) interchange and then meets M-3 (Gratiot Avenue) just west of the eastern terminus at I-94 (the Edsel Ford Freeway) in St. Clair Shores. The service drives merge in this final interchange and 11 Mile Road continues due east to Lake St. Clair. Like other state highways in Michigan, I-696 is maintained by the Michigan Department of Transportation (MDOT). In 2011, the department's traffic surveys showed that on average 185,700 vehicles used the freeway daily east of I-75 and 38,100 vehicles did so each day in part of the Mixing Bowl, the highest and lowest counts along the highway, respectively. As an Interstate Highway, all of I-696 is listed on the National Highway System, a network of roads important to the country's economy, defense, and mobility. ## History ### Planning and initial construction I-696 is part of the original Interstate Highway System as outlined in 1956–58. As originally proposed by the Michigan State Highway Department, the freeway would have been numbered I-98. Construction started in 1961. The Lodge Freeway, the first segment of which opened in 1957, was given the Business Spur I-696 designation in 1962. 
The first segment of I-696 built was the western third of the completed freeway, which opened in 1963–1964 at a cost of \$16.6 million. This section ran from I-96 in Novi east to the Lodge Freeway in Southfield. The then-unfinished freeway was named for Walter P. Reuther, former leader of the United Auto Workers labor union, after he and his wife died in a plane crash on May 9, 1970. The next year the Michigan Legislature approved the naming by passing Senate Concurrent Resolution 57. In the late 1970s, during the second phase of construction, lobbying efforts and lawsuits attempted to block construction of the central section. If successful, the efforts would have left the freeway with a gap in the middle between the first (western) and second (eastern) phases of construction. During this time, MDOT assigned M-6 to the eastern section of the freeway under construction. Signs were erected along the service roads that followed 11 Mile Road to connect the already built stack interchange at I-75 east to I-94. By the time the eastern freeway segment was initially opened in January 1979 between I-94 and I-75, the signage for M-6 had been removed and replaced with I-696 signage; it cost \$200 million to complete. Later in 1979, a closure was scheduled to allow work to be completed on three of the segment's nine interchanges.

### Controversies over middle segment

The central section was the most controversial. Governor James Blanchard was 15 years old and a high school sophomore in neighboring Pleasant Ridge when the freeway was proposed, and he later purchased a home in the area in 1972. He joked during remarks at the dedication in 1989, "The unvarnished truth about this freeway? I wasn't even alive when it was first proposed," and added, "frankly, I never thought it would go through." Total cost at completion for the entire freeway at the end of the 30-year project was \$675 million. Arguments between local officials were so intense that during the 1960s, then-Governor George W. Romney once locked fighting bureaucrats in a community center until they would agree on a path for the freeway. During the 1970s, local groups used then-new environmental regulations to oppose the Interstate. The freeway was noted in a Congressional subcommittee report on the "Major Interstate System Route Controversy in Urban Areas" for the controversies in 1970. Before 1967, local communities had to approve highway locations and designs, and the debates over I-696 prompted the passage of an arbitration statute. That statute was challenged by Pleasant Ridge and Lathrup Village before being upheld by the Michigan Supreme Court. Lathrup Village later withdrew from a planning agreement in 1971; under that agreement, construction on the central section had been scheduled to commence in 1974 and finish in 1976. The community of Orthodox Jews in Oak Park wanted the freeway to pass to the north of their suburb. When this was deemed to be futile, the community asked for changes to the design that would mitigate the impact of the freeway on the pedestrian-dependent community. Final approval in 1981 of the freeway's alignment was contingent on these mitigation measures. To address the community's unique needs, the state hired a rabbi to serve as a consultant on the project. In addition, a series of landscaped plazas were incorporated into the design, forming the tunnels through which I-696 passes.
These structures are a set of three 700-foot-wide (210 m) bridges that cross the freeway within a mile (1.6 km). They allow members of the Jewish community to walk to synagogues on the Sabbath and other holidays when Jewish law prohibits driving. These plazas had their length limited; if they were longer, they would be considered tunnels that would require ventilation systems. The Detroit Zoo was concerned that noise and air pollution from the Interstate would disturb the animals. The zoo was satisfied by \$12 million spent on a new parking ramp and other improvements. The City of Detroit tried to stop I-696 as well, but in the end the city was forced to redesign its golf course. A refusal to grant an additional nine feet (2.7 m) of right-of-way by Detroit forced additional design and construction delays during the 1980s. One of the last obstacles to construction of the freeway was a wetlands area near Southfield. MDOT received a permit from the Michigan Department of Natural Resources to destroy 6½ acres (2.6 ha) of wetland and create a replacement 11-acre (4.5 ha) area. In the process, some prairie roses and wetlands milkweed were transplanted from the path of I-696 in 1987. The final section of the eight-lane freeway opened at a cost of \$436 million on December 15, 1989. At the time, one caller to a Detroit radio show commented, "do you realize we have been to the moon and back in the time it has taken to get that road from Ferndale to Southfield?"

### Since completion

As part of the overall rehabilitation of the Mixing Bowl interchange, a new interchange at Franklin Road was to be constructed in 2006. An exit ramp from I-696 eastbound to American Drive opened in April 2006. An entrance ramp from Franklin Road to I-696 westbound opened in July 2006. The Franklin Road overpass, which had been closed during this time, re-opened in October 2006. On November 9 that year, the speed limit was increased from 65 to 70 mph (105 to 113 km/h) along the length of I-696. During speed enforcement patrols in August 2022, the Michigan State Police issued 77 citations and made six arrests during one 4-hour period. One motorist was driving at 101 mph (163 km/h), while others were cited at 99, 94, and 91 mph (159, 151, and 146 km/h). In 2023, MDOT started a complete reconstruction of I-696 from I-275 in Farmington Hills to US 24 (Telegraph Road) in Southfield. The eastbound lanes will be reconstructed in 2023, and the westbound lanes will be reconstructed the following year.

## Exit list

## Related trunkline

Business Spur Interstate 696 (BS I-696) was the designation given to the Lodge Freeway in the Detroit area in 1962. This 17½-mile-long (28.2 km) freeway was renumbered as part of US 10 in 1970, when that highway designation was shifted off Woodward Avenue.

## See also
817,478
Homer Davenport
1,170,513,080
American political cartoonist and writer (1867–1912)
[ "1867 births", "1912 deaths", "19th-century American artists", "20th-century American artists", "American editorial cartoonists", "Arabian breeders and trainers", "Artists from Oregon", "People from Silverton, Oregon", "The Oregonian people" ]
Homer Calvin Davenport (1867 – May 2, 1912) was a political cartoonist and writer from the United States. He is known for drawings that satirized figures of the Gilded Age and Progressive Era, most notably Ohio Senator Mark Hanna. Although Davenport had no formal art training, he became one of the highest-paid political cartoonists in the world. Davenport also was one of the first major American breeders of Arabian horses and one of the founders of the Arabian Horse Club of America. A native Oregonian, Davenport developed interests in both art and horses as a young boy. He tried a variety of jobs before gaining employment as a cartoonist, initially working at several newspapers on the West Coast, including The San Francisco Examiner, purchased by William Randolph Hearst. His talent for drawing and interest in Arabian horses dovetailed in 1893 at the Chicago Daily Herald when he studied and drew the Arabian horses exhibited at the World's Columbian Exposition. When Hearst acquired the New York Morning Journal in 1895, money was no object in his attempt to establish the Journal as a leading New York newspaper, and Hearst moved Davenport east in 1895 to be part of what is regarded as one of the greatest newspaper staffs ever assembled. Working with columnist Alfred Henry Lewis, Davenport created many cartoons in opposition to the 1896 Republican presidential candidate, former Ohio governor William McKinley, and Hanna, his campaign manager. McKinley was elected and Hanna elevated to the Senate; Davenport continued to draw his sharp cartoons during the 1900 presidential race, though McKinley was again victorious. In 1904, Davenport was hired away from Hearst by the New York Evening Mail, a Republican paper, and there drew a favorable cartoon of President Theodore Roosevelt that boosted Roosevelt's election campaign that year. The President in turn proved helpful to Davenport in 1906 when the cartoonist required diplomatic permission to travel abroad in his quest to purchase pure desert-bred Arabian horses. In partnership with millionaire Peter Bradley, Davenport traveled extensively amongst the Anazeh people of Syria and went through a brotherhood ceremony with the Bedouin leader who guided his travels. The 27 horses Davenport purchased and brought to the United States had a profound and lasting impact on Arabian horse breeding. Davenport's later years were marked by fewer influential cartoons and a troubled personal life; he dedicated much of his time to his animal breeding pursuits, traveled widely, and gave lectures. He was a lifelong lover of animals and of country living; he not only raised horses, but also exotic poultry and other animals. He died in 1912 of pneumonia, which he contracted after going to the docks of New York City to watch and chronicle the arrival of survivors of the sinking of the RMS Titanic.

## Childhood and early career

Davenport was born in 1867 in the Waldo Hills, several miles south of Silverton, Oregon. His parents were Timothy Woodbridge and Florinda Willard (Geer) Davenport. The family had deep progressive roots; Davenport's grandfather, Benjamin, had been a doctor and abolitionist whose home in Ohio was a stop on the Underground Railroad. Davenport's parents, who had married in 1854, had previously lost two other children in infancy to diphtheria, but Homer and his older sister, Orla, lived to adulthood. Timothy Davenport trained in medicine, but became a surveyor and writer later dubbed "The Sage of Silverton".
He had been the Indian agent for the Umatilla Agency in 1862, surveyor of Marion County in 1864, and later in his life, Oregon Land Agent (1895–1899). He was one of the founders of the Republican Party in Oregon, served as an Oregon state representative from 1868 to 1872 and was elected a state senator in 1882. He ran unsuccessfully for the United States House of Representatives on the Independent Party ticket in 1874. Florinda Davenport was an admirer of the political cartoons of Thomas Nast that appeared in Harper's Weekly. While pregnant with Homer, she developed a belief, which she viewed as a prophecy, that her child would become as famous a cartoonist as Nast. She was also influenced by the essay "How To Born [sic] A Genius", by Russell Trall, and closely followed his recommendations for diet and "concentration" during her pregnancy. She died of smallpox in 1870, when Homer was three years old, and on her deathbed asked her husband to give Homer "every opportunity" to become a cartoonist. Young Davenport was given a box of paints as a Christmas gift. At this stage of his youth, as his father later stated, Homer also had "horse on the brain". Cooped up inside during the winter of 1870–1871, in part because the entire family was quarantined on account of the smallpox outbreak that had killed Florinda, Timothy told Homer stories of Arab people and their horses. Soon after, at the age of three years and nine months, the boy used his paints to produce an image he called "Arabian horses." He learned to ride on the family's pet horse, Old John. Following his mother's death, both of Davenport's grandmothers helped raise him. Timothy Davenport remarried in 1872, to Elizabeth "Nancy" Gilmour Wisner, and in 1873, the family moved to Silverton—the cartoonist later recounted that the move to the community, about 40 miles (64 km) south of Portland and with a population of 300 at the time, was so he "might live in the Latin Quarter of that village and inhale any artistic atmosphere that was going to waste". Homer began to study music, and was allowed to help Timothy clerk at the store the elder Davenport purchased when he first moved to Silverton. Timothy required Homer to milk the cows, but otherwise Homer was to "study faces and draw." He was well-liked by the villagers, but they considered him shiftless—they did not consider drawing to be real work. He exhibited an interest in animals, especially fast horses and fighting cocks. Davenport later wrote that his fascination with Arabian horses was reawakened in his adolescent years with his admiration of a picture of an Arabian-type horse found on an empty can of horse liniment. He carefully cleaned the can and kept it as his "only piece of artistic furniture" for many years until forced to leave it behind when he moved to San Francisco. He also played in the community band in his formative years, and with that group young Davenport once traveled as far as Portland. Davenport's initial jobs were not successful. His first position outside Silverton began when a small circus came to town, and Davenport, in his late teenage years, left with it. He was assigned as a clown and to care for the circus's small herd of horses, which he also sketched. He became disenchanted with the circus when he was told to brush the elephant's entire body with linseed oil, a difficult task. He left the tour and tried to succeed as a jockey, despite being tall. 
Other early positions included clerking in a store, working as a railway fireman, and being a stoker on the Multnomah. In 1889, Davenport attended the Mark Hopkins School of Art in San Francisco, California, where he was expelled after a month because of his cartooning; he returned to the school for a brief time in 1892. He worked for free at the Portland Evening Telegram, which published several of his drawings, but not for pay. In 1890, he attended Armstrong Business College, but dropped out after a few months. Although his work took him from Silverton, for the remainder of his life, Davenport was often melancholy for his native Oregon, and in writing to relatives there, he repeatedly told them not to send him anything that would remind him of Silverton, because he would be plunged into despair. ## Newspaper career ### West Coast years Davenport's first paid job in journalism, in 1889, was drawing for the Portland newspaper, The Oregonian, where he showed a talent for sketching events from memory. He was fired in 1890, it was said, for poorly drawing a stove for an advertisement—he could not draw buildings and appliances well. By another story, he was let go when there was only work for one in the paper's engraving department, and he was junior man. He then worked for the Portland Sunday Mercury, traveling to New Orleans for a prizefight in January 1891 between Jack Nonpareil Dempsey of Portland and Bob Fitzsimmons. When he returned, he earned money through selling his drawings as postcards. Davenport's talent came to the attention of C. W. Smith, general manager of the Associated Press, and also Timothy Davenport's first cousin. Smith got the young cartoonist a free pass on the railroad to San Francisco in 1891 and wrote a letter to the business manager of The San Francisco Examiner, essentially a demand that Davenport be hired. He was; the Examiner's business manager had been greatly impressed by doodles that Davenport drew while waiting. At the Examiner, Davenport was not a cartoonist, but a newspaper artist who illustrated articles—the technology to directly reproduce photographs in newspapers was still a few years away. After a year at the Examiner, he was fired; several stories state that this occurred after he asked for a raise from his meager salary of \$10 per week. His work, including the New Orleans postcards, had attracted admirers, who, in addition to Smith, helped him to get a job with the San Francisco Chronicle in 1892. While there, he attracted reader attention for his ability to draw animals. He resigned in April 1893 because he wanted to go to Chicago and see the World's Columbian Exposition, and his contacts secured him a position with the Chicago Herald. At the Herald, one of his jobs was to illustrate the horse races at Washington Park. He was dismissed from the Herald, and in one account ascribed his dismissal to going every day to visit and sketch the Arabian horses on exhibit at the World's Fair. However, more likely, the poor economy and the end of the fair caused the Herald to lay him off, and Davenport suggested as much in a 1905 interview. While at the Daily Herald, he married Daisy Moor, who traveled from her home in San Francisco to Chicago in order to marry him. Davenport returned to San Francisco and regained his position at the Chronicle. This time, he was allowed to caricature California political figures. By then, William Randolph Hearst owned the Examiner. 
In his early days as a newspaper tycoon, Hearst followed Davenport's cartoons in the Chronicle and, when the cartoonist became well known for his satires of figures in the 1894 California gubernatorial campaign, hired him, more than doubling his salary. When a famous horse died and the Examiner lacked an image, Davenport, who had seen the animal the previous year, drew it from memory. Impressed, Hearst purchased the original drawing. Davenport took his responsibilities as political cartoonist seriously, traveling to Sacramento, the state capital, to observe the legislative process and its participants. ### Transfer to New York Journal Hearst had been successful in California with the Examiner, and sought to expand operations to the nation's largest city, New York. Several newspapers were available for sale, including The New York Times, but Hearst then lacked the resources to purchase them. In September 1895, Cincinnati publisher John R. McLean, whose New York Morning Journal had lost most of its circulation and its advertisers over the previous year, offered the paper at a price within Hearst's means, and Hearst bought it for \$180,000. Hearst changed the name to the New York Journal and began to assemble what Hearst biographer Ben Procter deemed one of the greatest staffs in newspaper history. Under editor-in-chief Willis J. Abbot, the well-paid staff included foreign correspondent Richard Harding Davis, columnist Alfred Henry Lewis, and humorist Bill Nye. Contributors included Mark Twain and Stephen Crane. Davenport was among a number of talented staff on the Examiner whom Hearst transferred to New York and employed on the Journal at a high salary. In 1896, a presidential election year, Davenport was sent to Washington to meet and study some of the Republican Party's potential candidates, such as Speaker of the House Thomas B. Reed. Hearst's Journal was a Democratic paper, and Davenport would be expected to harshly caricature the Republican presidential candidate. The Republicans were anxious to take over the White House from Democrat Grover Cleveland; they were widely expected to do so, as the Democrats were blamed for the economic Panic of 1893, which had brought depression to the nation for the past three years. None of the potential Democratic candidates seemed particularly formidable, and the Republican nominee was expected to win in a landslide. Reporters and illustrators on the Journal often worked in pairs. Davenport was teamed with Lewis and the two soon forged a solid relationship. In early 1896, Lewis went to Ohio to investigate the leading candidate for the Republican presidential nomination, that state's former governor, William McKinley. To interview the candidate, Lewis was required to undergo an interview himself, with McKinley's political manager, Cleveland industrialist Mark Hanna. Hanna had set aside his business career to manage McKinley's campaign, and was paying all expenses for a political machine which helped make McKinley the frontrunner in the Republican race. Lewis got his interview with McKinley, then remained in Ohio, investigating Hanna. In 1893, Governor McKinley had been called upon to pay the obligations of a friend for whom he had co-signed loans; Hanna and other McKinley supporters had bought up or paid these debts. Lewis viewed Hanna as controlling McKinley, able to ruin the candidate by calling in the purchased notes.
Becoming increasingly outraged by what he deemed Hanna's purchase of the Republican nomination, and so likely the presidency, Lewis began to popularize this view in the pages of the Journal. The first Davenport cartoon depicting Hanna appeared soon after. ### 1896 election and Mark Hanna McKinley, with the exception of his 1893 financial crisis, had avoided scandal and carefully guarded his image, making him difficult to attack. Hanna proved an easier target. Although Davenport had depicted Hanna in his cartoons before the Republican convention in June, these efforts were uninspired. This changed once Davenport got a look at his subject while attending the 1896 Republican National Convention in St. Louis. After three days spent closely observing Hanna managing the convention to secure McKinley's nomination and passage of a platform supporting the gold standard, Davenport was impressed with Hanna's dynamic behavior. Once Davenport became convinced that he could successfully lampoon Hanna, his cartoons became more effective. Hanna was a tall man; Davenport exaggerated this trait, in part by shrinking everybody else. He also increased Hanna's already substantial girth. Hanna's short sideburns were lengthened and made rougher—Davenport described them as "like an unplaned cedar board". Davenport borrowed from the animal kingdom for his creation, drawing Hanna's ears so they stuck out like a monkey's. The cartoonist described Hanna's eyes as like a parrot's, leaving no movement unseen, or as those of a circus elephant—scanning the street for peanuts. The resultant caricature of Hanna was given props such as moneybags and laborer's skulls to rest his feet upon, as well as cufflinks engraved with the dollar sign to wear with his plaid businessman's suits. He was often accompanied by William McKinley, usually drawn as a shrunken though dignified figure dominated by the giant Hanna. Even so, Davenport felt the figure lacked something until he took the dollar signs from the cufflinks and placed one inside every check of the cartoon Hanna's suit. Davenport likely acted at the suggestion of his cartoonist colleague at the Journal, M. de Lipman, who had depicted McKinley as Buddha in a loincloth with Hanna as his attendant, robes ablaze with an array of dollar signs. According to Hearst biographer Kenneth Whyte, "whatever its origins, Davenport's 'plutocratic plaid', as it became known, was an instant hit." In July 1896, the Democrats nominated former Nebraska congressman William Jennings Bryan for president. Bryan had electrified the Democratic National Convention with his Cross of Gold speech. Bryan was an eloquent supporter of "free silver", a policy which would inflate the currency by allowing silver bullion to be submitted by the public and converted into coins even though the intrinsic value of a silver dollar was about half its stated value. Bryan's candidacy divided the Democratic Party and its supporters, and caused many normally Democratic papers to abandon him. Hearst called a meeting of his senior staff to decide the Journal's policy. Though few favored the Democrat, Hearst decided, "Unlimber the guns; we are going to fight for Bryan." Davenport's cartoons had an effect on Hanna. West Virginia Senator Nathan B. Scott remembered being with Hanna as he viewed his caricature wearing a suit covered with dollar signs, trampling women and children underfoot, and hearing the Ohioan state, "that hurts".
Hanna could not make public appearances without having to field questions about the cartoons. Nevertheless, publisher J. B. Morrow, a friend of both McKinley and his campaign manager, stated that Hanna "took his course regardless of local criticism". McKinley made no attempt to deflect criticism from Hanna and in fact kept a file of Davenport cartoons that particularly amused him. Despite Hanna's discomfiture, both men were content to have Hanna attacked if it meant that McKinley would not be. Most of the cartoons Davenport drew during the 1896 campaign were simple in execution and somber in mood. One, for example, depicts Hanna walking down Wall Street, bags of money in each hand and a grin on his face. Another shows only Hanna's hand and wrist—and McKinley dangling from his fob chain. One that is intended to be funny depicts McKinley as a small boy accompanied by Hanna as nursemaid; McKinley tugs at Hanna's skirts, wanting to go into a shop where the labor vote is for sale. Another shows Hanna wearing a Napoleon hat (McKinley was said to resemble the late emperor), raising a mask of McKinley's face to his own. Davenport's cartoons ran a few times per week in the Journal, generally on an inside page. They were, however, widely reprinted—including in Bryan's campaign materials—and according to Whyte, "nothing in any paper came close to matching their impact [on the presidential race]". Hanna biographer William T. Horner noted, "Davenport's image of Hanna in a suit covered with dollar signs remains an iconic view of the man to this day". Despite great public excitement after his nomination, Bryan was unable to overcome his disadvantages in financing, organization, lack of party unity, and public mistrust of the Democrats, and he was defeated in the November election. A few days after the election, Davenport went to Republican headquarters in New York to be formally introduced to the man he had so sharply characterized. As witnesses such as Vice President-elect Garret Hobart came in to see the good-humored proceedings, Hanna told Davenport, "I admire your genius and execution, but damn your conception." With the 1896 campaign over, a reporter asked Davenport in February 1897 who would replace Hanna as a special subject of his cartoons, and Davenport replied, "Hanna is by no means out of the way. He will probably continue a good subject for some time." Hanna, having declined the position of Postmaster General, secured appointment to the Senate when McKinley made Ohio's aging senior senator, John Sherman, his Secretary of State. Until 1913, state legislatures, not the people, elected senators, and so Hanna had to seek election to a full term when the Ohio General Assembly met in January 1898. Hanna campaigned in the 1897 legislative election, and was elected to the Senate in his own right the following January, in a very close vote. Davenport drew cartoons against Hanna in the senatorial race. Nevertheless, when he attended the legislature's meeting in Columbus, he wore a Hanna button, and seemed happy after Hanna's triumph. When asked why, he replied, "that insures me six more years at him, and he's a good subject". ### 1897 to 1901 The 1896 campaign made Davenport famous and well paid, earning \$12,000 per year, the highest compensation of any cartoonist of his time. Hearst, who had lost a fortune but who had established the Journal as one of New York's most influential newspapers, also gave him a \$3,000 bonus with which to take a trip to Europe with Daisy. 
In London, Davenport interviewed and drew the elderly former prime minister, William Gladstone. In Venice, he came upon a large statue of Samson. He was impressed by the large muscles of the work, and immediately conceived of it as representing America's powerful corporate trusts, the status of which was then a major political issue. A large, powerful, grass-skirted figure representing the trusts would be seen with McKinley and Hanna in Davenport's cartoons during the President's re-election bid in 1900. In 1897, Davenport was sent to Carson City, Nevada, to cover the heavyweight championship fight between boxers Bob Fitzsimmons and Jim Corbett, a match heavily promoted by the Journal. Fitzsimmons won. Davenport traveled to Nevada by way of Silverton, visiting there for the first time since becoming famous. The following year, Davenport went to Asbury Park, New Jersey, to watch Corbett in training. Davenport both interviewed him and made several drawings which the Journal published, including one of cartoonist and boxer sparring. Davenport's drawings left few public figures unscathed; he even caricatured himself and his boss, Hearst. Ultimately, Davenport's work became so well recognized for skewering political figures he considered corrupt, that in 1897 his opponents attempted to pass a law banning political cartoons in New York. The bill, introduced in the state legislature with the prodding of U.S. Senator Thomas C. Platt, (R-NY), did not pass, but the effort inspired Davenport to create one of his most famous works: "No Honest Man Need Fear Cartoons." In 1897 and into 1898, the Hearst papers pounded a drumbeat for war with Spain. Davenport drew cartoons depicting President McKinley as cowardly and unwilling to go to war because it might harm Wall Street. Once the Spanish–American War was under way, one of the American war heroes was Admiral George Dewey, victor at the Battle of Manila Bay, who was welcomed home in 1899 with celebrations and the gift of a house. The admiral promptly deeded the residence over to his newlywed wife, a Catholic, turning public opinion (especially among Protestants) against him. However, resentment eased after Davenport depicted Dewey on his bridge during the battle, with the caption, "Lest we forget". In 1899, Davenport returned to Europe, covering the Dreyfus case in Rennes. In 1900, the presidential election again featured McKinley defeating Bryan, and again featured Davenport, reprising his depictions of Hanna, this time aided by the giant figure of the trusts. Also a subject of Hearst's cartoonists was McKinley's running mate, war hero and New York Governor Theodore Roosevelt, presented as a child with a Rough Rider's outfit and little self-control. ### 1901 to 1912 The Journal was renamed the American in 1901. Davenport continued there until 1904, eventually earning \$25,000 per year, a very large salary at the time. Following Hearst's policy, he relentlessly attacked President Roosevelt, who had succeeded the assassinated McKinley in September 1901. Davenport both cartooned and wrote for the American; one column mockingly alleged that the new President had hidden all portraits of previous presidents in the White House basement, with the visitor left to view a large portrait of Roosevelt as well-armed Rough Rider. Nevertheless, the Republicans wooed Davenport, seeking to deprive the Democrats of one of their weapons, and eventually President and cartoonist met. 
In 1904, Davenport left the American for the New York Evening Mail, a Republican paper, to be paid \$25,000 for the final six months of 1904 (most likely paid by the party's backers) and an undisclosed salary after that. The 1904 presidential campaign featured Roosevelt, seeking a full term in his own right, against the Democratic candidate, Judge Alton B. Parker of New York. Again Davenport affected the campaign, this time with a cartoon of Uncle Sam with his hand on Roosevelt's shoulder, captioned "He's good enough for me". The Republicans spent \$200,000 reproducing it; the image was used as cover art on sheet music for marches written in support of Roosevelt. Although Davenport continued at the Evening Mail after Roosevelt was elected, the quality of his work declined; fewer and fewer of his images were selected for inclusion in Albert Shaw's Review of Reviews. He also began to devote large periods to other activities; in 1905, he spent months in his home state of Oregon, first visiting Silverton and then showing, at Portland's Lewis and Clark Exposition, the animals he bred. In 1902, James Pond, a lecture circuit manager, hired Davenport as a speaker. Beginning in 1905, Davenport traveled on the Chautauqua lecture circuit, giving engaging talks, during which he sketched on stage. He sometimes appeared on the same program as Bryan, though on different days, and like him drew thousands of listeners. In 1906, he traveled to the Middle East to purchase Arabian horses from their native land, and then wrote a book in 1908 about his experiences. Davenport authored an autobiographical book, The Diary of a Country Boy, in 1910, and collections of his cartoons, including The Dollar or the Man and Cartoons by Davenport. Apparently as a joke, Davenport once included The Belle (or sometimes Bell) of Silverton and Other Oregon Stories in a list of his publications, and reference books for years listed it among his works. A book of that name did not exist, however. Some speculate that this was an early working title for The Country Boy. Davenport's marriage had failed by 1909, and he suffered a breakdown that year, related to his ongoing divorce case. As he recovered, he announced a forthcoming series to be available for license to newspapers, "Men I have sketched". This project was aborted when, in 1911, Davenport was invited by Hearst to return to the American. He was on assignment for the American on April 19, 1912, when he met the RMS Carpathia at the docks in New York to draw the survivors of the RMS Titanic. He drew three cartoons, but upon leaving his office was in a "highly nervous state". That evening he fell ill at the apartment of a friend, Mrs. William Cochran, a medium and spiritualist. Diagnosed with pneumonia, he died in her home two weeks later, on May 2, 1912. Hearst paid for eight doctors to treat Davenport, and later for an elaborate funeral—the publisher had Davenport's body returned to his beloved Silverton for burial. His funeral was a freethought service conducted by a spiritualist, Jean Morris Ellis. Addison Bennett of The Oregonian wrote, "Yes, Homer has come home for the last time, home to wander again never". ## Arabian horse breeder In addition to his cartooning, Davenport is remembered for playing a key role in bringing some of the earliest desert-bred or asil Arabian horses to America. A longtime admirer of horses, Davenport stated in 1905, "I have dreamed of Arabian horses all my life."
He had been captivated by the beauty of the Arabians brought to the Chicago Columbian Exposition in 1893. Upon learning that these horses had remained in America and had been sold at auction, he sought them out, finding most of the surviving animals in 1898 in the hands of millionaire fertilizer magnate Peter Bradley of Hingham, Massachusetts. Davenport bought some Arabian horses outright between 1898 and 1905, paying \$8,500 for one stallion, but he later partnered with Bradley in the horse business. Among his purchases, he managed to gather all but one of the surviving horses that had been a part of the Chicago Exhibition. ### Desert journey In 1906, Davenport, with Bradley's financial backing, used his political connections, particularly those with President Theodore Roosevelt, to obtain the diplomatic permissions required to travel into the lands controlled by the Ottoman Empire. Roosevelt himself was interested in breeding quality cavalry horses, had tried but failed to get Congress to fund a government cavalry stud farm, and considered Arabian blood useful for army horses. Davenport originally intended to travel alone, but was soon joined by two young associates anxious for an adventure in the Middle East: C. A. "Arthur" Moore Jr., and John H. "Jack" Thompson Jr. He traveled throughout what today is Syria and Lebanon, and successfully brought 27 horses to America. To travel to the Middle East and purchase horses, Davenport needed to obtain diplomatic permission from the government of the Ottoman Empire, and specifically from Sultan Abdul Hamid II. In December 1905, Davenport approached President Roosevelt for help, and in January 1906, Roosevelt provided him a letter of support that he was able to present to the Turkish Ambassador to the United States, Chikeb Bey, who contacted the Sultan. To the surprise of both Davenport and the Ambassador, the permit, called an Iradé, was granted, allowing the export of "six or eight" horses. Davenport and his traveling companions left the United States on July 5, 1906, traveling to France by ship and from there to Constantinople by train. Upon arrival, the Iradé was authenticated, and clarified that Davenport would be allowed to export both mares and stallions. Davenport's accomplishment was notable for several reasons. It was the first time Arabian horses officially had been allowed to be exported from the Ottoman Empire in 35 years. It was also notable that Davenport not only was able to purchase stallions, which were often available for sale to outsiders, but also mares, which were treasured by the Bedouin; the best war mares generally were not for sale at any price. Before Davenport left Constantinople to travel to Aleppo and then into the desert, he visited the royal stables, and also took advantage of an opportunity to view the Sultan during a public appearance. He displayed his artistic ability and talent for detail by sketching several portraits of Abdul Hamid II from memory about a half-hour after observing him, as Davenport believed the ruler unwilling to have his image drawn. Davenport's personal impression of the Sultan was sympathetic, viewing him as a frail, elderly man burdened by the weight of his office but kind and fatherly to his children. Davenport compared his appearance as a melding of the late congressman from Maine, Nelson Dingley, with merchant and philanthropist Nathan Straus, commenting of the Sultan, "I thought ... 
that no matter what crimes had been charged to him, his expressionless soldiers, his army and its leaders were possibly more to blame than he." Believing that he needed to keep his sketches a secret, he carried the sketch book in a hidden pocket throughout his journey, and at customs smuggled it onto the steamer home hidden inside a bale of hay. One reason for Davenport's success in obtaining high-quality, pure-blooded Arabian horses was his (possibly accidental) decision to breach protocol and visit Akmet Haffez, a Bedouin who served as a liaison between the Ottoman government and the tribal people of the Anazeh, before calling upon the Governor of Syria, Nazim Pasha. Haffez considered the timing of Davenport's visit a great honor, and gave Davenport his finest mare, a war mare named \*Wadduda. Not to be outdone, the Pasha gave Davenport the stallion \*Haleb, who was a well-respected sire throughout the region, known as the "Pride of the Desert." Haleb had been given to the Pasha as a reward for keeping the camel tax low. Haffez then personally escorted Davenport into the desert, and at one point in the journey the two men took an oath of brotherhood. Haffez helped arrange for the best-quality horses to be presented, negotiated fair prices, and verified that their pedigrees were asil. Davenport chronicled this journey in his 1908 book, My Quest of the Arabian Horse. The impact of the 17 stallions and 10 mares purchased by Davenport was of major importance to the Arabian horse breed in America. While what are now called "Davenport" bloodlines can be found in thousands of Arabian horse pedigrees, there are also some preservation breeders whose horses have bloodlines that are entirely descended from the horses he imported. Davenport's efforts, as well as those of his successors, allowed the Arabian horse in America to be bred with authentic Arabian type and pure bloodlines. ### Arabians in America Upon his return to America, his newly imported horses became part of his Davenport Desert Arabian Stud in Morris Plains, New Jersey. By 1908, however, the Davenport Desert Arabian Stud was listed in the Arabian Stud Book as located in Hingham, Massachusetts, and he remained closely affiliated with Bradley's Hingham Stock Farm, which became the sole owner of the horses after Davenport's death in 1912. In 1908, Davenport became one of the five incorporators of the Arabian Horse Club of America (now the Arabian Horse Association). The United States Department of Agriculture (USDA) recognized the organization as the official registry for Arabian horses in 1909. Prior to that time, the Thoroughbred stud books of both the United Kingdom and the United States also handled the registration of Arabian horses. The reason a new organization, separate from the American Jockey Club, was needed to register Arabians came about largely because of Davenport. He had meticulously sought horses with pure bloodlines and known breeding strains with the expert assistance of Haffez, but once out of the desert, he was not aware that he also needed to obtain written affidavits and other paperwork to document their bloodlines. Additionally, because his Arabians were not shipped via Britain, they were not certified by the United Kingdom's Jockey Club before arriving in America, and without that authentication, the American Jockey Club refused to register his imported horses. Another factor may have influenced the organization's stance: in a cartoon, Davenport had satirized Jockey Club President August Belmont. 
Haleb in particular became widely admired by American breeders, and in addition to siring Arabians, he was also crossed with Morgan and Standardbred mares. In 1907, Davenport entered the stallion in the Justin Morgan Cup, a horse show competition which Haleb won, defeating 19 Morgan horses. In 1909, Haleb died under mysterious circumstances. Davenport believed the horse had been poisoned. He had the stallion's skull and partial skeleton prepared and sent to the Smithsonian Institution, where it became part of the museum's research collection. Davenport also purchased horses from the Crabbet Park Stud in England, notably the stallion \*Abu Zeyd, considered the best son of his famous sire, Mesaoud. In 1911, Davenport described \*Abu Zeyd as "the grandest specimen of the Arabian horse I have ever seen and I will give a \$100 cup to the owner of any horse that can beat him." Upon Davenport's death, a significant number of his horses were obtained by W. R. Brown and his brother Herbert, and they became the foundation bloodstock for Brown's Maynesboro Stud of Berlin, New Hampshire. Included in the purchase was \*Abu Zeyd. The Maynesboro stud also acquired 10 mares from the Davenport estate. ## Personal life and other interests Davenport married Daisy Moor of San Francisco on September 7, 1893; she had traveled to Chicago while the artist was working there. They had three children: Homer Clyde, born 1896; Mildred, born 1899; and Gloria Ward, born 1904. Little is known of the Davenports' home life while they lived in a New York apartment between 1895 and 1901, except that the furnishings were luxurious. By 1901, Davenport had bought both a house in East Orange, New Jersey, and a farm in Morris Plains, New Jersey. He kept many of the animals he collected and bred, including pheasants and horses, at East Orange, but decided to move both animals and himself to Morris Plains, and take the rail line dubbed the "Millionaire's Special" to work in New York. He moved away from East Orange in 1906, though he still owned the house as late as 1909. In Morris Plains, the Davenports hosted large parties attended by celebrities, artists, writers, and other influential people of the day, including Ambrose Bierce, Lillian Russell, Thomas Edison, William Jennings Bryan, Buffalo Bill Cody, Frederic Remington, and the Florodora girls. Instead of using a regular guestbook, Davenport would have his guests sign the clapboard siding of his home to commemorate their visits. Davenport bred various animals. "I was born with a love of horses and for all animals that do not hurt anything ... I feel happiest when I am with these birds and animals," he said. "I am a part of them without anything to explain." His understanding of the dynamics of purebred animal breeding was that deviation from the original, useful type led to degeneration of a breed. While best known as a horse breeder, he also raised pheasants—including exotic varieties from the Himalayas—and other breeds of birds. By 1905 he had started a pheasant farm on his property in Morris Plains, gathering the birds he had kept on the west coast, and buying others from overseas using the profits from his first published book of cartoons. As of 1908 he owned the largest private collection of pheasants and wild waterfowl in America. At various times, his menagerie also contained angora goats, Persian fat-tailed sheep, Sicilian donkeys, and Chinese ducks.
Three times he built up collections of cockfighting roosters, once selling them to finance his start when he first lived and worked in San Francisco. In addition to his interest in horses and birds, Davenport was also fond of dogs, notably a bull terrier named Duff, obtained as a puppy. Davenport taught Duff to do tricks and profited by loaning the dog to perform in vaudeville acts. In 1908, Davenport involved himself in a controversy over the breeding of show-quality dogs, stating that he thought breeding solely for show purposes was creating an animal that was of inferior quality. He accused certain popular breeders of purebred collies of producing animals that had less intelligence, were of poor temperament, and lacked utility. He pointedly named famous breeders who he felt were making particularly poor decisions. The Davenport marriage did not last; Daisy did not share many of her husband's interests and intensely disliked Silverton. In 1909 they separated, and the parting was acrimonious. Homer initially returned to New York to live, but soon suffered a breakdown; he spent months recuperating in a resort hotel in San Diego, California, at the expense of his friend, sporting goods mogul Albert Spalding. Though he deeded his two properties over to Daisy, she sued for alimony, and had Homer held in contempt by a New York court for failure to pay support when he was not working. He returned to New York and obtained a new stock farm at Holmdel, New Jersey, in 1910. Though his father died in 1911, he began to pull his life together and returned to cartooning. He met a new companion, referred to in his papers only as "Zadah", whom he intended to marry once his divorce case was concluded. However, he died before his scheduled August 1912 trial date. ## Legacy Davenport's cartoons have had a lasting impact on the public image of Mark Hanna, both on how he was perceived at the time and on how he is remembered today. Early Hanna biographer Herbert Croly, writing in 1912, the year Davenport died, considered that his subject had been portrayed as a "monster" by the "powerful but brutal caricatures of Homer Davenport". According to Horner, the portrayal of Hanna that has stood the test of time is one that depicts him "side by side with a gigantic figure representing the trusts, and a tiny, childlike, William McKinley. He will forever be known as 'Dollar Mark', thanks to Homer Davenport and many other columnists who drew him as a malevolent presence." McKinley biographer Margaret Leech regretted Davenport's effect on the late president's image: "the representation of McKinley as pitiable and victimized was a poor service to his reputation. The graphic impression of his spineless subservience to Hanna would long outlive the lies of [Journal columnist] Alfred Henry Lewis." Davenport's obituary opined that he "did for San Francisco what Thomas Nast did for New York." According to Davenport's biographers, Leland Huot and Alfred Powers, his Arabian horses "were to perpetuate his fame on and on into future years more than his political cartoons, so that in ten thousand stables today he is known as having been a great, great man". Today, the term "CMK", meaning "Crabbet/Maynesboro/Kellogg", is a label for specific lines of "Domestic" or "American-bred" Arabian horses. It describes the descendants of horses imported to America from the desert or from Crabbet Park Stud in the late 1800s and early 1900s, then bred on in the US by the Hamidie Society, Randolph Huntington, Spencer Borden, Davenport, W.R.
Brown's Maynesboro Stud, W. K. Kellogg, Hearst's San Simeon Stud, and "General" J. M. Dickinson's Traveler's Rest Stud. Silverton, Oregon, pays tribute to Davenport during its Homer Davenport Community Festival, held annually in August. The festival began in 1980. ## Books In addition to his newspaper cartoons and postcards, Davenport wrote or provided illustrations for several books; one was republished by The Arabian Horse Club of America (Best Publishing, Boulder, Colorado, 1949; ASIN B0007EYORE). ## See also - Theodore Thurston Geer – family history and legacy
57,021,483
Distributed-element circuit
1,060,746,637
Electrical circuits composed of lengths of transmission lines or other distributed components
[ "Distributed element circuits", "Microwave technology", "Radio electronics" ]
Distributed-element circuits are electrical circuits composed of lengths of transmission lines or other distributed components. These circuits perform the same functions as conventional circuits composed of passive components, such as capacitors, inductors, and transformers. They are used mostly at microwave frequencies, where conventional components are difficult (or impossible) to implement. Conventional circuits consist of individual components manufactured separately then connected together with a conducting medium. Distributed-element circuits are built by forming the medium itself into specific patterns. A major advantage of distributed-element circuits is that they can be produced cheaply as a printed circuit board for consumer products, such as satellite television. They are also made in coaxial and waveguide formats for applications such as radar, satellite communication, and microwave links. A phenomenon commonly used in distributed-element circuits is that a length of transmission line can be made to behave as a resonator. Distributed-element components which do this include stubs, coupled lines, and cascaded lines. Circuits built from these components include filters, power dividers, directional couplers, and circulators. Distributed-element circuits were studied during the 1920s and 1930s but did not become important until World War II, when they were used in radar. After the war their use was limited to military, space, and broadcasting infrastructure, but improvements in materials science in the field soon led to broader applications. They can now be found in domestic products such as satellite dishes and mobile phones. ## Circuit modelling Distributed-element circuits are designed with the distributed-element model, an alternative to the lumped-element model in which the passive electrical elements of electrical resistance, capacitance and inductance are assumed to be "lumped" at one point in space in a resistor, capacitor or inductor, respectively. The distributed-element model is used when this assumption no longer holds, and these properties are considered to be distributed in space. The assumption breaks down when there is significant time for electromagnetic waves to travel from one terminal of a component to the other; "significant", in this context, implies enough time for a noticeable phase change. The amount of phase change is dependent on the wave's frequency (and inversely dependent on wavelength). A common rule of thumb amongst engineers is to change from the lumped to the distributed model when distances involved are more than one-tenth of a wavelength (a 36° phase change). The lumped model completely fails at one-quarter wavelength (a 90° phase change), with not only the value, but the nature of the component not being as predicted. Due to this dependence on wavelength, the distributed-element model is used mostly at higher frequencies; at low frequencies, distributed-element components are too bulky. Distributed designs are feasible above 300 MHz, and are the technology of choice at microwave frequencies above 1 GHz. There is no clear-cut demarcation in the frequency at which these models should be used. Although the changeover is usually somewhere in the 100-to-500 MHz range, the technological scale is also significant; miniaturised circuits can use the lumped model at a higher frequency. Printed circuit boards (PCBs) using through-hole technology are larger than equivalent designs using surface-mount technology. 
Hybrid integrated circuits are smaller than PCB technologies, and monolithic integrated circuits are smaller than both. Integrated circuits can use lumped designs at higher frequencies than printed circuits, and this is done in some radio frequency integrated circuits. This choice is particularly significant for hand-held devices, because lumped-element designs generally result in a smaller product. ### Construction with transmission lines The overwhelming majority of distributed-element circuits are composed of lengths of transmission line, a particularly simple form to model. The cross-sectional dimensions of the line are unvarying along its length, and are small compared to the signal wavelength; thus, only distribution along the length of the line need be considered. Such an element of a distributed circuit is entirely characterised by its length and characteristic impedance. A further simplification occurs in commensurate line circuits, where all the elements are the same length. With commensurate circuits, a lumped circuit design prototype consisting of capacitors and inductors can be directly converted into a distributed circuit with a one-to-one correspondence between the elements of each circuit. Commensurate line circuits are important because a design theory for producing them exists; no general theory exists for circuits consisting of arbitrary lengths of transmission line (or any arbitrary shapes). Although an arbitrary shape can be analysed with Maxwell's equations to determine its behaviour, finding useful structures is a matter of trial and error or guesswork. An important difference between distributed-element circuits and lumped-element circuits is that the frequency response of a distributed circuit periodically repeats as shown in the Chebyshev filter example; the equivalent lumped circuit does not. This is a result of the transfer function of lumped forms being a rational function of complex frequency; distributed forms are an irrational function. Another difference is that cascade-connected lengths of line introduce a fixed delay at all frequencies (assuming an ideal line). There is no equivalent in lumped circuits for a fixed delay, although an approximation could be constructed for a limited frequency range. ## Advantages and disadvantages Distributed-element circuits are cheap and easy to manufacture in some formats, but take up more space than lumped-element circuits. This is problematic in mobile devices (especially hand-held ones), where space is at a premium. If the operating frequencies are not too high, the designer may miniaturise components rather than switching to distributed elements. However, parasitic elements and resistive losses in lumped components are greater with increasing frequency as a proportion of the nominal value of the lumped-element impedance. In some cases, designers may choose a distributed-element design (even if lumped components are available at that frequency) to benefit from improved quality. Distributed-element designs tend to have greater power-handling capability; with a lumped component, all the energy passed by a circuit is concentrated in a small volume. ## Media ### Paired conductors Several types of transmission line exist, and any of them can be used to construct distributed-element circuits. The oldest (and still most widely used) is a pair of conductors; its most common form is twisted pair, used for telephone lines and Internet connections. 
It is not often used for distributed-element circuits because the frequencies used are lower than the point where distributed-element designs become advantageous. However, designers frequently begin with a lumped-element design and convert it to an open-wire distributed-element design. Open wire is a pair of parallel uninsulated conductors used, for instance, for telephone lines on telegraph poles. The designer does not usually intend to implement the circuit in this form; it is an intermediate step in the design process. Distributed-element designs with conductor pairs are limited to a few specialised uses, such as Lecher lines and the twin-lead used for antenna feed lines. ### Coaxial Coaxial line, a centre conductor surrounded by an insulated shielding conductor, is widely used for interconnecting units of microwave equipment and for longer-distance transmissions. Although coaxial distributed-element devices were commonly manufactured during the second half of the 20th century, they have been replaced in many applications by planar forms due to cost and size considerations. Air-dielectric coaxial line is used for low-loss and high-power applications. Distributed-element circuits in other media still commonly transition to coaxial connectors at the circuit ports for interconnection purposes. ### Planar The majority of modern distributed-element circuits use planar transmission lines, especially those in mass-produced consumer items. There are several forms of planar line, but the kind known as microstrip is the most common. It can be manufactured by the same process as printed circuit boards and hence is cheap to make. It also lends itself to integration with lumped circuits on the same board. Other forms of printed planar lines include stripline, finline and many variations. Planar lines can also be used in monolithic microwave integrated circuits, where they are integral to the device chip. ### Waveguide Many distributed-element designs can be directly implemented in waveguide. However, there is an additional complication with waveguides in that multiple modes are possible. These sometimes exist simultaneously, and this situation has no analogy in conducting lines. Waveguides have the advantages of lower loss and higher quality resonators over conducting lines, but their relative expense and bulk means that microstrip is often preferred. Waveguide mostly finds uses in high-end products, such as high-power military radars and the upper microwave bands (where planar formats are too lossy). Waveguide becomes bulkier with lower frequency, which militates against its use on the lower bands. ### Mechanical In a few specialist applications, such as the mechanical filters in high-end radio transmitters (marine, military, amateur radio), electronic circuits can be implemented as mechanical components; this is done largely because of the high quality of the mechanical resonators. They are used in the radio frequency band (below microwave frequencies), where waveguides might otherwise be used. Mechanical circuits can also be implemented, in whole or in part, as distributed-element circuits. The frequency at which the transition to distributed-element design becomes feasible (or necessary) is much lower with mechanical circuits. This is because the speed at which signals travel through mechanical media is much lower than the speed of electrical signals. ## Circuit components There are several structures that are repeatedly used in distributed-element circuits. 
Some of the common ones are described below. ### Stub A stub is a short length of line that branches to the side of a main line. The end of the stub is often left open- or short-circuited, but may also be terminated with a lumped component. A stub can be used on its own (for instance, for impedance matching), or several of them can be used together in a more complex circuit such as a filter. A stub can be designed as the equivalent of a lumped capacitor, inductor, or resonator. Departures from constructing with uniform transmission lines in distributed-element circuits are rare. One such departure that is widely used is the radial stub, which is shaped like a sector of a circle. They are often used in pairs, one on either side of the main transmission line. Such pairs are called butterfly or bowtie stubs. ### Coupled lines Coupled lines are two transmission lines between which there is some electromagnetic coupling. The coupling can be direct or indirect. In indirect coupling, the two lines are run closely together for a distance with no screening between them. The strength of the coupling depends on the distance between the lines and the cross-section presented to the other line. In direct coupling, branch lines directly connect the two main lines together at intervals. Coupled lines are a common method of constructing power dividers and directional couplers. Another property of coupled lines is that they act as a pair of coupled resonators. This property is used in many distributed-element filters. ### Cascaded lines Cascaded lines are lengths of transmission line where the output of one line is connected to the input of the next. Multiple cascaded lines of different characteristic impedances can be used to construct a filter or a wide-band impedance matching network. This is called a stepped impedance structure. A single, cascaded line one-quarter wavelength long forms a quarter-wave impedance transformer. This has the useful property of transforming any impedance network into its dual; in this role, it is called an impedance inverter. This structure can be used in filters to implement a lumped-element prototype in ladder topology as a distributed-element circuit. The quarter-wave transformers are alternated with a distributed-element resonator to achieve this. However, this is now a dated design; more compact inverters, such as the impedance step, are used instead. An impedance step is the discontinuity formed at the junction of two cascaded transmission lines with different characteristic impedances. ### Cavity resonator A cavity resonator is an empty (or sometimes dielectric-filled) space surrounded by conducting walls. Apertures in the walls couple the resonator to the rest of the circuit. Resonance occurs due to electromagnetic waves reflected back and forth from the cavity walls setting up standing waves. Cavity resonators can be used in many media, but are most naturally formed in waveguide from the already existing metal walls of the guide. ### Dielectric resonator A dielectric resonator is a piece of dielectric material exposed to electromagnetic waves. It is most often in the form of a cylinder or thick disc. Although cavity resonators can be filled with dielectric, the essential difference is that in cavity resonators the electromagnetic field is entirely contained within the cavity walls. A dielectric resonator has some field in the surrounding space. This can lead to undesirable coupling with other components. 
The major advantage of dielectric resonators is that they are considerably smaller than the equivalent air-filled cavity. ### Helical resonator A helical resonator is a helix of wire in a cavity; one end is unconnected, and the other is bonded to the cavity wall. Although they are superficially similar to lumped inductors, helical resonators are distributed-element components and are used in the VHF and lower UHF bands. ### Fractals The use of fractal-like curves as a circuit component is an emerging field in distributed-element circuits. Fractals have been used to make resonators for filters and antennae. One of the benefits of using fractals is their space-filling property, making them smaller than other designs. Other advantages include the ability to produce wide-band and multi-band designs, good in-band performance, and good out-of-band rejection. In practice, a true fractal cannot be made because at each fractal iteration the manufacturing tolerances become tighter and are eventually greater than the construction method can achieve. However, after a small number of iterations, the performance is close to that of a true fractal. These may be called pre-fractals or finite-order fractals where it is necessary to distinguish from a true fractal. Fractals that have been used as a circuit component include the Koch snowflake, Minkowski island, Sierpiński curve, Hilbert curve, and Peano curve. The first three are closed curves, suitable for patch antennae. The latter two are open curves with terminations on opposite sides of the fractal. This makes them suitable for use where a connection in cascade is required. ### Taper A taper is a transmission line with a gradual change in cross-section. It can be considered the limiting case of the stepped impedance structure with an infinite number of steps. Tapers are a simple way of joining two transmission lines of different characteristic impedances. Using tapers greatly reduces the mismatch effects that a direct join would cause. If the change in cross-section is not too great, no other matching circuitry may be needed. Tapers can provide transitions between lines in different media, especially different forms of planar media. Tapers commonly change shape linearly, but a variety of other profiles may be used. The profile that achieves a specified match in the shortest length is known as a Klopfenstein taper and is based on the Chebychev filter design. Tapers can be used to match a transmission line to an antenna. In some designs, such as the horn antenna and Vivaldi antenna, the taper is itself the antenna. Horn antennae, like other tapers, are often linear, but the best match is obtained with an exponential curve. The Vivaldi antenna is a flat (slot) version of the exponential taper. ### Distributed resistance Resistive elements are generally not useful in a distributed-element circuit. However, distributed resistors may be used in attenuators and line terminations. In planar media they can be implemented as a meandering line of high-resistance material, or as a deposited patch of thin-film or thick-film material. In waveguide, a card of microwave absorbent material can be inserted into the waveguide. ## Circuit blocks ### Filters and impedance matching Filters are a large percentage of circuits constructed with distributed elements. A wide range of structures are used for constructing them, including stubs, coupled lines and cascaded lines. Variations include interdigital filters, combline filters and hairpin filters. 
More-recent developments include fractal filters. Many filters are constructed in conjunction with dielectric resonators. As with lumped-element filters, the more elements used, the closer the filter comes to an ideal response; the structure can become quite complex. For simple, narrow-band requirements, a single resonator may suffice (such as a stub or spurline filter). Impedance matching for narrow-band applications is frequently achieved with a single matching stub. However, for wide-band applications the impedance-matching network assumes a filter-like design. The designer prescribes a required frequency response, and designs a filter with that response. The only difference from a standard filter design is that the filter's source and load impedances differ. ### Power dividers, combiners and directional couplers A directional coupler is a four-port device which couples power flowing in one direction from one path to another. Two of the ports are the input and output ports of the main line. A portion of the power entering the input port is coupled to a third port, known as the coupled port. None of the power entering the input port is coupled to the fourth port, usually known as the isolated port. For power flowing in the reverse direction and entering the output port, a reciprocal situation occurs; some power is coupled to the isolated port, but none is coupled to the coupled port. A power divider is often constructed as a directional coupler, with the isolated port permanently terminated in a matched load (making it effectively a three-port device). There is no essential difference between the two devices. The term directional coupler is usually used when the coupling factor (the proportion of power reaching the coupled port) is low, and power divider when the coupling factor is high. A power combiner is simply a power splitter used in reverse. In distributed-element implementations using coupled lines, indirectly coupled lines are more suitable for low-coupling directional couplers; directly coupled branch line couplers are more suitable for high-coupling power dividers. Distributed-element designs rely on an element length of one-quarter wavelength (or some other specific length); this will hold true at only one frequency. Simple designs, therefore, have a limited bandwidth over which they will work successfully. Like impedance matching networks, a wide-band design requires multiple sections and the design begins to resemble a filter. #### Hybrids A directional coupler which splits power equally between the output and coupled ports (a 3 dB coupler) is called a hybrid. Although "hybrid" originally referred to a hybrid transformer (a lumped device used in telephones), it now has a broader meaning. A widely used distributed-element hybrid which does not use coupled lines is the hybrid ring or rat-race coupler. Each of its four ports is connected to a ring of transmission line at a different point. Waves travel in opposite directions around the ring, setting up standing waves. At some points on the ring, destructive interference results in a null; no power will leave a port set at that point. At other points, constructive interference maximises the power transferred. Another use for a hybrid coupler is to produce the sum and difference of two signals. When two input signals are fed into the ports marked 1 and 2, the sum of the two signals appears at the port marked Σ, and the difference at the port marked Δ.
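This sum-and-difference behaviour can be illustrated numerically. The short Python sketch below is only a minimal illustration under idealised assumptions, not a description of any particular device: it assumes a lossless hybrid with an exact 3 dB split, ignores the fixed phase offset a real device introduces, and the input values v1 and v2 are arbitrary examples.

```python
import math

# Arbitrary example input phasors applied to ports 1 and 2 (illustrative only).
v1 = 1.0 + 0.0j
v2 = 0.6 - 0.3j

# Idealised, lossless 3 dB hybrid: the sum (Σ) port carries (v1 + v2)/sqrt(2)
# and the difference (Δ) port carries (v1 - v2)/sqrt(2); any fixed phase
# offset of a real device is ignored here.
v_sum = (v1 + v2) / math.sqrt(2)
v_diff = (v1 - v2) / math.sqrt(2)

# For a lossless hybrid the total output power equals the total input power.
p_in = abs(v1) ** 2 + abs(v2) ** 2
p_out = abs(v_sum) ** 2 + abs(v_diff) ** 2

print("sum port:       ", v_sum)
print("difference port:", v_diff)
print("power in =", round(p_in, 6), "power out =", round(p_out, 6))
```

With equal, in-phase signals at the two inputs, such an idealised hybrid delivers all of the power to the Σ port and none to the Δ port, which corresponds to the null produced by destructive interference described above.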
In addition to their uses as couplers and power dividers, directional couplers can be used in balanced mixers, frequency discriminators, attenuators, phase shifters, and antenna array feed networks. ### Circulators A circulator is usually a three- or four-port device in which power entering one port is transferred to the next port in rotation, as if round a circle. Power can flow in only one direction around the circle (clockwise or anticlockwise), and no power is transferred to any of the other ports. Most distributed-element circulators are based on ferrite materials. Uses of circulators include serving as an isolator to protect a transmitter (or other equipment) from damage due to reflections from the antenna, and as a duplexer connecting the antenna, transmitter and receiver of a radio system. An unusual application of a circulator is in a reflection amplifier, where the negative resistance of a Gunn diode is used to reflect back more power than it received. The circulator is used to direct the input and output power flows to separate ports. Passive circuits, both lumped and distributed, are nearly always reciprocal; however, circulators are an exception. There are several equivalent ways to define or represent reciprocity. A convenient one for circuits at microwave frequencies (where distributed-element circuits are used) is in terms of their S-parameters. A reciprocal circuit will have an S-parameter matrix, [S], which is symmetric. From the definition of a circulator, it is clear that this will not be the case, <math>[S] = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}</math> for an ideal three-port circulator, showing that circulators are non-reciprocal by definition. It follows that it is impossible to build a circulator from standard passive components (lumped or distributed). The presence of a ferrite, or some other non-reciprocal material or system, is essential for the device to work. ## Active components Distributed elements are usually passive, but most applications will require active components in some role. A microwave hybrid integrated circuit uses distributed elements for many passive components, but active components (such as diodes and transistors) and some passive components are discrete. The active components may be packaged, or they may be placed on the substrate in chip form without individual packaging to reduce size and eliminate packaging-induced parasitics. Distributed amplifiers consist of a number of amplifying devices (usually FETs), with all their inputs connected via one transmission line and all their outputs via another transmission line. The lengths of the two lines must be equal between adjacent transistors for the circuit to work correctly, and each transistor adds to the output of the amplifier. This is different from a conventional multistage amplifier, where the gain is multiplied by the gain of each stage. Although a distributed amplifier has lower gain than a conventional amplifier with the same number of transistors, it has significantly greater bandwidth. In a conventional amplifier, the bandwidth is reduced by each additional stage; in a distributed amplifier, the overall bandwidth is the same as the bandwidth of a single stage. Distributed amplifiers are used when a single large transistor (or a complex, multi-transistor amplifier) would be too large to treat as a lumped component; the linking transmission lines separate the individual transistors.
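The contrast between multiplicative gain in a cascade and additive gain in a distributed amplifier can be made concrete with a rough numerical sketch. The Python fragment below is a first-order illustration only, not a design method: it assumes identical stages, models each stage as a single-pole low-pass response, neglects line loss and matching effects, and the values n_stages, g_stage and f_stage are made-up examples.

```python
import math

# Hypothetical per-stage figures, chosen only for illustration.
n_stages = 4       # number of transistors
g_stage = 5.0      # voltage gain contribution of a single stage
f_stage = 2.0e9    # -3 dB bandwidth of a single stage, in hertz

# Conventional cascade: stage gains multiply, and for N identical single-pole
# stages the overall -3 dB bandwidth shrinks to f_stage * sqrt(2**(1/N) - 1).
cascade_gain = g_stage ** n_stages
cascade_bw = f_stage * math.sqrt(2 ** (1 / n_stages) - 1)

# Idealised distributed amplifier: stage contributions add, and the overall
# bandwidth stays roughly that of a single stage.
distributed_gain = g_stage * n_stages
distributed_bw = f_stage

print(f"cascade:     gain {cascade_gain:.0f}, bandwidth {cascade_bw / 1e9:.2f} GHz")
print(f"distributed: gain {distributed_gain:.0f}, bandwidth {distributed_bw / 1e9:.2f} GHz")
```

With these example numbers the cascade has a gain of 625 but a bandwidth of roughly 0.87 GHz, while the distributed arrangement has a gain of only 20 with the full 2 GHz bandwidth of a single stage, matching the trade-off described above.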
## History Distributed-element modelling was first used in electrical network analysis by Oliver Heaviside in 1881. Heaviside used it to find a correct description of the behaviour of signals on the transatlantic telegraph cable. Transmission of early transatlantic telegraph had been difficult and slow due to dispersion, an effect which was not well understood at the time. Heaviside's analysis, now known as the telegrapher's equations, identified the problem and suggested methods for overcoming it. It remains the standard analysis of transmission lines. Warren P. Mason was the first to investigate the possibility of distributed-element circuits, and filed a patent in 1927 for a coaxial filter designed by this method. Mason and Sykes published the definitive paper on the method in 1937. Mason was also the first to suggest a distributed-element acoustic filter in his 1927 doctoral thesis, and a distributed-element mechanical filter in a patent filed in 1941. Mason's work was concerned with the coaxial form and other conducting wires, although much of it could also be adapted for waveguide. The acoustic work had come first, and Mason's colleagues in the Bell Labs radio department asked him to assist with coaxial and waveguide filters. Before World War II, there was little demand for distributed-element circuits; the frequencies used for radio transmissions were lower than the point at which distributed elements became advantageous. Lower frequencies had a greater range, a primary consideration for broadcast purposes. These frequencies require long antennae for efficient operation, and this led to work on higher-frequency systems. A key breakthrough was the 1940 introduction of the cavity magnetron which operated in the microwave band and resulted in radar equipment small enough to install in aircraft. A surge in distributed-element filter development followed, filters being an essential component of radars. The signal loss in coaxial components led to the first widespread use of waveguide, extending the filter technology from the coaxial domain into the waveguide domain. The wartime work was mostly unpublished until after the war for security reasons, which made it difficult to ascertain who was responsible for each development. An important centre for this research was the MIT Radiation Laboratory (Rad Lab), but work was also done elsewhere in the US and Britain. The Rad Lab work was published by Fano and Lawson. Another wartime development was the hybrid ring. This work was carried out at Bell Labs, and was published after the war by W. A. Tyrrell. Tyrrell describes hybrid rings implemented in waveguide, and analyses them in terms of the well-known waveguide magic tee. Other researchers soon published coaxial versions of this device. George Matthaei led a research group at Stanford Research Institute which included Leo Young and was responsible for many filter designs. Matthaei first described the interdigital filter and the combline filter. The group's work was published in a landmark 1964 book covering the state of distributed-element circuit design at that time, which remained a major reference work for many years. Planar formats began to be used with the invention of stripline by Robert M. Barrett. Although stripline was another wartime invention, its details were not published until 1951. 
Microstrip, invented in 1952, became a commercial rival of stripline; however, planar formats did not start to become widely used in microwave applications until better dielectric materials became available for the substrates in the 1960s. Another structure which had to wait for better materials was the dielectric resonator. Its advantages (compact size and a high quality factor) were first pointed out by R. D. Richtmyer in 1939, but materials with good temperature stability were not developed until the 1970s. Dielectric resonators are now common in waveguide and transmission-line filters. Important theoretical developments included Paul I. Richards' commensurate line theory, which was published in 1948, and Kuroda's identities, a set of transforms which overcame some practical limitations of Richards' theory, published by Kuroda in 1955. According to Nathan Cohen, the log-periodic antenna, invented by Raymond DuHamel and Dwight Isbell in 1957, should be considered the first fractal antenna. However, its self-similar nature, and hence its relation to fractals, was missed at the time. It is still not usually classed as a fractal antenna. Cohen was the first to explicitly identify the class of fractal antennae after being inspired by a lecture by Benoit Mandelbrot in 1987, but he could not get a paper published until 1995.
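As a brief numerical illustration of the commensurate-line idea mentioned above, the sketch below compares a lumped inductor with the short-circuited stub that a Richards-style substitution would use in its place; the inductance, design frequency, and eighth-wavelength stub length are assumptions made for the example, not values from the text.

```python
import math

# Sketch of a Richards-style substitution: replace a lumped inductor by a
# short-circuited stub one eighth of a wavelength long at the design
# frequency f0, choosing Z0 = 2*pi*f0*L so the reactances agree at f0.
L = 5e-9                      # target inductance, henries (assumed)
f0 = 2e9                      # design frequency, Hz (assumed)
z0 = 2 * math.pi * f0 * L     # stub characteristic impedance, ohms

for f in (0.5e9, 1e9, 2e9, 3e9):
    x_lumped = 2 * math.pi * f * L                   # reactance of the inductor
    x_stub = z0 * math.tan(math.pi * f / (4 * f0))   # reactance of the lambda/8 stub
    print(f"{f / 1e9:3.1f} GHz: lumped {x_lumped:6.1f} ohm, stub {x_stub:6.1f} ohm")
```

The two reactances agree at the design frequency but diverge away from it, and the stub response repeats periodically in frequency, which is the characteristic distributed-element behaviour that commensurate-line designs have to take into account.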
51,240,628
Roman withdrawal from Africa (255 BC)
1,165,648,062
Major Roman rescue operation during the First Punic War
[ "250s BC conflicts", "255 BC", "3rd century BC in the Roman Republic", "Military withdrawals", "Naval battles of the First Punic War" ]
The Roman withdrawal from Africa was the attempt by the Roman Republic in 255 BC to rescue the survivors of their defeated expeditionary force to Carthaginian Africa during the First Punic War. A large fleet commanded by Servius Fulvius Paetinus Nobilior and Marcus Aemilius Paullus successfully evacuated the survivors after defeating an intercepting Carthaginian fleet, but was struck by a storm while returning, losing most of its ships. The Romans had invaded the Carthaginian homeland (in what is now north eastern Tunisia) in 256 BC. After initial successes, they had left a force of 15,500 men to hold their lodgement over the winter. This force, commanded by Marcus Atilius Regulus, was decisively beaten at the Battle of Tunis in the spring of 255 BC, leading to Regulus' capture. Two thousand survivors were besieged in the port of Aspis. The Roman fleet of 390 warships was sent to rescue and evacuate them. A Carthaginian fleet of 200 ships intercepted them off Cape Hermaeum (the modern Cape Bon or Ras ed-Dar), north of Aspis. The Carthaginians were defeated with 114 of their ships captured, together with their crews, and 16 sunk. Roman losses are unknown; most modern historians assume there were none. The Romans landed in Aspis, sortied, dispersed the besiegers and raided the surrounding country for food. All then re-embarked and left for Italy. Off the south-east corner of Sicily, a sudden summer storm blew up and devastated the Roman fleet. From their total of 464 warships, 384 were sunk, as were 300 transports; and more than 100,000 men were lost. Despite the heavy losses of both sides, the war continued for a further 14 years, mostly on Sicily or the nearby waters, before ending with a Roman victory. ## Primary sources The main source for almost every aspect of the First Punic War is the historian Polybius (c. 200 – c. 118 BC), a Greek sent to Rome in 167 BC as a hostage. His works include a now-lost manual on military tactics, but he is known today for The Histories, written sometime after 146 BC, or about a century after the Battle of Cape Hermaeum. Polybius's work is considered broadly objective and largely neutral as between Carthaginian and Roman points of view. Carthaginian written records were destroyed along with their capital, Carthage, in 146 BC and so Polybius's account of the First Punic War is based on several, now-lost, Greek and Latin sources. Polybius was an analytical historian and wherever possible personally interviewed participants in the events he wrote about. Only the first book of the forty comprising The Histories deals with the First Punic War. The accuracy of Polybius's account has been much debated over the past 150 years, but the modern consensus is to accept it largely at face value and the details of the withdrawal in modern sources are largely based on interpretations of Polybius's account. The modern historian Andrew Curry has stated that "Polybius turns out to [be] fairly reliable"; while Dexter Hoyos describes him as "a remarkably well-informed, industrious, and insightful historian". Other, later, histories of the war exist, but in fragmentary or summary form, and they usually cover military operations on land in more detail than those at sea. Modern historians usually take into account the later histories of Diodorus Siculus and Dio Cassius, although the classicist Adrian Goldsworthy states "Polybius' account is usually to be preferred when it differs with any of our other accounts". 
Other sources include inscriptions, archaeological evidence and empirical evidence from reconstructions such as the trireme Olympias. Since 2010 a number of artefacts have been recovered from the site of the Battle of the Aegates, the final battle of the war, fought fourteen years later. Their analysis and the recovery of further items are ongoing. ## Background ### Operations in Sicily In 264 BC, the states of Carthage and Rome went to war, starting the First Punic War. Carthage was a well-established maritime power in the western Mediterranean; mainland Italy south of the River Arno had recently been unified under Roman control. According to the classicist Richard Miles, Rome's expansionary attitude after southern Italy came under its control combined with Carthage's proprietary approach to Sicily caused the two powers to stumble into war more by accident than design. The immediate cause of the war was the issue of control of the independent Sicilian city state of Messana (modern Messina). ### Ships During this period the standard Mediterranean warship was the quinquereme, meaning "five-rowers". The quinquereme was a galley, c. 45 metres (150 ft) long, c. 5 metres (16 ft) wide at water level, with its deck standing c. 3 metres (10 ft) above the sea and displacing around 100 tonnes (110 short tons; 100 long tons). The modern expert on galleys John Coates suggests they could maintain 7 knots (8.1 mph; 13 km/h) for extended periods. The modern replica galley Olympias has achieved speeds of 8.5 knots (9.8 mph; 15.7 km/h) and cruised at 4 knots (4.6 mph; 7.4 km/h) for hours on end. Average speeds of 5–6 knots (6–7 mph; 9.7–11.3 km/h) were recorded on contemporary voyages of up to a week. Vessels were built as cataphract, or "protected", ships, with a closed hull and a full deck able to carry marines and catapults. They had a separate "oar box" attached to the main hull which contained the rowers. These features allowed the hull to be strengthened, increased carrying capacity and improved conditions for the rowers. The generally accepted theory regarding the arrangement of oarsmen in quinqueremes is that there would be sets – or files – of three oars, one above the other, with two oarsmen on each of the two uppermost oars and one on the lower, for a total of five oarsmen per file. This would be repeated down the side of a galley for a total of 28 files on each side; 168 oars in total. The Romans had little naval experience prior to the First Punic War; on the few occasions they had previously needed a naval presence they had usually relied on small squadrons provided by their Latin or Greek allies. In 260 BC the Romans set out to construct a fleet and used a shipwrecked Carthaginian quinquereme as a blueprint for their own. As novice shipwrights, the Romans built copies that were heavier than the Carthaginian vessels and thus slower and less manoeuvrable. The quinquereme was the workhorse of the Roman and Carthaginian fleets throughout the Punic Wars, although hexaremes (six oarsmen per bank), quadriremes (four oarsmen per bank) and triremes (three oarsmen per bank) are occasionally mentioned in the sources. So ubiquitous was the type that Polybius uses it as a shorthand for "warship" in general. A quinquereme carried a crew of 300: 280 oarsmen and 20 deck crew and officers. It would also normally carry a complement of 40 marines; if battle was thought to be imminent this would be increased to as many as 120.
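The oar and crew figures quoted above fit together arithmetically; the short check below simply restates the numbers already given in the text, not additional evidence.

```python
# Consistency check of the quinquereme figures quoted above.
files_per_side = 28            # files of oars along each side
oars_per_file = 3              # three oars, one above the other, in each file
rowers_per_file = 2 + 2 + 1    # two men on each of the upper oars, one on the lowest

total_oars = 2 * files_per_side * oars_per_file        # 168 oars
total_rowers = 2 * files_per_side * rowers_per_file    # 280 oarsmen

print(total_oars, total_rowers)   # matches the 168 oars and 280 oarsmen given above
```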
Getting the oarsmen to row as a unit, let alone to execute more complex battle manoeuvres, required long and arduous training. At least half of the oarsmen needed to have had some experience if the ship was to be handled effectively. As a result, the Romans were initially at a disadvantage against the more experienced Carthaginians. All warships were equipped with a ram, a triple set of 60-centimetre-wide (2 ft) bronze blades weighing up to 270 kilograms (600 lb) positioned at the waterline. All of the rams recovered by modern archeologists were made individually by the lost-wax method to fit immovably to a galley's prow, and secured with bronze spikes. Ideally one would attack an enemy ship from its side or rear, thus avoiding the possibility of being rammed oneself. Skill was required to impact an opposing galley forcefully enough to break loose its timbers and cause it to founder, but not so forcefully as to embed one's own galley in the stricken enemy. Each vessel relied to a large extent on the other vessels in its squadron for protection and tactics involved the manoeuvring of whole squadrons rather than individual ships; although battles sometimes broke down into a series of ship on ship combats which have been likened to aerial dogfights. ## Invasion of Africa Largely because of the Romans' invention of the corvus, a device that enabled them to grapple and board enemy vessels more easily, the Carthaginians were defeated in large naval battles at Mylae (260 BC) and Sulci (257 BC). Encouraged by these and frustrated at the continuing stalemate in Sicily, the Romans changed their focus to a sea-based strategy and developed a plan to invade the Carthaginian heartland in North Africa and threaten Carthage (close to Tunis). Both sides were determined to establish naval supremacy and invested large amounts of money and manpower in maintaining and increasing the size of their navies. The Roman fleet of 330 warships plus an unknown number of transport ships sailed from Ostia, the port of Rome, in early 256 BC, commanded by the consuls for the year, Marcus Atilius Regulus and Lucius Manlius Vulso Longus. They embarked approximately 26,000 picked legionaries from the Roman forces on Sicily. They planned to cross to Africa and invade what is now Tunisia. The Carthaginians were aware of the Romans' intentions and mustered all available warships, 350, under Hanno and Hamilcar, off the south coast of Sicily to intercept them. A combined total of about 680 warships carrying up to 290,000 crew and marines met in the Battle of Cape Ecnomus. The Carthaginians took the initiative, anticipating that their superior ship-handling skills would tell. After a prolonged and confused day of fighting the Carthaginians were defeated, losing 30 ships sunk and 64 captured to Roman losses of 24 ships sunk. As a result of the battle, the Roman army, commanded by Regulus, landed in Africa near Aspis (modern Kelibia) and captured it. Most of the Roman ships returned to Sicily, leaving Regulus with 15,000 infantry and 500 cavalry to continue the war in Africa. Regulus advanced on the city of Adys and besieged it. The Carthaginians, meanwhile, had recalled Hamilcar from Sicily with 5,000 infantry and 500 cavalry. Hamilcar, Hasdrubal and Bostar were placed in joint command of an army which was strong in cavalry and elephants and was approximately the same size as the Romans'. The Romans carried out a night march and launched a surprise dawn attack on the Carthaginian camp from two directions. 
After confused fighting, the Carthaginians broke and fled. ## Roman reversal and withdrawal ### Battle of Tunis The Romans followed up and captured Tunis, only 16 kilometres (10 mi) from Carthage. In despair, the Carthaginians sued for peace, but Regulus's proposed terms were so harsh the Carthaginians decided to fight on. They gave charge of the training of their army to the Spartan mercenary commander Xanthippus. In the spring of 255 BC Xanthippus led an army of 12,000 infantry, 4,000 cavalry and 100 war elephants against the Romans' infantry-based army at the Battle of Tunis. The Romans had no effective answer to the elephants, their outnumbered cavalry were chased from the field and the Carthaginian cavalry then surrounded most of the Romans and decisively defeated them. Most of the Romans were killed, while approximately 500, including Regulus, were captured; another 2,000 Romans escaped and retreated to Aspis which was situated on a high and naturally strong position and overlooked the natural harbour of the Bay of Clupea. Xanthippus, fearful of the envy of the Carthaginian generals he had outdone, took his pay and returned to Greece. ### Battle of Cape Hermaeum Later in 255 BC the Romans sent a fleet of 350 quinqueremes and more than 300 transports to evacuate their survivors, who were under siege in Aspis. Both consuls for the year, Servius Fulvius Paetinus Nobilior and Marcus Aemilius Paullus, accompanied the fleet. They captured the island of Cossyra en route. The Carthaginians attempted to oppose the evacuation with 200 quinqueremes. They intercepted the Romans off Cape Hermaeum (the modern Cape Bon or Ras ed-Dar), a little to the north of Aspis. The 40 Roman ships which had been left to support Regulus's force over the winter sortied from Aspis to join the fight. Few details of the battle have survived. The Carthaginians were concerned they would be encircled by the larger Roman fleet and so sailed close to the coast. However, the Carthaginian ships were outmanoeuvred and pinned against the coast, where many were boarded via the corvus and captured, or forced to beach. The Carthaginians were defeated and 114 of their ships were captured, together with their crews, and 16 sunk. What, if any, the Roman losses were is not known; most modern historians assume there were none. The historian Marc DeSantis suggests that a lack of soldiers serving as marines on the Carthaginian ships, compared with the Romans', may have been a factor in their defeat and in the large number of vessels captured. ### Storm off Camarina The fleet docked at Aspis, where the Roman garrison – reinforced by the fleet's marines – sortied, dispersed the besiegers and raided the surrounding country for food. All then re-embarked and left for Italy. They sailed directly to Sicily, making landfall at its south-west corner, then proceeded along the south coast. In mid-July, somewhere between the friendly city of Camarina and Cape Passaro, the south-east corner of Sicily, a sudden summer storm blew up and devastated the Roman fleet. From their total of 464 warships, 384 were sunk, as were 300 transports and more than 100,000 men were lost. DeSantis considers 100,000 to be a conservative estimate while the historian Howard Scullard breaks the loss down as 25,000 soldiers, who would have included many of the survivors of Regulus's army; and 70,000 rowers and crew, with many of these probably being Carthaginians taken captive in the recent battle. 
The majority of the casualties are assumed to have been non-Roman Latin allies. It is possible that the presence of the corvus made the Roman ships unusually unseaworthy; there is no record of them being used after this disaster. Polybius is critical of what he considers the poor judgement and seamanship displayed immediately prior to the storm. Both consuls survived and, despite the loss of most of their fleet, each was awarded a triumph in January 254 for their victory at Cape Hermaeum. Scullard says this is a clear indication "the subsequent tragedy was regarded as due to natural causes rather than to bad seamanship". ## Aftermath Paullus built a column at his own expense on the Capitoline Hill in Rome celebrating the victory. In keeping with tradition he adorned it with the prows of captured Carthaginian ships. The column was destroyed by lightning in 172 BC. The war continued, with neither side able to gain a decisive advantage. The Romans rapidly rebuilt their fleet, adding 220 new ships, and captured Panormus (modern Palermo) in 254 BC. The next year they lost 150 ships to another storm. Slowly the Romans had occupied most of Sicily; in 249 BC they besieged the last two Carthaginian strongholds – in the extreme west. They also launched a surprise attack on the Carthaginian fleet, but were defeated at the Battle of Drepana. The Carthaginians followed up their victory and most of the remaining Roman warships were lost at the Battle of Phintias; the Romans were all but swept from the sea. It was to be seven years before Rome again attempted to field a substantial fleet, while Carthage put most of its ships into reserve to save money and free up manpower. After several years of stalemate, the Romans rebuilt their fleet again in 243 BC and effectively blockaded the Carthaginian garrisons. Carthage assembled a fleet which attempted to relieve them, but it was destroyed at the Battle of the Aegates Islands in 241 BC, forcing the cut-off Carthaginian troops on Sicily to negotiate for peace. The terms offered to Carthage were more generous than those proposed by Regulus. The question of which state was to control the western Mediterranean remained open, and when Carthage besieged the Roman-protected town of Saguntum in eastern Iberia in 218 BC, it ignited the Second Punic War with Rome. ## Notes, citations and sources
41,735,520
Caesar Hull
1,151,304,897
Southern Rhodesian World War II flying ace
[ "1914 births", "1940 deaths", "Aerobatic pilots", "Alumni of St John's College (Johannesburg)", "Aviators killed by being shot down", "Boxers at the 1934 British Empire Games", "Commonwealth Games competitors for South Africa", "Lightweight boxers", "Recipients of the Distinguished Flying Cross (United Kingdom)", "Rhodesian male boxers", "Royal Air Force personnel killed in World War II", "Royal Air Force pilots of World War II", "Royal Air Force squadron leaders", "South African male boxers", "Southern Rhodesian World War II flying aces", "Southern Rhodesian military personnel killed in World War II", "Sportspeople from Matabeleland North Province", "The Few", "White Rhodesian people" ]
Caesar Barrand Hull, DFC (26 February 1914 – 7 September 1940) was a Royal Air Force (RAF) flying ace during the Second World War, noted especially for his part in the fighting for Narvik during the Norwegian Campaign in 1940, and for being one of "The Few"—the Allied pilots of the Battle of Britain, in which he was shot down and killed. From a farming family, Hull's early years were spent in Southern Rhodesia, South Africa and Swaziland. He boxed for South Africa at the 1934 Empire Games. After being turned down by the South African Air Force because he did not speak Afrikaans, he joined the RAF and, on becoming a pilot officer in August 1936, mustered into No. 43 Squadron at RAF Tangmere in Sussex. A skilful pilot, Hull dedicated much of his pre-war service to aerobatics, flying Hawker Audaxes, Furies and Hurricanes. He reacted to the outbreak of war with enthusiasm and achieved No. 43 Squadron's first victory of the conflict in late January 1940. Reassigned to Norway in May 1940 to command a flight of Gloster Gladiator biplanes belonging to No. 263 Squadron, he downed four German aircraft in an hour over the Bodø area south-west of Narvik on 26 May, a feat that earned him the Distinguished Flying Cross. He was shot down the next day, and invalided back to England. Hull returned to action at the end of August, when he was made commander of No. 43 Squadron with the rank of squadron leader. A week later, he died in a dogfight over south London. With eight confirmed aerial victories during the war, including five over Norway, Hull was the RAF's first Gladiator ace and the most successful RAF pilot of the Norwegian Campaign. He was buried among fellow fighter pilots at Tangmere, and a monument to his memory was erected near his birthplace in Southern Rhodesia. This remained until 2004, when the plaque was transported to England and donated to the Tangmere Military Aviation Museum. Other memorials to Hull were built in Bodø in 1977 and Purley, where his aircraft crashed, in 2013. ## Early life Caesar Barrand Hull was born on 26 February 1914 at Leachdale Farm, a property near Shangani in Southern Rhodesia. His childhood years were divided between Rhodesia and South Africa, and in his early teens the family moved to Swaziland. He was educated at home until 1926, when he began to board at St. John's College in Johannesburg. A champion boxer, he represented South Africa in the lightweight division at the 1934 Empire Games in London. Hull attempted to join the South African Air Force in 1935, but was turned down because he did not speak Afrikaans. He joined the Royal Air Force (RAF) instead, enlisting in England in September 1935. Completing the pilot's course on 3 August 1936 with the rank of pilot officer, he joined No. 43 Squadron at RAF Tangmere in Sussex five days later. Much of Hull's early air force career was dedicated to aerobatics. He and Peter "Prosser" Hanks perfected a routine in which they would change places in a two-seater Hawker Audax in mid-air. Along with Peter Townsend (who joined the squadron at the same time as Hull) and Sergeant Frank Reginald Carey, they formed an aerobatic flight that performed stunts such as loops, barrel rolls and stall turns. Piloting a Hawker Fury, Hull flew the individual aerobatics at the air show at Hendon in 1937 honouring the coronation of King George VI. Hull was promoted to flying officer on 16 April 1938. As war loomed, the squadron began to prepare for combat in late 1938, and in December that year was re-equipped with Hawker Hurricane Mk Is. 
Hull reacted to the outbreak of the Second World War in September 1939 with great excitement; according to Hector Bolitho, No. 43 Squadron's intelligence officer, the Rhodesian leapt from one foot to the other in the officer's mess, repeating the words "wizard, wizard". ## Air war in Europe ### Early war In November 1939, No. 43 Squadron moved to RAF Acklington, near Newcastle-upon-Tyne, flying Hawker Hurricane Mk Is. Amid severe weather conditions, Hull scored the squadron's first victory of the war on 30 January 1940, when he shot down a Heinkel He 111 bomber of the Luftwaffe near the island of Coquet. On 26 February the squadron was transferred to RAF Wick in northern Scotland to help protect the Home Fleet at Scapa Flow. Hull, Carey and three others together downed another He 111 on 28 March 1940. On 10 April 1940, Hull took part in the destruction of a reconnaissance He 111. The aircraft had been sent out in advance of a major raid launched by He 111s from Kampfgeschwader 26 and Kampfgruppe 100, aimed at covering the German invasion of Norway. When No. 43 Squadron returned to its home base at Tangmere in May 1940, some of its leading pilots were reassigned to other units: among these were Townsend, who was assigned to No. 85 Squadron RAF as its commanding officer, and Hull, who was posted to No. 263 Squadron to command a flight of Gloster Gladiator biplanes during the unit's second committal to the Norwegian Campaign. ### Norway No. 263 Squadron was deployed to the area around Narvik, a strategically valuable port city in northern Norway then under German control, but fiercely contested by the Norwegians and Allies. Crossing the Norwegian Sea aboard the aircraft carrier HMS Furious, the pilots took off on 21 May while at sea, in groups of three each led by a Fairey Swordfish of the Fleet Air Arm, and encountered thick mist around the island of Senja; the Swordfish and two Gladiators from one of the groups crashed into a mountain. Hull led the first four aircraft through and landed safely at Bardufoss airfield, about 80 kilometres (50 mi) north-east of Narvik, at 04:20. A further 12 Gladiators followed four hours later. Fourteen Gladiators were operational and began flying patrols from Bardufoss on 22 May, carrying out 30 sorties on the first day. Hull and two other pilots together downed a He 111 over Salangen on 24 May 1940, killing two of the five German crew; the other three were captured by Norwegian troops after making an emergency landing at Fjordbotneidet. In all, during its two weeks of operations in northern Norway, No. 263 Squadron was to claim 26 confirmed kills and nine probable victories during 70 dogfights. Hull and two other pilots, South African Pilot Officer Jack Falkson and Naval Lieutenant Tony Lydekker, volunteered to be detached to an improvised airstrip at Bodø, a port about 100 kilometres (62 mi) south-west of Narvik, on 26 May 1940 to cover Allied troops who were retreating north for evacuation under Operation Alphabet. Arriving to find the airfield extremely muddy, the pilots had great difficulty moving their aircraft to drier ground to refuel from four-gallon (18-l) tin cans. A He 111 was spotted overhead while this was in progress, prompting the three pilots to scramble having only partially refuelled. Falkson's plane crashed after mud clung to its wheels, and while Lydekker took off successfully, he had so little fuel that Hull almost immediately ordered him to land to add more. 
The Rhodesian pursued the He 111 over the Saltdal valley and, with three attacks from astern, set the bomber ablaze, forcing it to crash. Hull then downed a Junkers Ju 52 transport plane and, after unsuccessfully chasing another He 111, destroyed two more Ju 52s. The transports had been coming to the aid of the hard-pressed German forces fighting around Narvik; one was loaded with supplies, while the other two were carrying Fallschirmjäger paratroops. One of the latter aircraft successfully landed in German-held territory before burning out, allowing the crew and paratroopers aboard to exit safely, but the second spiralled out of control and crashed, killing eight German paratroopers. Hull then attacked another He 111, which soon retreated, giving off smoke. Having used up all his ammunition, Hull returned to Bodø. In the space of about an hour, in a technologically-outdated aircraft and without assistance, he had destroyed four German planes and damaged a fifth. Hull, Falkson and Lydekker spent the night of 26/27 May 1940 patrolling the area around Rognan, about 20 kilometres (12 mi) inland from Bodø. After driving German bombers away from British and Norwegian forces fighting at Pothus south of Rognan, the Gladiators strafed German ground forces. Around 08:00 on 27 May, Bodø was attacked by 11 Ju 87 "Stuka" dive bombers from I./Sturzkampfgeschwader 1 (StG 1 – Dive Bomber Wing 1) and three Messerschmitt Bf 110 fighters attached to I./Zerstörergeschwader 76 (ZG 76 – Destroyer Wing 76). Lydekker claimed one of the Stukas, but was ultimately forced to limp north to Bardufoss to land, his Gladiator heavily damaged. Having initially been caught on the ground by the German attack, Hull got his fighter airborne during a pause in the raid. After engaging the German aircraft and shooting down Feldwebel Kurt Zube's Stuka, which fell into the sea, Hull was overcome by one of the Bf 110s, piloted by Oberleutnant Helmut Lent, and forced to crash near the Bodø airfield. Wounded in the head and the knee, he was initially treated at Bodø Hospital before being evacuated back to Britain for further treatment on a Sunderland flying boat via Harstad. Hull's kills during the Norwegian Campaign made him the RAF's first Gloster Gladiator ace, as well as the most successful RAF fighter pilot of the campaign. On 17 June, while convalescing, he was awarded the Distinguished Flying Cross for his actions in Norway. ### Battle of Britain Hull was declared fit to return to operational duty after about two months' rest and recuperation in Guildford, and on 31 August 1940 he was appointed commanding officer of his former unit, No. 43 Squadron, replacing Squadron Leader John "Tubby" Badger, who had been shot down and grievously wounded the previous day. The unit was still based at Tangmere, flying Hurricanes, and was by now fighting in the Battle of Britain, the Allied participants of which would later be dubbed "The Few". Concurrently promoted to squadron leader, Hull expressed disbelief at his sudden elevation and "as if to emphasise his surprise", Andy Saunders records, suffixed his first description of himself on paper as "Commanding No. 43 Sqn" with four exclamation marks. The first engagement of Hull's command, on 2 September, resulted in three of the squadron's Hurricanes being shot down in return for two Messerschmitt Bf 109s. On 4 September, Hull led a group of Hurricanes in a decisive aerial victory over coastal Sussex against a large group of Bf 110s from ZGs 2 and 76. 
Flight Lieutenant Thomas Dalton-Morgan destroyed a Bf 110 north of Worthing and chased another until it crashed near Shoreham-by-Sea, while Sergeant Jeffreys shot down another Bf 110 in a field. Pilot Officer A E A van den Hove d'Ertsenrijck, from Belgium, pursued a fourth back out to sea and sent it crashing into the English Channel, but was hit himself and compelled to make an emergency landing at RAF Ford. Hull and Pilot Officer Hamilton Upton together seriously damaged two more Bf 110s. Around 16:00 on 7 September 1940, nine Hurricanes of No. 43 Squadron scrambled to intercept a large formation of German aircraft over Kent on their way to London. Hull led six of the aircraft towards the German bombers while Flight Lieutenant John "Killy" Kilmartin, from Ireland, headed a section of three tasked with countering the fighter escort. Hull took his aircraft above the bombers, then dived towards them, telling his pilots to "smash them up". A very fast engagement followed in which Hull was killed while diving to the aid of Flight Lieutenant Dick Reynell, an Australian pilot who had come under heavy attack. Hull was last seen firing at a Dornier Do 17, and was shot down by a Bf 109. Reynell was also killed. The Rhodesian ace's body was discovered largely burnt inside the shell of his Hurricane, which had crashed in the grounds of Purley Boys' High School in Purley, Surrey. He was 26 years old. The loss of Hull and Reynell, two of the squadron's most popular pilots, affected morale deeply. Kilmartin, arriving back at Tangmere on the evening of 7 September, simply muttered "My God, My God". Dalton-Morgan took over command of the squadron. Hull's remains were recovered and returned to Tangmere, where he was buried among fellow fighter pilots at St Andrew's Church. His final confirmed record for the war was four German aircraft destroyed, two damaged and four shared destroyed (counted at half a victory each); also noted were one unconfirmed destroyed, two probably destroyed and one shared probable. ## Memorials After Hull's death, the people of Shangani organised the construction of a memorial in his honour—a granite plinth to which a brass plaque was affixed commemorating the pilot's service and bravery. This monument was completed before the end of the war and erected alongside the main road between Bulawayo and Gwelo, near the bridge over the Shangani River. A memorial to the actions of Hull, Jack Falkson and Lydekker at Bodø was built at the town's airport three decades later, and inaugurated on 17 June 1977 with the Norwegian Minister of Defence, Rolf Arthur Hansen, in attendance. After Rhodesia's reconstitution as Zimbabwe in 1980, Robert Mugabe's government disowned many old monuments making reference to the fallen of the World Wars, including the Hull memorial at Shangani. The Hull family resolved in 2003 to take the plaque down and donate it to the Tangmere Military Aviation Museum, an idea that the museum welcomed. The plaque was removed, flown to England free of charge by MK Airlines—a freight carrier owned by a former Rhodesian Air Force pilot, Mike Kruger—and ceremonially delivered to the Tangmere museum curator on 17 April 2004 by Hull's sister, Wendy Bryan. A new monument to Hull was erected at Coulsdon Sixth Form College, which today occupies the Purley High School site, in 2013. Depicting an aeroplane and a dove intertwined, it was formally dedicated on 11 November that year, Remembrance Day, with Bryan present. 
## Character and reputation Hull was remembered by his comrades as an exceptional pilot and an affable, jovial personality. Jimmy Beedle, in his 1966 history of No. 43 Squadron, called Hull one of its all-time great characters, citing him as a major factor in the squadron's "high standard of flying and ... outstanding squadron spirit". John Simpson, who joined the unit as a pilot officer two months after Hull, recalled finding "a confidence when flying with Caesar that was wholly lacking otherwise." "I have never seen anyone who could throw a fighter about with so much confidence as old Caesar," said another pilot, quoted by Beedle. "Nobody gave me so much confidence to have a lead from, nobody gave me so much exhilaration and fun. Following Caesar you found yourself getting more out of your machine than you had ever imagined was possible, doing things that done by yourself would have made your hair stand on end." "All the superlatives have already been written about Caesar," Beedle wrote. "Caesar Barrand Hull, of the crinkly hair and the croaky voice, the laughing warrior whose idea of a lark was to change seats in the air ... who had a phobia about worms or slugs, who would look under the bed 'in case there are any feenies about', then kneel beside it and say his prayers." Bolitho took a similar line in his 1943 book Combat Report, attesting to Hull's "bubbling, unquenchable gaiety". According to Bolitho, Hull was "possessed of a magic power of creating happiness in others; making them belittle their cares, of inspiring them with confidence, not simply in him but in themselves. Of imbuing them with his own abounding love of life. Where Caesar was, laughter was."
758,947
Gerard K. O'Neill
1,170,111,123
American physicist, author, and inventor (1927–1992)
[ "1927 births", "1992 deaths", "20th-century American physicists", "Accelerator physicists", "American astronomers", "Cornell University alumni", "Deaths from cancer in California", "Deaths from leukemia", "Fellows of the American Physical Society", "Futurologists", "Military personnel from New York City", "Particle physicists", "Princeton University faculty", "Scientists from New York City", "Space advocates", "Space burials", "Stanford University faculty", "Swarthmore College alumni", "United States Navy sailors", "Writers from Brooklyn" ]
Gerard Kitchen O'Neill (February 6, 1927 – April 27, 1992) was an American physicist and space activist. As a faculty member of Princeton University, he invented a device called the particle storage ring for high-energy physics experiments. Later, he invented a magnetic launcher called the mass driver. In the 1970s, he developed a plan to build human settlements in outer space, including a space habitat design known as the O'Neill cylinder. He founded the Space Studies Institute, an organization devoted to funding research into space manufacturing and colonization. O'Neill began researching high-energy particle physics at Princeton in 1954, after he received his doctorate from Cornell University. Two years later, he published his theory for a particle storage ring. This invention allowed particle accelerators at much higher energies than had previously been possible. In 1965 at Stanford University, he performed the first colliding beam physics experiment. While teaching physics at Princeton, O'Neill became interested in the possibility that humans could survive and live in outer space. He researched and proposed a futuristic idea for human settlement in space, the O'Neill cylinder, in "The Colonization of Space", his first paper on the subject. He held a conference on space manufacturing at Princeton in 1975. Many who became post-Apollo-era space activists attended. O'Neill built his first mass driver prototype with professor Henry Kolm in 1976. He considered mass drivers critical for extracting the mineral resources of the Moon and asteroids. His award-winning book The High Frontier: Human Colonies in Space inspired a generation of space exploration advocates. He died of leukemia in 1992. ## Birth, education, and family life O'Neill was born in Brooklyn, New York on February 6, 1927, to Edward Gerard O'Neill, a lawyer, and Dorothy Lewis O'Neill (née Kitchen). He had no siblings. His family moved to Speculator, New York when his father temporarily retired for health reasons. For high school, O'Neill attended Newburgh Free Academy in Newburgh, New York. While he was a student there he edited the school newspaper and took a job as a news broadcaster at a local radio station. He graduated in 1944, during World War II, and enlisted in the United States Navy on his 17th birthday. The Navy trained him as a radar technician, which sparked his interest in science. After he was honorably discharged in 1946, O'Neill studied physics and mathematics at Swarthmore College. As a child he had discussed the possibilities of humans in space with his parents, and in college he enjoyed working on rocket equations. However, he did not see space science as an option for a career path in physics, choosing instead to pursue high-energy physics. He graduated with Phi Beta Kappa honors in 1950. O'Neill pursued graduate studies at Cornell University with the help of an Atomic Energy Commission fellowship, and was awarded a PhD in physics in 1954. O'Neill married Sylvia Turlington, also a Swarthmore graduate, in June 1950. They had a son, Roger, and two daughters, Janet and Eleanor, before their marriage ended in divorce in 1966. One of O'Neill's favorite activities was flying. He held instrument certifications in both powered and sailplane flight and held the FAI Diamond Badge, a gliding award. During his first cross-country glider flight in April 1973, he was assisted on the ground by Renate "Tasha" Steffen. He had met Tasha, who was 21 years younger than him, previously through the YMCA International Club. 
They were married the day after his flight. They had a son, Edward O'Neill. ## High-energy physics research After graduating from Cornell, O'Neill accepted a position as an instructor at Princeton University. There he started his research into high-energy particle physics. In 1956, his second year of teaching, he published a two-page article that theorized that the particles produced by a particle accelerator could be stored for a few seconds in a storage ring. The stored particles could then be directed to collide with another particle beam. This would increase the energy of the particle collision over the previous method, which directed the beam at a fixed target. His ideas were not immediately accepted by the physics community. O'Neill became an assistant professor at Princeton in 1956, and was promoted to associate professor in 1959. He visited Stanford University in 1957 to meet with Wolfgang K. H. Panofsky. This resulted in a collaboration between Princeton and Stanford to build the Colliding Beam Experiment (CBX). With a US\$800,000 grant from the Office of Naval Research, construction on the first particle storage rings began in 1958 at the Stanford High-Energy Physics Laboratory. He figured out how to capture the particles and, by pumping the air out to produce a vacuum, store them long enough to experiment on them. CBX stored its first beam on March 28, 1962. O'Neill became a full professor of physics in 1965. In collaboration with Burton Richter, O'Neill performed the first colliding beam physics experiment in 1965. In this experiment, particle beams from the Stanford Linear Accelerator were collected in his storage rings and then directed to collide at an energy of 600 MeV. At the time, this was the highest energy involved in a particle collision. The results proved that the charge of an electron is contained in a volume less than 100 attometers across. O'Neill considered his device to be capable of only seconds of storage, but, by creating an even stronger vacuum, others were able to increase this to hours. In 1979, he, with physicist David C. Cheng, wrote the graduate-level textbook Elementary Particle Physics: An Introduction. He retired from teaching in 1985, but remained associated with Princeton as professor emeritus until his death. ## Space colonization ### Origin of the idea (1969) O'Neill saw great potential in the United States space program, especially the Apollo missions. He applied to the Astronaut Corps after NASA opened it up to civilian scientists in 1966. Later, when asked why he wanted to go on the Moon missions, he said, "to be alive now and not take part in it seemed terribly myopic". He was put through NASA's rigorous mental and physical examinations. During this time he met Brian O'Leary, also a scientist-astronaut candidate, who became his good friend. O'Leary was selected for Astronaut Group 6 but O'Neill was not. O'Neill became interested in the idea of space colonization in 1969 while he was teaching freshman physics at Princeton University. His students were growing cynical about the benefits of science to humanity because of the controversy surrounding the Vietnam War. To give them something relevant to study, he began using examples from the Apollo program as applications of elementary physics. O'Neill posed the question during an extra seminar he gave to a few of his students: "Is the surface of a planet really the right place for an expanding technological civilization?" His students' research convinced him that the answer was no. 
O'Neill was inspired by the papers written by his students. He began to work out the details of a program to build self-supporting space habitats in free space. Among the details was how to provide the inhabitants of a space colony with an Earth-like environment. His students had designed giant pressurized structures, spun up to approximate Earth gravity by centrifugal force. With the population of the colony living on the inner surface of a sphere or cylinder, these structures resembled "inside-out planets". He found that pairing counter-rotating cylinders would eliminate the need to spin them using rockets. This configuration has since been known as the O'Neill cylinder. ### First paper (1970–1974) Looking for an outlet for his ideas, O'Neill wrote a paper titled "The Colonization of Space", and for four years attempted to have it published. He submitted it to several journals and magazines, including Scientific American and Science, only to have it rejected by the reviewers. During this time O'Neill gave lectures on space colonization at Hampshire College, Princeton, and other schools. The Hampshire lecture was facilitated by O'Leary, by now an assistant professor of astronomy and science policy assessment at the institution; in 1976, he joined O'Neill's research group at Princeton. Many students and staff attending the lectures became enthusiastic about the possibility of living in space. Another outlet for O'Neill to explore his ideas was with his children; on walks in the forest they speculated about life in a space colony. His paper finally appeared in the September 1974 issue of Physics Today. In it, he argued that building space colonies would solve several important problems: > It is important to realize the enormous power of the space-colonization technique. If we begin to use it soon enough, and if we employ it wisely, at least five of the most serious problems now facing the world can be solved without recourse to repression: bringing every human being up to a living standard now enjoyed only by the most fortunate; protecting the biosphere from damage caused by transportation and industrial pollution; finding high quality living space for a world population that is doubling every 35 years; finding clean, practical energy sources; preventing overload of Earth's heat balance. He explored the possibilities of flying gliders inside a space colony, finding that the enormous volume could support atmospheric thermals. He calculated that humanity could expand on this man-made frontier to 20,000 times its population. The initial colonies would be built at the Earth-Moon L<sub>4</sub> and L<sub>5</sub> Lagrangian points. L<sub>4</sub> and L<sub>5</sub> are stable points in the Solar System where a spacecraft can maintain its position without expending energy. The paper was well received, but many who would begin work on the project had already been introduced to his ideas before it was even published. The paper received a few critical responses. Some questioned the practicality of lifting tens of thousands of people into orbit and his estimates for the production output of initial colonies. While he was waiting for his paper to be published, O'Neill organized a small two-day conference in May 1974 at Princeton to discuss the possibility of colonizing outer space. The conference, titled First Conference on Space Colonization, was funded by Stewart Brand's Point Foundation and Princeton University.
Among those who attended were Eric Drexler (at the time a freshman at MIT), scientist-astronaut Joe Allen (from Astronaut Group 6), Freeman Dyson, and science reporter Walter Sullivan. Representatives from NASA also attended and brought estimates of launch costs expected on the planned Space Shuttle. O'Neill thought of the attendees as "a band of daring radicals". Sullivan's article on the conference was published on the front page of The New York Times on May 13, 1974. As media coverage grew, O'Neill was inundated with letters from people who were excited about living in space. To stay in touch with them, O'Neill began keeping a mailing list and started sending out updates on his progress. A few months later he heard Peter Glaser speak about solar power satellites at NASA's Goddard Space Flight Center. O'Neill realized that, by building these satellites, his space colonies could quickly recover the cost of their construction. According to O'Neill, "the profound difference between this and everything else done in space is the potential of generating large amounts of new wealth". ### NASA studies (1975–1977) O'Neill held a much larger conference the following May titled Princeton University Conference on Space Manufacturing. At this conference more than two dozen speakers presented papers, including Keith and Carolyn Henson from Tucson, Arizona. After the conference Carolyn Henson arranged a meeting between O'Neill and Arizona Congressman Mo Udall, then a leading contender for the 1976 Democratic presidential nomination. Udall wrote a letter of support, which he asked the Hensons to publicize, for O'Neill's work. The Hensons included his letter in the first issue of the L-5 Society newsletter, sent to everyone on O'Neill's mailing list and those who had signed up at the conference. In June 1975, O'Neill led a ten-week study of permanent space habitats at NASA Ames. During the study he was called away to testify on July 23 to the House Subcommittee on Space Science and Applications. On January 19, 1976, he also appeared before the Senate Subcommittee on Aerospace Technology and National Needs. In a presentation titled Solar Power from Satellites, he laid out his case for an Apollo-style program for building power plants in space. He returned to Ames in June 1976 and 1977 to lead studies on space manufacturing. In these studies, NASA developed detailed plans to establish bases on the Moon where space-suited workers would mine the mineral resources needed to build space colonies and solar power satellites. ### Private funding (1977–1978) Although NASA was supporting his work with grants of up to \$500,000 per year, O'Neill became frustrated by the bureaucracy and politics inherent in government-funded research. He thought that small privately funded groups could develop space technology faster than government agencies. In 1977, O'Neill and his wife Tasha founded the Space Studies Institute, a non-profit organization, at Princeton University. SSI received initial funding of almost \$100,000 from private donors, and in early 1978 began to support basic research into technologies needed for space manufacturing and settlement. One of SSI's first grants funded the development of the mass driver, a device first proposed by O'Neill in 1974. Mass drivers are based on the coilgun design, adapted to accelerate a non-magnetic object. One application O'Neill proposed for mass drivers was to throw baseball-sized chunks of ore mined from the surface of the Moon into space. 
Once in space, the ore could be used as raw material for building space colonies and solar power satellites. He took a sabbatical from Princeton to work on mass drivers at MIT. There he served as the Hunsaker Visiting Professor of Aerospace during the 1976–77 academic year. At MIT, he, Henry H. Kolm, and a group of student volunteers built their first mass driver prototype. The eight-foot (2.5 m) long prototype could apply 33 g (320 m/s<sup>2</sup>) of acceleration to an object inserted into it. With financial assistance from SSI, later prototypes improved this to 1,800 g (18,000 m/s<sup>2</sup>), enough acceleration that a mass driver only 520 feet (160 m) long could launch material off the surface of the Moon. ### Opposition (1977–1985) In 1977, O'Neill saw the peak of interest in space colonization, along with the publication of his first book, The High Frontier. He and his wife were flying between meetings, interviews, and hearings. On October 9, the CBS program 60 Minutes ran a segment about space colonies. Later they aired responses from the viewers, which included one from Senator William Proxmire, chairman of the Senate Subcommittee responsible for NASA's budget and an aggressive critic of wasteful government spending. His response was: "It's the best argument yet for chopping NASA's funding to the bone .... I say not a penny for this nutty fantasy". He successfully eliminated spending on space colonization research from the budget. In 1978, Paul Werbos wrote for the L-5 newsletter, "no one expects Congress to commit us to O'Neill's concept of large-scale space habitats; people in NASA are almost paranoid about the public relations aspects of the idea". When it became clear that a government-funded colonization effort was politically impossible, popular support for O'Neill's ideas started to evaporate. Other pressures on O'Neill's colonization plan were the high cost of access to Earth orbit and the declining cost of energy. Building solar power stations in space was economically attractive when energy prices spiked during the 1979 oil crisis. When prices dropped in the early 1980s, funding for space solar power research dried up. His plan had also been based on NASA's estimates for the flight rate and launch cost of the Space Shuttle, numbers that turned out to have been wildly optimistic. His 1977 book quoted a Space Shuttle launch cost of \$10 million, but in 1981 the subsidized price given to commercial customers started at \$38 million. A 1985 accounting of the full cost of a launch raised this as high as \$180 million per flight. O'Neill was appointed by United States President Ronald Reagan to the National Commission on Space in 1985. The commission, led by former NASA administrator Thomas Paine, proposed that the government commit to opening the inner Solar System for human settlement within 50 years. Their report was released in May 1986, four months after the Space Shuttle Challenger broke up on ascent. ## Writing career O'Neill's popular science book The High Frontier: Human Colonies in Space (1977) combined fictional accounts of space settlers with an explanation of his plan to build space colonies. Its publication established him as the spokesman for the space colonization movement. It won the Phi Beta Kappa Award in Science that year, and prompted Swarthmore College to grant him an honorary doctorate. The High Frontier has been translated into five languages and remained in print as of 2008.
His 1981 book 2081: A Hopeful View of the Human Future was an exercise in futurology. O'Neill narrated it as a visitor to Earth from a space colony beyond Pluto. The book explored the effects of technologies he called "drivers of change" on the coming century. Some technologies he described were space colonies, solar power satellites, anti-aging drugs, hydrogen-propelled cars, climate control, and underground magnetic trains. He left the social structure of the 1980s intact, assuming that humanity would remain unchanged even as it expanded into the Solar System. Reviews of 2081 were mixed. New York Times reviewer John Noble Wilford found the book "imagination-stirring", but Charles Nicol thought the technologies described were unacceptably far-fetched. In his book The Technology Edge, published in 1983, O'Neill wrote about economic competition with Japan. He argued that the United States had to develop six industries to compete: microengineering, robotics, genetic engineering, magnetic flight, family aircraft, and space science. He also thought that industrial development was suffering from short-sighted executives, self-interested unions, high taxes, and poor education of Americans. According to reviewer Henry Weil, O'Neill's detailed explanations of emerging technologies differentiated the book from others on the subject. ## Entrepreneurial efforts O'Neill founded Geostar Corporation to develop a satellite position determination system for which he was granted a patent in 1982. The system, primarily intended to track aircraft, was called Radio Determination Satellite Service (RDSS). In April 1983 Geostar applied to the FCC for a license to broadcast from three satellites, which would cover the entire United States. Geostar launched GSTAR-2 into geosynchronous orbit in 1986. Its transmitter package permanently failed two months later, so Geostar began tests of RDSS by transmitting from other satellites. With his health failing, O'Neill became less involved with the company at the same time it started to run into trouble. In February 1991 Geostar filed for bankruptcy and its licenses were sold to Motorola for the Iridium satellite constellation project. Although the system was eventually replaced by GPS, O'Neill made significant advances in the field of position determination. O'Neill founded O'Neill Communications in Princeton in 1986. He introduced his Local Area Wireless Networking, or LAWN, system at the PC Expo in New York in 1989. The LAWN system allowed two computers to exchange messages over a range of a couple hundred feet at a cost of about \$500 per node. O'Neill Communications went out of business in 1993; the LAWN technology was sold to Omnispread Communications. As of 2008, Omnispread continued to sell a variant of O'Neill's LAWN system. On November 18, 1991, O'Neill filed a patent application for a vactrain system. He called the company he wanted to form VSE International, for velocity, silence, and efficiency. However, the concept itself he called Magnetic Flight. The vehicles, instead of running on a pair of tracks, would be elevated using electromagnetic force by a single track within a tube (permanent magnets in the track, with variable magnets on the vehicle), and propelled by electromagnetic forces through tunnels. He estimated the trains could reach speeds of up to 2,500 mph (4,000 km/h) — about five times faster than a jet airliner — if the air was evacuated from the tunnels. 
To obtain such speeds, the vehicle would accelerate for the first half of the trip, and then decelerate for the second half of the trip. The acceleration was planned to be a maximum of about one-half of the force of gravity. O'Neill planned to build a network of stations connected by these tunnels, but he died two years before his first patent on it was granted. ## Death and legacy O'Neill was diagnosed with leukemia in 1985. He died on April 27, 1992, from complications of the disease at the Sequoia Hospital in Redwood City, California. He was survived by his wife Tasha, his ex-wife Sylvia, and his four children. A sample of his incinerated remains was buried in space. The Celestis vial containing his ashes was attached with vials of other Celestis participants to a Pegasus XL rocket and launched into Earth orbit on April 21, 1997. It re-entered the atmosphere in May 2002. O'Neill directed his Space Studies Institute to continue their efforts "until people are living and working in space". After his death, management of SSI was passed to his son Roger and colleague Freeman Dyson. SSI continued to hold conferences every other year to bring together scientists studying space colonization until 2001. O'Neill's work informs the company Blue Origin founded by Jeff Bezos, which wants to build the infrastructure for future space colonization. Henry Kolm went on to start Magplane Technology in the 1990s to develop the magnetic transportation technology that O'Neill had written about. In 2007, Magplane demonstrated a working magnetic pipeline system to transport phosphate ore in Florida. The system ran at a speed of 40 mph (65 km/h), far slower than the high-speed trains O'Neill envisioned. All three of the founders of the Space Frontier Foundation, an organization dedicated to opening the space frontier to human settlement, were supporters of O'Neill's ideas and had worked with him in various capacities at the Space Studies Institute. One of them, Rick Tumlinson, describes three men as models for space advocacy: Wernher von Braun, Gerard K. O'Neill, and Carl Sagan. Von Braun pushed for "projects that ordinary people can be proud of but not participate in". Sagan wanted to explore the universe from a distance. O'Neill, with his grand scheme for settlement of the Solar System, emphasized moving ordinary people off the Earth "en masse". The National Space Society (NSS) gives the Gerard K. O'Neill Memorial Award for Space Settlement Advocacy to individuals noted for their contributions in the area of space settlement. Their contributions can be scientific, legislative, and educational. The award is a trophy cast in the shape of a Bernal sphere. The NSS first bestowed the award in 2007 on lunar entrepreneur and former astronaut Harrison Schmitt. In 2008, it was given to physicist John Marburger. As of November, 2013, Gerard O'Neill's papers and work are now located in the archives at the Smithsonian National Air and Space Museum, Steven F. Udvar-Hazy Center. ## Publications ### Books ### Papers ## Patents O'Neill was granted six patents in total (two posthumously) in the areas of global position determination and magnetic levitation. 
- Satellite-based vehicle position determining system, granted November 16, 1982 - Satellite-based position determining and message transfer system with monitoring of link quality, granted May 10, 1988 - Position determination and message transfer system employing satellites and stored terrain map, granted June 13, 1989 - Position determination and message transfer system employing satellites and stored terrain map, granted October 23, 1990 - High speed transport system, granted February 1, 1994 - High speed transport system, granted July 18, 1995 ## See also - Konstantin Tsiolkovskii (1857–1935) wrote about humans living in space in the 1920s - J. D. Bernal (1901–1971) inventor of the Bernal sphere, a space habitat design - Rolf Wideröe (1902–1996) filed for a patent on a particle storage ring design during World War II - Krafft Ehricke (1917–1984) rocket engineer and space colonization advocate - John S. Lewis wrote about the resources of the Solar System in Mining the Sky - Marshall Savage, author of The Millennial Project: Colonizing the Galaxy in Eight Easy Steps - Spome - Space architecture - Space-based solar power
1,377,365
La Peau de chagrin
1,160,751,145
Novel by Honoré de Balzac
[ "1831 French novels", "Books of La Comédie humaine", "French fantasy novels", "French novels adapted into films", "French philosophical novels", "Novels by Honoré de Balzac", "Novels set in the 19th century" ]
La Peau de chagrin (The Skin of Shagreen), known in English as The Magic Skin and The Wild Ass's Skin, is an 1831 novel by French novelist and playwright Honoré de Balzac (1799–1850). Set in early 19th-century Paris, it tells the story of a young man who finds a magic piece of shagreen (untanned skin from a wild ass) that fulfills his every desire. For each wish granted, however, the skin shrinks and consumes a portion of his physical energy. La Peau de chagrin belongs to the Études philosophiques group of Balzac's sequence of novels, La Comédie humaine. Before the book was completed, Balzac created excitement about it by publishing a series of articles and story fragments in several Parisian journals. Although he was five months late in delivering the manuscript, he succeeded in generating sufficient interest that the novel sold out instantly upon its publication. A second edition, which included a series of twelve other "philosophical tales", was released one month later. Although the novel uses fantastic elements, its main focus is a realistic portrayal of the excesses of bourgeois materialism. Balzac's renowned attention to detail is used to describe a gambling house, an antique shop, a royal banquet, and other locales. He also includes details from his own life as a struggling writer, placing the main character in a home similar to the one he occupied at the start of his literary career. The central theme of La Peau de chagrin is the conflict between desire and longevity. The magic skin represents the owner's life-force, which is depleted through every expression of will, especially when it is employed for the acquisition of power. Ignoring a caution from the shopkeeper who offers him the skin, the protagonist greedily surrounds himself with wealth, only to find himself miserable and decrepit at the story's end. La Peau de chagrin firmly established Balzac as a writer of significance in France. His social circle widened significantly, and he was sought eagerly by publishers for future projects. The book served as the catalyst for a series of letters he exchanged with a Polish baroness named Ewelina Hańska, who later became his wife. It also inspired Giselher Klebe's opera Die tödlichen Wünsche. ## Background In 1830 Honoré de Balzac had only begun to achieve recognition as a writer. Although his parents had persuaded him to make the law his profession, he announced in 1819 that he wanted to become an author. His mother was distraught, but she and his father agreed to give him a small income, on the condition that he dedicate himself to writing, and deliver to them half of his gross income from any published work. After moving into a tiny room near the Bibliothèque de l'Arsenal in Paris, Balzac wrote for one year without success. Frustrated, he moved back to his family in the suburb of Villeparisis and borrowed money from his parents to pursue his literary ambitions further. He spent the next several years writing simple potboiler novels, which he published under a variety of pseudonyms. He shared some of his income from these with his parents, but by 1828 he still owed them 50,000 francs. He published for the first time under his own name in 1829. Les Chouans, a novel about royalist forces in Brittany, did not succeed commercially, but it made Balzac known in literary circles. He achieved a major success later the same year when he published La Physiologie du mariage, a treatise on the institution of marriage. 
Bolstered by its popularity, he added to his fame by publishing a variety of short stories and essays in the magazines Revue de Paris, La Caricature, and La Mode. He thus made connections in the publishing industry that later helped him to obtain reviews of his novels. At the time, French literary appetites for fantastic stories had been whetted by the 1829 translation of German writer E. T. A. Hoffmann's collection Fantastic Tales; the gothic fiction of England's Ann Radcliffe; and French author Jules Janin's 1829 novel L'Âne Mort et la Femme Guillotinée (The Dead Donkey and the Guillotined Woman). Although he planned a novel in the same tradition, Balzac disliked the term "fantastic", referring to it once as "the vulgar program of a genre in its first flush of newness, to be sure, but already too much worn by the mere abuse of the word". The politics and culture of France, meanwhile, were in upheaval. After reigning for six controversial years, King Charles X was forced to abdicate during the July Revolution of 1830. He was replaced by Louis-Philippe, who named himself "King of the French" (rather than the usual "King of France") in an attempt to distance himself from the Ancien Régime. The July Monarchy brought an entrenchment of bourgeois attitudes, in which Balzac saw disorganization and weak leadership. ## Writing and publication The title La Peau de chagrin first appeared in print on 9 December 1830, as a passing mention in an article Balzac wrote for La Caricature under the pseudonym Alfred Coudreux. His scrapbook includes the following note, probably written at the same time: "L'invention d'une peau qui représente la vie. Conte oriental." ("The invention of a skin that represents life. Oriental story.") One week later, he published a story fragment called "Le Dernier Napoléon" in La Caricature, under the name "Henri B...". In it, a young man loses his last Napoleon coin at a Parisian gambling house, then continues to the Pont Royal to drown himself. During this early stage, Balzac did not think much of the project. He referred to it as "a piece of thorough nonsense in the literary sense, but in which [the author] has sought to introduce certain of the situations in this hard life through which men of genius have passed before achieving anything". Before long, though, his opinion of the story improved. By January 1831 Balzac had generated enough interest in his idea to secure a contract with publishers Charles Gosselin and Urbain Canel. They agreed on 750 copies of an octavo edition, with a fee of 1,125 francs paid to the author upon receipt of the manuscript – by mid-February. Balzac delivered the novel in July. During the intervening months, however, he provided glimpses of his erratic progress. Two additional fragments appeared in May, part of a scheme to promote the book before its publication. "Une Débauche", published in the Revue des deux mondes, describes an orgiastic feast that features constant bantering and discussion from its bourgeois participants. The other fragment, "Le Suicide d'un poète", was printed in the Revue de Paris; it concerns the difficulties of a would-be poet as he tries to compensate for his lack of funds. Although the three fragments were not connected into a coherent narrative, Balzac was excerpting characters and scenes from his novel-in-progress. The novel's delayed publication was a result of Balzac's active social life. 
He spent many nights dining at the homes of friends, including novelist Eugène Sue and his mistress Olympe Pélissier, as well as the feminist writer George Sand and her lover Jules Sandeau. Balzac and Pélissier had a brief affair, and she became the first lover with whom he appeared in public. Eventually he removed himself from Paris by staying with friends in the suburbs, where he committed himself to finishing the work. In late spring he allowed Sand to read a nearly-finished manuscript; she enjoyed it and predicted it would do well. Finally, in August 1831, La Peau de chagrin: Conte philosophique was published in two volumes. It was a commercial success, and Balzac used his connections in the world of Parisian periodicals to have it reviewed widely. The book sold quickly, and by the end of the month another contract had been signed: Balzac would receive 4,000 francs to publish 1,200 additional copies. This second edition included a series of twelve other stories with fantastic elements, and was released under the title Romans et contes philosophiques (Philosophical Novels and Stories). A third edition, rearranged to fill four volumes, appeared in March 1833. ## Synopsis La Peau de chagrin consists of three sections: "Le Talisman" ("The Talisman"), "La Femme sans cœur" ("The Woman without a Heart"), and "L'Agonie" ("The Agony"). The first edition contained a "Preface" and a "Moralité", which were excised from subsequent versions. A two-page Epilogue appears at the end of the final section. "Le Talisman" begins with the plot of "Le Dernier Napoléon": A young man named Raphaël de Valentin wagers his last coin and loses, then proceeds to the river Seine to drown himself. On the way, however, he decides to enter an unusual shop and finds it filled with curiosities from around the world. The elderly shopkeeper leads him to a piece of shagreen hanging on the wall. It is inscribed with "Oriental" writing; the old man calls it "Sanskrit", but it is imprecise Arabic. The skin promises to fulfill any wish of its owner, shrinking slightly upon the fulfillment of each desire. The shopkeeper is willing to let Valentin take it without charge, but urges him not to accept the offer. Valentin waves away the shopkeeper's warnings and takes the skin, wishing for a royal banquet, filled with wine, women, and friends. He is immediately met by acquaintances who invite him to such an event; they spend hours eating, drinking, and talking. Part two, "La Femme sans cœur", is narrated as a flashback from Valentin's point of view. He complains to his friend Émile about his early days as a scholar, living in poverty with an elderly landlord and her daughter Pauline, while trying fruitlessly to win the heart of a beautiful but aloof woman named Foedora. Along the way he is tutored by an older man named Eugène de Rastignac, who encourages him to immerse himself in the world of high society. Benefiting from the kindness of his landladies, Valentin maneuvers his way into Foedora's circle of friends. Unable to win her affection, however, he becomes the miserable and destitute man found at the start of "Le Talisman". "L'Agonie" begins several years after the feast of parts one and two. Valentin, having used the talisman to secure a large income, finds both the skin and his health dwindling. He tries to break the curse by getting rid of the skin, but fails. The situation causes him to panic, horrified that further desires will hasten the end of his life. 
He organizes his home to avoid the possibility of wishing for anything: his servant, Jonathan, arranges food, clothing, and visitors with precise regularity. Events beyond his control cause him to wish for various things, however, and the skin continues to recede. Desperate, the sickly Valentin tries to find some way of stretching the skin, and takes a trip to the spa town of Aix-les-Bains in the hope of recovering his vitality. With the skin no larger than a periwinkle leaf, he is visited by Pauline in his room; she expresses her love for him. When she learns the truth about the shagreen and her role in Raphaël's demise, she is horrified. Raphaël cannot control his desire for her and she rushes into an adjoining room to escape him and so save his life. He pounds on the door and declares both his love and his desire to die in her arms. She, meanwhile, is trying to kill herself to free him from his desire. He breaks down the door, they consummate their love in a fiery moment of passion, and he dies. ## Style Although Balzac preferred the term "philosophical", his novel is based upon a fantastic premise. The skin grants a world of possibility to Valentin, and he uses it to satisfy many desires. Pressured into a duel, for example, he explains how he need neither avoid his opponent's gunshot nor aim his own weapon; the outcome is inevitable. He fires without care, and kills the other man instantly. Elsewhere, the supernatural qualities of the skin are demonstrated when it resists the efforts of a chemist and a physicist to stretch it. This inclusion of the fantastic, however, is mostly a framework by which the author discusses human nature and society. One critic suggests that "the story would be much the same without it". Balzac had used supernatural elements in the potboiler novels he published under noms de plume, but their presence in La Peau de chagrin signaled a turning point in his approach to the use of symbolism. Whereas he had used fantastic objects and events in earlier works, they were mostly simple plot points or uncomplicated devices for suspense. With La Peau de chagrin, on the other hand, the talisman represents Valentin's soul; at the same time, his demise is symbolic of a greater social decline. Balzac's real foci in the 1831 novel are the power of human desire and the nature of society after the July Revolution. French writer and critic Félicien Marceau even suggests that the symbolism in the novel allows a purer analysis than the individual case studies of other Balzac novels; by removing the analysis to an abstract level, it becomes less complicated by variations of individual personality. As an everyman, Valentin displays the essential characteristics of human nature, not a particular person's approach to the dilemma offered by the skin. In his Preface to the novel's first edition, Balzac meditates on the usefulness of fantastic elements: "[Writers] invent the true, by analogy, or they see the object to be described, whether the object comes to them or they go toward the object ... Have men the power to bring the universe into their brain, or is their brain a talisman with which they abolish the laws of time and space?" Critics agree that Balzac's goal in La Peau de chagrin was the former. ### Realism The novel is widely cited as an important early example of the realism for which Balzac became famous. Descriptions of Paris are one example: the novel is filled with actual locations, including the Palais Royal and the Notre Dame Cathedral. 
The narration and characters allude repeatedly to art and culture, from Gioachino Rossini's opera Tancredi to the statue of Venus de Milo. The book's third paragraph contains a long description of the process and purpose behind the ritual in gambling houses whereby "the law despoils you of your hat at the outset." The atmosphere of the establishment is described in precise detail, from the faces of the players to the "greasy" wallpaper and the tablecloth "worn by the friction of gold". The emphasis on money evoked in the first pages – and its contrast with the decrepit surroundings – mirrors the novel's themes of social organization and economic materialism. The confluence of realist detail with symbolic meaning continues when Valentin enters the antique shop; the store represents the planet itself. As he wanders about, he tours the world through the relics of its various epochs: "Every land of earth seemed to have contributed some stray fragment of its learning, some example of its art." The shop contains a painting of Napoleon; a Moorish yataghan; an idol of the Tartars; portraits of Dutch burgomasters; a bust of Cicero; an Ancient Egyptian mummy; an Etruscan vase; a Chinese dragon; and hundreds of other objects. The panorama of human activity reaches a moral fork in the road when the shopkeeper leads Valentin to Raphael's portrait of Jesus Christ. It does not deter him from his goal, however; only when he finds the skin does Valentin decide to abort his suicidal mission. In doing so, he demonstrates humanity favoring ego over divine salvation. ### Opening image At the start of the novel, Balzac includes an image from Laurence Sterne's 1759 novel Tristram Shandy: a curvy line drawn in the air by a character seeking to express the freedom enjoyed "whilst a man is free". Balzac never explained his purpose behind the use of the symbol, and its significance to La Peau de chagrin is the subject of debate. In his comprehensive review of La Comédie humaine, Herbert J. Hunt connects the "serpentine squiggle" to the "sinuous design" of Balzac's novel. Critic Martin Kanes, however, suggests that the image symbolizes the inability of language to express an idea fully. This dilemma, he proposes, is directly related to the conflict between will and knowledge indicated by the shopkeeper at the start of the novel. ## Themes ### Autobiography Balzac mined his own life for details in the first parts of La Peau de chagrin, and he likely modeled the protagonist Raphaël de Valentin on himself. Details recounted by Valentin of his impoverished living quarters are autobiographical allusions to Balzac's earliest days as an author: "Nothing could be uglier than this garret, awaiting its scholar, with its dingy yellow walls and odor of poverty. The roofing fell in a steep slope, and the sky was visible through chinks in the tiles. There was room for a bed, a table, and a few chairs, and beneath the highest point of the roof my piano could stand." Although they allow for a degree of embellishment, biographers and critics agree that Balzac was drawing from his own experience. Other parts of the story also derive from the author's life: Balzac once attended a feast held by the Marquis de Las Marismas, who planned to launch a newspaper – the same situation in which Valentin finds himself after expressing his first wish to the talisman. Later, Valentin visits the opera armed with a powerful set of glasses that allow him to observe every flaw in the women on stage (to guard against desire). 
These may also have been drawn from Balzac's experience, as he once wrote in a letter about a set of "divine" opera glasses he ordered from the Paris Observatory. More significant is the connection between the women in the novel and the women in Balzac's life. Some critics have noted important similarities between Valentin's efforts to win the heart of Foedora and Balzac's infatuation with Olympe Pélissier. A scene in which Valentin hides in Foedora's bedroom to watch her undress is said to come from a similar situation wherein Balzac secretly observed Pélissier. It is probable that Pélissier was not the model for Foedora, however, since she accepted Balzac's advances and wrote him friendly letters; Foedora, by contrast, declares herself outside the reach of any interested lover. Critics agree that the "Woman without a Heart" described in the novel is a composite of other women Balzac knew. The character of Pauline, meanwhile, was likely influenced by another of Balzac's mistresses, Laure de Berny. ### Vouloir, pouvoir, and savoir At the start of the book, the shopkeeper discusses with Valentin "the great secret of human life". It consists of three words, which Balzac renders in capital letters: VOULOIR ("to will"), POUVOIR ("to be able"), and SAVOIR ("to know"). Will, he explains, consumes us; power (or, in one translation, "to have your will") destroys us; and knowledge soothes us. These three concepts form the philosophical foundation of the novel. The talisman connects these precepts to the theory of vitalism; it physically represents the life force of its owner, and is reduced with each exercise of the will. The shopkeeper tries to warn Valentin that the wisest path lies not in exercising his will or securing power, but in developing the mind. "What is folly", he asks Valentin, "if not an excess of will and power?" Overcome with the possibilities offered by the skin, however, the young man throws caution to the wind and embraces his desire. Upon grabbing the talisman, he declares: "I want to live with excess." Only when his life force is nearly depleted does he recognize his mistake: "It suddenly struck him that the possession of power, no matter how enormous, did not bring with it the knowledge of how to use it ... [he] had had everything in his power, and he had done nothing." The will, Balzac cautions, is a destructive force that seeks only to acquire power unless tempered by knowledge. The shopkeeper presents a foil for Valentin's future self, offering study and mental development as an alternative to consuming desire. Foedora also serves as a model for resistance to the corruption of will, insofar as she seeks at all times to excite desire in others while never giving in to her own. That Valentin is happiest living in the material squalor of his tiny garret – lost in study and writing, with the good-hearted Pauline giving herself to him – underscores the irony of his misery at the end of the book, when he is surrounded with the fruits of his material desire. 
The lust for social status to which Valentin is led by Rastignac is emblematic of this excess; the gorgeous but unattainable Foedora symbolizes the pleasures offered by high society. Science offers no panacea. In one scene, a group of doctors offer a range of quickly formulated opinions as to the cause of Valentin's feebleness. In another, a physicist and a chemist admit defeat after employing a range of tactics designed to stretch the skin. All of these scientific approaches lack an understanding of the true crisis, and are therefore doomed to fail. Although it is only shown in glimpses – the image of Christ, for example, painted by Valentin's namesake, the Renaissance artist Raphael – Balzac wished to remind readers that Christianity offered the potential to temper deadly excess. After failing in their efforts to stretch the skin, the chemist declares: "I believe in the devil"; "And I in God", replies the physicist. The corruption of excess is related to social disorganization in a description at the start of the final section. Physically feeble though living in absolute luxury, Raphaël de Valentin is described as retaining in his eyes "an extraordinary intelligence" with which he is able to see "everything at once": > That expression was painful to see ... It was the inscrutable glance of helplessness that must perforce consign its desires to the depths of its own heart; or of a miser enjoying in imagination all the pleasures that his money could procure for him, while he declines to lessen his hoard; the look of a bound Prometheus, of the fallen Napoleon of 1815, when he learned at the Elysee the strategical blunder that his enemies had made, and asked for twenty-four hours of command in vain ... ## Reception and legacy The novel sold out immediately after going on sale, and was reviewed in every major Parisian newspaper and magazine. In some cases Balzac wrote the reviews himself; using the name "Comte Alex de B—", he announced that the book proved he had achieved "the stature of genius". Independent reviews were less sweeping, but also very positive. Poet Émile Deschamps praised the rhythm of the novel, and the religious commentator Charles Forbes René de Montalembert indicated approvingly that it highlighted the need for more spirituality in society as a whole. Although some critics chastised Balzac for reveling in negativity, others felt it simply reflected the condition of French society. German writer Johann Wolfgang von Goethe declared it a shining example of the "incurable corruption of the French nation". Critics argue about whether Goethe's comments were praise for the novel or not. This storm of publicity caused a flurry of activity as readers around France scrambled to obtain the novel. Balzac's friend and La Caricature editor Charles Philipon wrote to the author one week after publication: "there is no getting hold of La Peau de chagrin. Grandville had to stop everything to read it, because the librarian sent round every half-hour to ask if he had finished." Friends near and far wrote to Balzac indicating their similar difficulties in locating copies. The second edition was released one month later, and it was followed by parodies and derivative works from other writers. Balzac's friend Théophile Gautier included a comical homage in his 1833 story collection Les Jeunes-France when, during a recreation of the feast from Balzac's novel, a character says: "This is the point at which I'm supposed to pour wine down my waistcoat ... 
It says so in black and white on page 171 of La Peau de chagrin ... And this is where I have to toss a 100-sou coin in the air to see whether or not there's a God." The novel established Balzac as a prominent figure in the world of French literature. Publishers fought among themselves to publish his future work, and he became a mainstay on invitation lists for social functions around Paris. Balzac took pride in his novel's success, and declared to the editor of the journal L'Avenir that "Elle est donc le point de départ de mon ouvrage" ("This is the point of departure for my body of work"). Consistently popular even after his death, La Peau de chagrin was republished nineteen times between 1850 and 1880. When he developed his scheme for organizing all of his novels and stories into a single sequence called La Comédie humaine, Balzac placed La Peau de chagrin at the start of the section called Études philosophiques ("Philosophical Studies"). Like the other works in this category – including the similarly autobiographical Louis Lambert (1832) – it deals with philosophy and the supernatural. But it also provides a bridge to the realism of the Études des mœurs ("Study of Manners"), where the majority of his novels were located. ### L'Étrangère The popularity of the novel extended to Ukraine, where a baroness named Ewelina Hańska read about Balzac's novels in newspapers she received from Paris. Intrigued, she ordered copies of his work, and she read them with her cousins and friends around Volhynia. They were impressed by the understanding he showed toward women in La Physiologie du mariage, but felt that La Peau de chagrin portrayed them in a cruel and unforgiving light. Hańska wrote a letter to Balzac, signed it as L'Étrangère ("The Stranger"), and mailed it from Odessa on 28 February 1832. With no return address, Balzac was left to reply in the Gazette de France, with the hope that she would see the notice. She did not, but wrote again in November: "Your soul embraces centuries, monsieur; its philosophical concepts appear to be the fruit of long study matured by time; yet I am told you are still young. I would like to know you, but feel I have no need to do so. I know you through my own spiritual instinct; I picture you in my own way, and feel that if I were to actually set eyes upon you, I should instantly exclaim, 'That is he!'" Eventually she revealed herself to him, and they began a correspondence that lasted for fifteen years. Although she remained faithful to her husband Wacław, Mme. Hańska and Balzac enjoyed an emotional intimacy through their letters. When the baron died in 1841, the French author began to pursue the relationship outside the written page. They wed in the town of Berdychiv on 14 March 1850, five months before he died. ### Recurring characters Because it was among the first novels he released under his own name, Balzac did not use characters in La Peau de chagrin from previous works. He did, however, introduce several individuals who resurfaced in later stories. Most significant of these is Eugène de Rastignac, the older gentleman who tutors Valentin in the vicious ways of high society. Thirty pages into the writing of his 1834 novel Le Père Goriot, Balzac suddenly crossed out the name he had been using for a character – Massiac – and used Rastignac instead. The relationship between teacher and student in La Peau de chagrin is mirrored in Le Père Goriot, when the young Rastignac is guided in the ways of social realpolitik by the incognito criminal Vautrin. 
Balzac used the character Foedora in three other stories, but eventually wrote her out of them after deciding on other models for social femininity. In later editions of La Peau de chagrin, he changed the text to name one of the bankers "Taillefer", whom he had introduced in L'Auberge rouge (1831). He also used the name Horace Bianchon for one of the doctors, thus connecting the book to the famous physician who appears in thirty-one stories in La Comédie humaine. So vividly had the doctor been rendered that Balzac himself called out for Bianchon while lying on his deathbed. The use of recurring characters lends Balzac's work a cohesion and atmosphere unlike any other series of novels. It enables a depth of characterization that goes beyond simple narration or dialogue. "When the characters reappear", notes the critic Samuel Rogers, "they do not step out of nowhere; they emerge from the privacy of their own lives which, for an interval, we have not been allowed to see." Although the complexity of these characters' lives inevitably led Balzac to make errors of chronology and consistency, the mistakes are considered minor in the overall scope of the project. Readers are more often troubled by the sheer number of people in Balzac's world, and feel deprived of important context for the characters. Detective novelist Arthur Conan Doyle said that he never tried to read Balzac, because he "did not know where to begin". ### Influence Balzac's novel was adapted for the libretto of Giselher Klebe's 1959 opera Die tödlichen Wünsche (The Deadly Wishes). In 1977–1978 the German composer Fritz Geißler composed Das Chagrinleder after a libretto by Günther Deicke. In 1989–1990 the Russian composer Yuri Khanon wrote the ballet L'Os de chagrin (The Shagreen Bone), based on Balzac's text, which included an opera-interlude of the same name. In 1992 a biographic pseudo-documentary in the form of an opera-film based on his opera L'Os de chagrin («Chagrenevaia Kost») was released. The novel has also been cited as a possible influence on Oscar Wilde for his 1890 novel The Picture of Dorian Gray, although this hypothesis is rejected by most scholars. The protagonist, Dorian Gray, acquires a magical portrait that ages while he remains forever youthful. Russian literature specialist Priscilla Meyer maintains in her book How the Russians Read the French that both La Peau de chagrin and Le Père Goriot were extensively paralleled, subverted, and inverted by Dostoevsky in Crime and Punishment. The story was first adapted into a 1909 French silent film entitled The Wild Ass's Skin, directed by Albert Capellani, written by Michel Carré and starring Henri Desfontaines, which, despite its brief 19-minute running time, was formatted into three acts. In 1915, American director Richard Ridgely made a film adaptation of Balzac's novel entitled The Magic Skin for Thomas A. Edison, Inc. The 50-minute film starred Mabel Trunnelle, Bigelow Cooper, and Everett Butterfield, and diluted the supernatural aspects of the story by revealing it all to be a dream. In 1920, it was adapted again as a 54-minute British silent film called Desire (aka The Magic Skin), written and directed by George Edwardes-Hall, and starring Dennis Neilson-Terry, Yvonne Arnaud and Christine Maitland. George D. Baker directed yet another version of the story, a 1923 American silent film called Slave of Desire starring George Walsh and Bessie Love. 
In 1960 Croatian animator Vladimir Kristl made an animated short entitled Šagrenska koža (The Piece of Shagreen Leather) inspired by Balzac's novel. It was adapted for French television in 1980, with Marc Delsaert, Catriona MacColl, Anne Caudry, Richard Fontana and Alain Cuny. In 2010, a French and Belgian television production featured Thomas Coumans, Mylène Jampanoï, Jean-Paul Dubois, Julien Honoré, Jean-Pierre Marielle and Annabelle Hettmann. Toward the end of his life, Austrian psychoanalyst Sigmund Freud felt a special connection to Balzac's novel, since he believed that his world was shrinking like Valentin's talisman. Diagnosed with a fatal tumor, Freud resolved to commit suicide. After re-reading La Peau de chagrin, he said to his doctor: "This was the proper book for me to read; it deals with shrinking and starvation." The next day, his doctor administered a lethal dose of morphine, and Freud died. In 2011 French director Marianne Badrichani staged an adaptation of La Peau de Chagrin in London's Holland Park.
1,171,869
Central London Railway
1,134,534,066
Underground railway company in London
[ "1889 establishments in England", "Predecessor companies of the London Underground", "Railway companies disestablished in 1933", "Railway companies established in 1889", "Transport in the City of London", "Transport in the City of Westminster", "Transport in the London Borough of Camden", "Transport in the London Borough of Ealing", "Transport in the London Borough of Hammersmith and Fulham", "Transport in the London Borough of Hounslow", "Transport in the London Borough of Richmond upon Thames", "Transport in the Royal Borough of Kensington and Chelsea", "Underground Electric Railways Company of London" ]
The Central London Railway (CLR), also known as the Twopenny Tube, was a deep-level, underground "tube" railway that opened in London in 1900. The CLR's tunnels and stations form the central section of the London Underground's Central line. The railway company was established in 1889; funding for construction was obtained in 1895 through a syndicate of financiers, and work took place from 1896 to 1900. When opened, the CLR served 13 stations and ran completely underground in a pair of tunnels for 9.14 kilometres (5.68 mi) between its western terminus at Shepherd's Bush and its eastern terminus at the Bank of England, with a depot and power station to the north of the western terminus. After a rejected proposal to turn the line into a loop, it was extended at the western end to Wood Lane in 1908 and at the eastern end to Liverpool Street station in 1912. In 1920, it was extended along a Great Western Railway line to Ealing to serve a total distance of 17.57 kilometres (10.92 mi). After initially making good returns for investors, the CLR suffered a decline in passenger numbers due to increased competition from other underground railway lines and new motorised buses. In 1913, it was taken over by the Underground Electric Railways Company of London (UERL), operator of the majority of London's underground railways. In 1933 the CLR was taken into public ownership along with the UERL. ## Establishment ### Origin, 1889–1892 In November 1889, the CLR published a notice of a private bill that would be presented to Parliament for the 1890 parliamentary session. The bill proposed an underground electric railway running from the junction of Queen's Road (now Queensway) and Bayswater Road in Bayswater to King William Street in the City of London with a connection to the then under-construction City and South London Railway (C&SLR) at Arthur Street West. The CLR was to run in a pair of tunnels under Bayswater Road, Oxford Street, New Oxford Street, High Holborn, Holborn, Holborn Viaduct, Newgate Street, Cheapside, and Poultry. Stations were planned at Queen's Road, Stanhope Terrace, Marble Arch, Oxford Circus, Tottenham Court Road, Southampton Row, Holborn Circus, St. Martin's Le Grand and King William Street. The tunnels were to be 11 feet (3.35 m) in diameter, constructed with a tunnelling shield, and would be lined with cast iron segments. At stations, the tunnel diameter would be 22 feet (6.71 m) or 29 feet (8.84 m) depending on layout. A depot and power station were to be constructed on a 1.5-acre (0.61 ha) site on the west side of Queen's Road. Hydraulic lifts from the street to the platforms were to be provided at each station. The proposals faced strong objections from the Metropolitan and District railways (MR and DR), whose routes on the Inner Circle, to the north and the south respectively, the CLR route paralleled and from which the new line was expected to take passengers. The City Corporation also objected, concerned about potential damage to buildings close to the route caused by subsidence, as had been experienced during the construction of the C&SLR. The Dean and Chapter of St Paul's Cathedral objected, concerned about the risks of undermining the cathedral's foundations. Sir Joseph Bazalgette objected that the tunnels would damage the city's sewer system. The bill was approved by the House of Commons, but was rejected by the House of Lords, which recommended that any decision be postponed until after the C&SLR had opened and its operation could be assessed. 
In November 1890, with the C&SLR about to start operating, the CLR announced a new bill for the 1891 parliamentary session. The route was extended at the western end to run under Notting Hill High Street (now Notting Hill Gate) and Holland Park Avenue to end at the eastern corner of Shepherd's Bush Green, with the depot and power station site relocated to be north of the terminus on the east side of Wood Lane. The westward extension of the route was inspired by the route of abandoned plans for the London Central Subway, a sub-surface railway that was briefly proposed in early 1890 to run directly below the roadway on a similar route to the CLR. The eastern terminus was changed to Cornhill and the proposed Southampton Row station was replaced by one in Bloomsbury. Intermediate stations were added at Lansdowne Road, Notting Hill Gate, Davies Street (which the CLR planned to extend northwards to meet Oxford Street) and at Chancery Lane. The earlier plan to connect to the C&SLR was dropped and the diameter of the CLR's tunnels was increased to 11 feet 6 inches (3.51 m). This time the bill was approved by both Houses of Parliament and received royal assent on 5 August 1891 as the Central London Railway Act, 1891. In November 1891, the CLR publicised another bill. The eastern end of the line was re-routed north-east and extended to end under the Great Eastern Railway's (GER's) terminus at Liverpool Street station with the Cornhill terminus dropped and a new station proposed at the Royal Exchange. The proposals received assent as the Central London Railway Act 1892 on 28 June 1892. The money to build the CLR was obtained through a syndicate of financiers including Ernest Cassel, Henry Oppenheim, Darius Ogden Mills, and members of the Rothschild family. On 22 March 1894, the syndicate incorporated a contractor to construct the railway, the Electric Traction Company Limited (ETCL), which agreed a construction cost of £2,544,000 plus £700,000 in 4 per cent debenture stock. When the syndicate offered 285,000 CLR company shares for sale at £10 each in June 1895, only 14 per cent was bought by the British public, which was cautious of such investments following failures of similar railway schemes. ### Construction, 1896–1900 To design the railway, the CLR employed the engineers James Henry Greathead, Sir John Fowler, and Sir Benjamin Baker. Greathead had been the engineer for the Tower Subway and the C&SLR, and had developed the tunnelling shield used to excavate those companies' tunnels under the River Thames. Fowler had been the engineer on the Metropolitan Railway, the world's first underground railway, which opened in 1863, and Baker had worked on New York's elevated railways and on the Forth Railway Bridge with Fowler. Greathead died shortly after work began and was replaced by Basil Mott, his assistant during the construction of the C&SLR. Like most legislation of its kind, the act of 1891 imposed a time limit for the compulsory purchase of land and the raising of capital. The original date specified for completion of construction was the end of 1896, but the time required to raise the finance and purchase station sites meant that construction had not begun by the start of that year. To give itself extra time, the CLR had obtained an extension to 1899 through the Central London Railway Act, 1894. 
Construction works were let by the ETCL as three sub-contracts: Shepherd's Bush to Marble Arch, Marble Arch to St Martin's Le Grand and St Martin's Le Grand to Bank. Work began with demolition of buildings at the Chancery Lane site in April 1896 and construction shafts were started at Chancery Lane, Shepherd's Bush, Stanhope Terrace and Bloomsbury in August and September 1896. Negotiations with the GER for the works under Liverpool Street station were unsuccessful, and the final section beyond Bank was only constructed for a short distance as sidings. To minimise the risk of subsidence, the routing of the tunnels followed the roads on the surface and avoided passing under buildings. Usually the tunnels were bored side by side 60–110 feet (18–34 m) below the surface, but where a road was too narrow to allow this, the tunnels were aligned one above the other, so that a number of stations have platforms at different levels. To assist with the deceleration of trains arriving at stations and the acceleration of trains leaving, station tunnels were located at the tops of slight inclines. Tunnelling was completed by the end of 1898, and, because a planned concrete lining to the cast iron tunnel rings was not installed, the internal diameter of the tunnels was generally 11 feet 8.25 inches (3.56 m). For Bank station, the CLR negotiated permission with the City Corporation to construct its ticket hall beneath a steel framework under the roadway and pavements at the junction of Threadneedle Street and Cornhill. This involved diverting pipework and cables into ducts beneath the subways linking the ticket hall to the street. Delays on this work were so costly that they nearly bankrupted the company. A further extension of time to 1900 was obtained through the Central London Railway Act, 1899. Apart from Bank, which was completely below ground, all stations had buildings designed by Harry Bell Measures. They were single-storey structures to allow for future commercial development above and had elevations faced in beige terracotta. Each station had lifts manufactured by the Sprague Electric Company in New York. The lifts were provided in a variety of sizes and configurations to suit the passenger flow at each station. Generally they operated in sets of two or three in a shared shaft. Station tunnel walls were finished in plain white ceramic tiles and lit by electric arc lamps. The electricity to run the trains and the stations was supplied from the power station at Wood Lane at 5,000V AC which was converted at sub-stations along the route to 550V DC to power the trains via a third rail system. ## Opening The official opening of the CLR by the Prince of Wales took place on 27 June 1900, one day before the time limit of the 1899 Act, although the line did not open to the public until 30 July 1900. The railway had stations at: - Shepherd's Bush - Holland Park - Notting Hill Gate - Queen's Road (now Queensway) - Lancaster Gate - Marble Arch - Bond Street (opened 24 September 1900) - Oxford Circus - Tottenham Court Road - British Museum (closed 1933) - Chancery Lane - Post Office (now St. Paul's) - Bank The CLR charged a flat fare of two pence for a journey between any two stations, leading the Daily Mail to give the railway the nickname of the Twopenny Tube in August 1900. The service was very popular, and, by the end of 1900, the railway had carried 14,916,922 passengers. 
By attracting passengers from the bus services along its route and from the slower, steam-hauled MR and DR services, the CLR achieved passenger numbers around 45 million per year in the first few years of operation, generating a turnover that was more than twice its expenses. From 1900 to 1905, the company paid a dividend of 4 per cent to investors. ## Rolling stock Greathead had originally planned for the trains to be hauled by a pair of small electric locomotives, one at each end of a train, but the Board of Trade rejected this proposal and a larger locomotive was designed that could pull up to seven carriages on its own. Twenty-eight locomotives were manufactured in America by the General Electric Company (of which syndicate member Darius Ogden Mills was a director) and assembled in the Wood Lane depot. A fleet of 168 carriages was manufactured by the Ashbury Railway Carriage and Iron Company and the Brush Electrical Engineering Company. Passengers boarded and left the trains through folding lattice gates at each end of the carriages; these gates were operated by guards who rode on an outside platform. The CLR had originally intended to have two classes of travel, but dropped the plan before opening, although its carriages were built with different qualities of interior fittings for this purpose. Soon after the railway opened, complaints about vibrations from passing trains began to be made by occupiers of buildings along the route. The vibrations were caused by the heavy, largely unsprung locomotives, which weighed 44 tons (44.7 tonnes). The Board of Trade set up a committee to investigate the problem, and the CLR experimented with two solutions. For the first solution, three locomotives were modified to use lighter motors and were provided with improved suspension, so the weight was reduced to 31 tons (31.5 tonnes), more of which was sprung to reduce vibrations. For the second solution, two six-carriage trains were formed in which the two end carriages were converted and provided with driver's cabs and their own motors so they could run as multiple units without a separate locomotive. The lighter locomotives did reduce the vibrations felt at the surface, but the multiple units removed them almost completely and the CLR chose to adopt that solution. The committee's report, published in 1902, also found that the CLR's choice of 100 lb/yard (49.60 kg/m) bridge rail for its tracks rather than a stiffer bullhead rail on cross sleepers contributed to the vibration. Following the report, the CLR purchased 64 driving motor carriages for use with the existing stock; together, these were formed into six- or seven-carriage trains. The change to multiple unit operation was completed by June 1903 and all but two of the locomotives were scrapped. Those two were retained for shunting use in the depot. ## Extensions ### Reversing loops, 1901 The CLR's ability to manage its high passenger numbers was constrained by the service interval that it could achieve between trains. This was directly related to the time taken to turn around trains at the termini. At the end of a journey, a locomotive had to be disconnected from the leading end of the train and run around to the rear, where it was reconnected before proceeding in the opposite direction, an exercise that took a minimum of two and a half minutes. Seeking to shorten this interval, the CLR published a bill in November 1900 for the 1901 parliamentary session. 
The bill requested permission to construct loops at each end of the line so that trains could be turned around without disconnecting the locomotive. The loop at the western end was planned to run anti-clockwise under the three sides of Shepherd's Bush Green. For the eastern loop the alternatives were a loop under Liverpool Street station or a larger loop running under Threadneedle Street, Old Broad Street, Liverpool Street, Bishopsgate and returning to Threadneedle Street. The estimated cost of the loops was £800,000, most of which was for the eastern loop with its costly wayleaves. The CLR bill was one of more than a dozen tube railway bills submitted to Parliament for the 1901 session. To review the bills on an equal basis, Parliament established a joint committee under Lord Windsor, but by the time the committee had produced its report, the parliamentary session was almost over and the promoters of the bills were asked to resubmit them for the following 1902 session. Among the committee's recommendations were the withdrawal of the CLR's City loop and the conclusion that a quick tube route from Hammersmith to the City of London would benefit London's commuters. ### Loop line, 1902–1905 Rather than resubmit its 1901 bill, the CLR presented a much more ambitious alternative for the 1902 parliamentary session. The reversing loops were dropped, and the CLR instead proposed to turn the whole railway into a single large loop by constructing a new southern route between the two existing end points, adopting the committee's recommendation for a Hammersmith to City route. At the western end, new tunnels were to be extended from the dead-end reversing siding west of Shepherd's Bush station and from the depot access tunnel. The route was to pass under Shepherd's Bush Green and run under Goldhawk Road as far as Hammersmith Grove where it was to turn south. At the southern end of Hammersmith Grove a station was to be provided on the corner of Brook Green Road (now Shepherd's Bush Road) to provide an interchange with the three stations already located there. From Hammersmith, the CLR's route was to turn eastwards and run under Hammersmith Road and Kensington High Street with interchange stations at the DR's Addison Road (now Kensington Olympia) and High Street Kensington stations. From Kensington High Street, the route was to run along the south side of Kensington Gardens beneath Kensington Road, Kensington Gore and Knightsbridge. Stations were to be constructed at the Royal Albert Hall and the junction of Knightsbridge and Sloane Street, where the Brompton & Piccadilly Circus Railway (B&PCR) already had permission to build a station. From Sloane Street, the CLR's proposed route ran below that approved for the B&PCR under the eastern portion of Knightsbridge, under Hyde Park Corner and along Piccadilly to Piccadilly Circus. At Hyde Park Corner, a CLR station was to be sited close to the B&PCR's station, and the CLR's next station at St James's Street was to be a short distance to the east of the B&PCR's planned Dover Street station. At Piccadilly Circus, the CLR planned an interchange with the partially completed station of the stalled Baker Street and Waterloo Railway. The CLR route was then to turn south-east beneath Leicester Square to a station at Charing Cross and then north-east under Strand to Norfolk Street to interchange with the planned terminus of the Great Northern & Strand Railway. 
The route was then to continue east under Fleet Street to Ludgate Circus for an interchange with the South Eastern and Chatham Railway's (SECR's) Ludgate Hill station, then south under New Bridge Street, and east into Queen Victoria Street where a station was planned to connect to the District Railway's Mansion House station. The route was then to continue under Queen Victoria Street to reach the CLR's station at Bank, where separate platforms below the existing ones were to be provided. The final section of the route built on the loop proposed the year before, with tunnels winding under the city's narrow, twisting streets. The tunnels were to run east, one below the other, beneath Cornhill and Leadenhall Street, north under St Mary Axe and west to Liverpool Street station, then south under Blomfield Street, east under Great Winchester Street, south under Austin Friars and Old Broad Street and west under Threadneedle Street where the tunnels were to connect with the existing sidings back into Bank. Two stations were to be provided on the loop: at the south end of St Mary Axe and at Liverpool Street station. To accommodate the additional rolling stock needed to operate the longer line, the depot was to be extended northwards. The power station was also to be enlarged to increase the electricity supply. The CLR estimated that its plan would cost £3,781,000: £2,110,000 for construction, £873,000 for land and £798,000 for electrical equipment and trains. The CLR bill was one of many presented for the 1902 parliamentary session (including several for the Hammersmith to City route) and it was examined by another joint committee under Lord Windsor. The proposal received support from the mainline railway companies with which the route interchanged, and from the C&SLR, which had a station at Bank. The London County Council and the City Corporation also supported the plan. The Metropolitan Railway opposed the plan, seeing further competition to its services on the Inner Circle. Questions were raised in Parliament about the safety of tunnelling so close to the vaults of many City banks and the risk that subsidence might cause vault doors to jam shut. Another concern was the danger of undermining the foundations of the Dutch Church in Austin Friars. The Windsor committee rejected the section between Shepherd's Bush and Bank, preferring a competing route from the J. P. Morgan-backed Piccadilly, City and North East London Railway (PC&NELR). Without the main part of its new route, the CLR withdrew the City loop, leaving a few improvements to the existing line to be approved in the Central London Railway Act, 1902 on 31 July 1902. In late 1902, the PC&NELR plans collapsed after a falling out between the scheme's promoters led to a crucial part of the planned route coming under the control of a rival, the Underground Electric Railways Company of London (UERL), which withdrew it from parliamentary consideration. With the PC&NELR scheme out of the way, the CLR resubmitted its bill in 1903, although consideration was again held up by Parliament's establishment of the Royal Commission on London Traffic, which was tasked with assessing how transport in London should be developed. While the Commission deliberated, any review of bills for new lines and extensions was postponed, so the CLR withdrew the bill. 
The CLR briefly re-presented the bill for the 1905 parliamentary session but withdrew it again, before making an agreement with the UERL in October 1905 that neither company would submit a bill for an east–west route in 1906. The plan was then dropped as the new trains with driving positions at both ends made it possible for the CLR to reduce the minimum interval between trains to two minutes without building the loop. ### Wood Lane, 1906–1908 In 1905, the government announced plans to hold an international exhibition to celebrate the Entente cordiale signed by France and Britain in 1904. The location of the Franco-British Exhibition's White City site was across Wood Lane from the CLR's depot. To exploit the opportunity to carry visitors to the exhibition, the CLR announced a bill in November 1906 seeking to create a loop from Shepherd's Bush station and back, on which a new Wood Lane station close to the exhibition's entrance would be built. The new work was approved on 26 July 1907 in the Central London Railway Act, 1907. The new loop was formed by constructing a section of tunnel joining the end of the dead-end reversing tunnel to the west of Shepherd's Bush station and the north side of the depot. From Shepherd's Bush, trains ran anti-clockwise around the single-track loop, first through the original depot access tunnel, then past the north side of the depot and through the new station before entering the new section of tunnel and returning to Shepherd's Bush. Changes were also made to the depot layout to accommodate the new station and the new looped operations. Construction work on the exhibition site had started in January 1907, and the exhibition and new station opened on 14 May 1908. The station was on the surface between the two tunnel openings and was a basic design by Harry Bell Measures. It had platforms on both sides of the curving track – passengers alighted on to one and boarded from the other (an arrangement now known as the Spanish solution). ### Liverpool Street, 1908–1912 With the extension to Wood Lane operational, the CLR revisited its earlier plan for an eastward extension from Bank to Liverpool Street station. This time, the Great Eastern Railway (GER) agreed to allow the CLR to build a station under its own main line terminus, provided that no further extension would be made north or north-east from there – territory served by the GER's routes from Liverpool Street. A bill was announced in November 1908 for the 1909 parliamentary session and received royal assent as the Central London Railway Act, 1909 on 16 August 1909. Construction started in July 1910 and the new Liverpool Street station was opened on 28 July 1912. Following the successful introduction of escalators at the DR's Earl's Court station in 1911, it was the first underground station in London to be built with them. Four were provided, two to Liverpool Street station and two to the North London Railway's adjacent Broad Street station. ### Ealing Broadway, 1911–1920 The CLR's next planned extension was westward to Ealing. In 1905, the Great Western Railway (GWR) had obtained parliamentary approval to construct the Ealing and Shepherd's Bush Railway (E&SBR), connecting its main line route at Ealing Broadway to the West London Railway (WLR) north of Shepherd's Bush. From Ealing, the new line was to curve north-east through still mostly rural North Acton, then run east for a short distance parallel with the GWR's High Wycombe line, before curving south-east. 
The line was then to run on an embankment south of Old Oak Common and Wormwood Scrubs before connecting to the WLR a short distance to the north of the CLR's depot. Construction work did not begin immediately, and, in 1911, the CLR and GWR agreed running powers for CLR services over the line to Ealing Broadway. To make a connection to the E&SBR, the CLR obtained parliamentary permission for a short extension northward from Wood Lane station on 18 August 1911 in the Central London Railway Act, 1911. The new E&SBR line was constructed by the GWR and opened as a steam-hauled freight-only line on 16 April 1917. Electrification of the track and the start of CLR services were postponed until after the end of World War I, not starting until 3 August 1920 when a single intermediate station at East Acton was also opened. Wood Lane station was modified and extended to accommodate the northward extension tracks linking to the E&SBR. The existing platforms on the loop were retained, continuing to be used by trains that were turning back to central London, and two new platforms for trains running to or from Ealing were constructed at a lower level on the new tracks, which connected to each side of the loop. Ealing Broadway station was modified to provide additional platforms for CLR use between the existing but separate sets of platforms used by the GWR and the DR. To provide services over the 6.97-kilometre (4.33 mi) extension, the CLR ordered 24 additional driving motor carriages from the Brush Company, which, when delivered in 1917, were first borrowed by the Baker Street and Waterloo Railway for use in place of carriages ordered for its extension to Watford Junction. The new carriages were the first for tube-sized trains that were fully enclosed, without gated platforms at the rear, and were provided with hinged doors in the sides to speed up passenger loading times. To operate with the new stock, the CLR converted 48 existing carriages, providing a total of 72 carriages for twelve six-car trains. Modifications made while in use on the Watford extension meant that the new carriages were not compatible with the rest of the CLR's fleet and they became known as the Ealing stock. The E&SBR remained part of the GWR until nationalisation at the beginning of 1948, when (with the exception of Ealing Broadway station) it was transferred to the London Transport Executive. Ealing Broadway remained part of British Railways, as successor to the GWR. ### Richmond, 1913 and 1920 In November 1912, the CLR announced plans for an extension from Shepherd's Bush on a new south-westwards route. Tunnels were planned under Goldhawk Road, Stamford Brook Road and Bath Road to Chiswick Common where a turn to the south would take the tunnels under Turnham Green Terrace for a short distance. The route was then to head west again to continue under Chiswick High Road before coming to the surface east of the London and South Western Railway's (L&SWR's) Gunnersbury station. Here a connection would be made to allow the CLR's tube trains to run south-west to Richmond station over L&SWR tracks that the DR shared and had electrified in 1905. Stations were planned on Goldhawk Road at its junctions with The Grove, Paddenswick Road and Rylett Road, at Emlyn Road on Stamford Brook Road, at Turnham Green Terrace (for a connection with the L&SWR's/DR's Turnham Green station) and at the junction of Chiswick High Road and Heathfield Terrace.
Beyond Richmond, the CLR saw further opportunities to continue over L&SWR tracks to the commuter towns of Twickenham, Sunbury and Shepperton, although this required the tracks to be electrified. The CLR received permission for the new line to Gunnersbury on 15 August 1913 in the Central London Railway Act, 1913, but World War I prevented the works from commencing and the permission expired. In November 1919, the CLR published a new bill to revive the Richmond extension, but using a different route that required only a short section of new tunnel construction. The new proposal was to construct tunnels southwards from Shepherd's Bush station, which would come to the surface to connect to disused L&SWR tracks north of Hammersmith Grove Road station that had closed in 1916. From Hammersmith, the disused L&SWR tracks continued westwards, on the same viaduct as the DR's tracks through Turnham Green to Gunnersbury and Richmond. The plan required electrification of the disused tracks, but avoided the need for costly tunnelling and would have shared the existing stations on the route with the DR. The plan received assent on 4 August 1920 as part of the Central London and Metropolitan District Railway Companies (Works) Act, 1920, although the CLR made no attempt to carry out any of the work. The disused L&SWR tracks between Ravenscourt Park and Turnham Green were eventually used for the westward extension of the Piccadilly line from Hammersmith in 1932. ## Competition, co-operation and sale, 1906–1913 From 1906 the CLR began to experience a large fall in passenger numbers caused by increased competition from the DR and the MR, which electrified the Inner Circle in 1905, and from the Great Northern, Piccadilly and Brompton Railway (GNP&BR) which opened its rival route to Hammersmith in 1906. Road traffic also offered a greater challenge as motor buses began replacing the horse-drawn variety in greater numbers. In an attempt to maintain income, the company increased the flat fare for longer journeys to three pence in July 1907 and reduced the fare for shorter journeys to one penny in March 1909. Multiple booklets of tickets, which had previously been sold at face value, were offered at discounts, and season tickets were introduced from July 1911. The CLR looked to economise through the use of technological developments. The introduction in 1909 of dead-man's handles to the driver controls and "trip cock" devices on signals and trains meant that the assistant driver was no longer required as a safety measure. Signalling automation allowed the closure of many of the line's 16 signal boxes and a reduction in signalling staff. From 1911, the CLR operated a parcel service, making modifications to the driving cars of four trains to provide a compartment in which parcels could be sorted. These were collected at each station and distributed to their destinations by a team of tricycle-riding delivery boys. The service made a small profit, but ended in 1917 because of wartime labour shortages. The problem of declining revenues was not limited to the CLR; all of London's tube lines and the sub-surface DR and MR were affected by competition to some degree. The reduced income from the lower passenger numbers made it difficult for the companies to pay back borrowed capital, or to pay dividends to shareholders. The CLR's dividend payments fell to 3 per cent from 1905, but those of the UERL's lines were as low as 0.75 per cent.
From 1907, the CLR, the UERL, the C&SLR, and the Great Northern & City Railway companies began to introduce fare agreements. From 1908, they began to present themselves through common branding as the Underground. In November 1912, after secret take-over talks, the UERL announced that it was purchasing the CLR, swapping one of its own shares for each of the CLR's. The take-over took effect on 1 January 1913, although the CLR company remained legally separate from the UERL's other tube lines. ## Improvements and integration, 1920–1933 Following the takeover, the UERL took steps to integrate the CLR's operations with its own. The CLR's power station was closed in March 1928 with power instead being supplied from the UERL's Lots Road Power Station in Chelsea. Busier stations were modernised: Bank and Shepherd's Bush stations received escalators in 1924, Tottenham Court Road and Oxford Circus in 1925 and Bond Street in 1926, which also received a new entrance designed by Charles Holden. Chancery Lane and Marble Arch stations were also rebuilt to receive escalators in the early 1930s. On 5 November 1923, new stations were opened on the Ealing extension at North Acton and West Acton. They were built to serve residential and industrial developments around Park Royal and, like East Acton, the station buildings were basic structures with simple timber shelters on the platforms. The poor location of British Museum station and the lack of an interchange with the GNP&BR's station at Holborn had been considered a problem by the CLR almost since the opening of the GNP&BR in 1906. A pedestrian subway to link the stations was considered in 1907, but not carried out. A proposal to enlarge the tunnels under High Holborn to create new platforms at Holborn station for the CLR and to abandon British Museum station was included in a CLR bill submitted to parliament in November 1913. This was given assent in 1914, but World War I prevented any works taking place, and it was not until 1930 that the UERL revived the powers and began construction work. The new platforms, along with a new ticket hall and escalators to both lines, opened on 25 September 1933, British Museum station having closed at the end of traffic the day before. Between March 1926 and September 1928, the CLR converted the remaining gate stock carriages in phases. The end platforms were enclosed to provide additional passenger accommodation and two sliding doors were inserted in each side. The conversions increased capacity and allowed the CLR to remove gatemen from the train crews, with responsibility for controlling doors moving to the two guards who each managed half the train. Finally, the introduction of driver/guard communications in 1928 allowed the CLR to dispense with the second guard, reducing a train crew to just a driver and a guard. The addition of doors in the sides of cars caused problems at Wood Lane where the length of the platform on the inside of the returning curve was limited by an adjacent access track into the depot. The problem was solved by the introduction of a pivoted section of platform which usually sat above the access track and allowed passengers to board trains as normal, but which could be moved to allow access to the depot. ## Move to public ownership, 1923–1933 Despite closer co-operation and improvements made to the CLR stations and to other parts of the network, the Underground railways continued to struggle financially.
The UERL's ownership of the highly profitable London General Omnibus Company (LGOC) since 1912 had enabled the UERL group, through the pooling of revenues, to use profits from the bus company to subsidise the less profitable railways. However, competition from numerous small bus companies during the early 1920s eroded the profitability of the LGOC and had a negative impact on the profitability of the whole UERL group. To protect the UERL group's income, its chairman Lord Ashfield lobbied the government for regulation of transport services in the London area. Starting in 1923, a series of legislative initiatives were made in this direction, with Ashfield and Labour London County Councillor (later MP and Minister of Transport) Herbert Morrison at the forefront of debates as to the level of regulation and public control under which transport services should be brought. Ashfield aimed for regulation that would give the UERL group protection from competition and allow it to take substantive control of the LCC's tram system; Morrison preferred full public ownership. After seven years of false starts, a bill was announced at the end of 1930 for the formation of the London Passenger Transport Board (LPTB), a public corporation that would take control of the UERL, the MR and all bus and tram operators within an area designated as the London Passenger Transport Area. The board was a compromise – public ownership but not full nationalisation – and came into existence on 1 July 1933. On this date, ownership of the assets of the CLR and the other Underground companies transferred to the LPTB. ## Legacy In 1935 the LPTB announced plans as part of its New Works Programme to extend the CLR at both ends by taking over and electrifying local routes owned by the GWR in Middlesex and Buckinghamshire and by the LNER in east London and Essex. Work in the tunnels to lengthen platforms for longer trains and to correct misaligned tunnel sections that slowed running speeds was also carried out. A new station was planned to replace the cramped Wood Lane. The service from North Acton through Greenford and Ruislip to Denham was due to open between January 1940 and March 1941. The eastern extension from Liverpool Street to Stratford, Leyton and Newbury Park and the connection to the LNER lines to Hainault, Epping and Ongar were intended to open in 1940 and 1941. World War II caused works on both extensions to be halted and London Underground services were extended in stages from 1946 to 1949, although the final section from West Ruislip to Denham was cancelled. Following the LPTB takeover, the Harry Beck-designed tube map began to show the route's name as the "Central London Line" instead of "Central London Railway". In anticipation of the extensions taking its services far beyond the boundaries of the County of London, "London" was omitted from the name on 23 August 1937; thereafter it was simply the "Central line". The CLR's original tunnels form the core of the Central line's 72.17-kilometre (44.84 mi) route. During World War II, 4 kilometres (2.5 mi) of completed tube tunnels built for the eastern extension between Gants Hill and Redbridge were used as a factory by Plessey to manufacture electronic parts for aircraft.
Other completed tunnels were used as air-raid shelters at Liverpool Street, Bethnal Green and between Stratford and Leyton, as were the closed parts of British Museum station. At Chancery Lane, new tunnels 16 feet 6 inches (5.03 m) in diameter and 1,200 feet (370 m) long were constructed below the running tunnels during 1941 and early 1942. These were fitted out as a deep-level shelter for government use as a protected communications centre. Work on a similar shelter was planned at Post Office station (renamed St Paul's in 1937) but was cancelled; the lift shafts that were made redundant when the station was given escalators in January 1939 were converted for use as a protected control centre for the Central Electricity Board. ## See also - Horace Field Parshall, chairman and designer of the line's electrical distribution system
3,652,716
Silent Hill 4: The Room
1,168,325,900
2004 video game
[ "2000s horror video games", "2004 video games", "Konami games", "PlayStation 2 games", "Psychological horror games", "Silent Hill games", "Single-player video games", "Survival video games", "Video games about ghosts", "Video games developed in Japan", "Video games scored by Akira Yamaoka", "Video games set in the United States", "Windows games", "Xbox games" ]
Silent Hill 4: The Room is a 2004 survival horror game developed by Team Silent, a group in Konami Computer Entertainment Tokyo, and published by Konami. The fourth installment in the Silent Hill series, the game was released in Japan in June and in North America and Europe in September. Silent Hill 4 was released for the PlayStation 2, Xbox, and Microsoft Windows. Its soundtrack was released at the same time. In 2012, it was released on the Japanese PlayStation Network. On October 2, 2020, it was re-released on GOG.com with patches to make it playable on Windows 10. Unlike the previous installments, which were set primarily in the town of Silent Hill, this game is set in the southern part of the fictional city of Ashfield, and follows Henry Townshend as he attempts to escape from his locked-down apartment. During the course of the game, Henry explores a series of supernatural worlds and finds himself in conflict with an undead serial killer named Walter Sullivan. Silent Hill 4 features an altered gameplay style with third-person navigation and plot elements taken from previous installments. Upon its release, the game received generally favorable critical reaction, but its departure from the typical features of the series received a range of reactions. ## Gameplay The objective of Silent Hill 4: The Room is to guide player character Henry Townshend as he seeks to escape from his apartment. Gameplay centers on the apartment, which is shown through a first-person perspective and contains the only save point. The other areas of the game are reached through holes formed in the apartment. For the first half of the game, the room restores Henry's health; in the second half of the game, however, the room becomes possessed by hauntings that drain his health. In the main levels of the game the player uses the usual third-person view of the Silent Hill series. The player has a limited item inventory which can be managed by leaving unneeded items in a chest in Henry's room. Silent Hill 4 emphasizes combat during gameplay, with a near-absence of complex puzzles in favor of simple item-seeking tasks. Unlike previous games in the series, separate difficulty settings for combat and puzzles are not available; changing the combat difficulty also affects the difficulty of puzzles. In the second half of the game Henry is accompanied and helped in combat by his neighbor Eileen Galvin. Eileen cannot die while she is with Henry, although as she takes damage she succumbs to possession, which also occurs if she is given a firearm. The damage Eileen takes in the game determines whether or not she dies during the final boss fight, directly affecting the ending achieved. ### Combat Combat in Silent Hill 4 follows the pattern set by the other games with a few key differences. The player has access to a variety of melee weapons but only two firearms. Certain melee weapons are breakable. Items which can be equipped, such as talismans (which protect the player from damage from the hauntings in Henry's room), will eventually break after a short period of use. Another key difference in the combat system is that melee attacks may be "charged" before they are used, inflicting a greater amount of damage to an opponent than a quick attack. One of the most significant changes is the introduction of immortal ghosts of antagonist Walter Sullivan's victims. The ghosts, which have the ability to hurt Henry, can be nullified by two items. These items can also exorcise the hauntings in Henry's apartment.
Ghosts can also be knocked down for a lengthy period of time with one of two special bullets or pinned permanently with a special sword. ## Plot ### Characters The protagonist and player character of Silent Hill 4 is Henry Townshend, a resident of the South Ashfield Heights Apartments building in the fictitious town of Ashfield. Henry is an "average" man who has been described by Konami as an introvert in his late 20s. For the most part, Henry navigates the game's world alone, although he eventually works with his neighbor Eileen Galvin. Henry also deals with the new supporting characters of Cynthia Velasquez, Andrew DeSalvo, Richard Braintree and Jasper Gein. Silent Hill 4: The Room incorporates two unseen, minor characters from previous installments: investigative journalist Joseph Schreiber and deceased serial killer Walter Sullivan. Joseph was first referenced in Silent Hill 3 with a magazine article he had written condemning the "Hope House" orphanage run by Silent Hill's religious cult, which the game's protagonist, Heather, can discover. In Silent Hill 2, Walter is referenced in a newspaper article detailing his suicide in his jail cell after his murder of two children. Sullivan appears in two forms: an undead adult enemy and a neutral child supporting character. Walter's previous victims play a small role in the game as enemies. ### Plot Henry Townshend finds himself locked in his apartment for five days with no means of communication and suffering recurring nightmares. Shortly afterwards, a hole appears in the wall of his bathroom, through which he enters alternate dimensions. He ends up in an abandoned subway station, where he meets Cynthia Velasquez, a woman convinced she is dreaming and who is soon killed by an unknown man. Awakening in his apartment, he hears confirmation on his radio that she is indeed dead in the real world. Similar events repeat with the next few people Henry finds: Jasper Gein, a man fascinated with the paranormal and the cult of Silent Hill; Andrew DeSalvo, a former employee of an orphanage run by the aforementioned cult; and Richard Braintree, a resident in Henry's apartment complex. All the deaths bear similarities to deceased serial killer Walter Sullivan's modus operandi. Henry finds diary scraps belonging to journalist Joseph Schreiber—the former inhabitant of his apartment—who was investigating Walter's murder spree. He discovers that Walter is an orphan who has been led to believe his biological mother was in Henry's apartment, where he had been found abandoned after birth. To "purify" the area, Walter, now in an undead state, is attempting to perform a ritual, which requires twenty-one murders to be committed. As Walter prepares to kill his twentieth victim, Eileen Galvin, a child manifestation of himself appears and stops him. Eileen agrees to join Henry in locating Joseph. At the same time, supernatural occurrences begin to manifest in Henry's apartment. The two eventually find Joseph's ghost, who tells them that the only way to escape is to kill Walter, and reveals that Henry is the intended twenty-first victim. Shortly after Henry acquires Walter's umbilical cord, which they require to kill him, Eileen leaves Henry and returns to his apartment. Henry follows and finds her, possessed and about to walk into a deathtrap, and a fight between Henry and Walter ensues. There are four possible endings, determined by whether or not Eileen survives and the condition of Henry's apartment.
The "21 Sacraments" ending sees Walter and his child manifestation in his apartment, while the radio reveals that Henry and Eileen have died, along with several others. In "Eileen's Death," Henry awakens in his apartment, and learns from his radio that Eileen has died, to his sorrow. In "Mother," Henry escapes from his apartment building, and brings flowers to Eileen, who plans to return to the apartment building. His apartment, meanwhile, has become completely possessed. "Escape" begins similarly to the "Mother" ending, but Eileen resolves to find a new place to live, and his apartment is not shown to be possessed. ## Development Development of the fourth Silent Hill game by Konami Computer Entertainment Tokyo's development group Team Silent began shortly after the release of Silent Hill 2 and alongside Silent Hill 3, with the intentions of creating a new style of game that would take the series in a different direction than the previous games. Despite what has been popularized around the Internet, Silent Hill 4 was always meant to be connected to Silent Hill and not an unrelated separate horror game that later became a Silent Hill title, although different gameplay mechanics and change were intended. News of the game's development was made public by October 2003, and official announcements by Konami followed at Gamers' Day 2004. The game was produced by the series' recurring sound designer and composer Akira Yamaoka. Its working title, prior to its incorporation into the rest of the series, was simply Room 302. The main concept behind the new game structure was to take the idea of "the room" as "the safest part of your world" and make it a danger zone. The first-person perspective was included in this area of the game to give the room's navigation a personal and claustrophobic feel. The producers nonetheless retained the classic third-person perspective in all other areas to accommodate the increased emphasis on action and combat. The developers re-used locations already explored in the first half of the game to show the changes undergone by each character introduced in the locations. It was noted that the game, like previous titles in the series, refers to the film Jacob's Ladder (1990) and that the protagonist Henry Townshend shares a likeness to actor Peter Krause. The architecture of the apartment and the addition of the hole is comparable to a similar non-Euclidean space in author Mark Z. Danielewski's novel House of Leaves (2000). Other nods includes the novel Rosemary's Baby (1967), American television series Twin Peaks (1990–1991), and American horror author Stephen King. The creators of the game have acknowledged writer Ryū Murakami's book Coin Locker Babies (1980) and the film The Cell (2000) as inspirations for the game's premise. ### Music The soundtrack for Silent Hill 4: The Room was released alongside the game in 2004, composed by Akira Yamaoka with vocals by Mary Elizabeth McGlynn and Joe Romersa. The Japanese version featured a second disc containing music by series composer Akira Yamaoka played along to the reading of traditional Japanese stories. The American version contained 13 exclusive tracks and remixes. A remix of the song "Your Rain" from the game's soundtrack was used on Konami's Dance Dance Revolution Extreme. Several tracks from the game were also featured in the Silent Hill Experience promotional UMD. ## Release and reception Silent Hill 4: The Room was first released in Japan on June 17, 2004. 
The game was shipped for its subsequent North American and European releases on September 7, with pre-ordering customers receiving the soundtrack for free with the game in the former market. The game, alongside its two PS2 predecessors, was rereleased in 2006 as part of The Silent Hill Collection European boxset, as a tie-in with the release of the Silent Hill film, and again in 2009. Microsoft confirmed that their Xbox 360 console is backward compatible with the game's Xbox port. The previews of Silent Hill 4: The Room provided at E3 2004 led IGN to name it the best PlayStation 2 adventure game in show. Upon its release in 2004 the game also attracted the attention of mainstream news outlets CNN, the BBC and The Times. Silent Hill 4 topped game sales charts in Japan during a video game sales slump, but dropped to tenth place one week later. Official statements by Konami referred to sales of the game in North America as "favorable". Review aggregator Metacritic shows an average score rating of 76 out of 100 for both the PS2 and Xbox versions, indicating "generally favorable reviews". Marc Saltzman of CNN wrote: "Unlike Hollywood horror movies that often get worse with each new sequel (Friday the 13th Part VIII: Jason Takes Manhattan, for example), Konami's scary Silent Hill series gets better -- and creepier -- with age". Video game magazine Game Informer praised Silent Hill 4: The Room, stating that its "disarming voyeurism, bizarre camera angles, and exceptionally well-placed tension is what the series has been trying to do all along, but The Room is the first entry to do it right". According to a reviewer for Edge magazine, "[l]ook at it one way, and it's a choking journey with unprecedented attention to unease and psychological horror, a game framed with unparalleled sophistication. From another angle, it's just a clunky PSone throwback, with all the design wit of a dodo". The New York Times found it completely lacking in "true terror". The plot of the game was generally well received by reviewers, who praised it as horrifying, compelling, and "dark". 1UP.com praised the titular room as constantly maintaining a sense of unease for the player. Game Revolution enjoyed the relatively normal appearance of the environment outside Henry's room at the game's beginning, writing: "Are these strange otherworlds real, or are they just the nightmares of some lunatic shut-in who chained up his own door? It effectively blurs the line between reality and delusion, leading to a singularly creepy game". In contrast, IGN's Douglass C. Perry felt that the familiarity of the story as compared with the other Silent Hill storylines detracted from its horror appeal, although he cared about its characters more than in previous games. Critics were, for the most part, pleased with the voice acting in the game, although it did receive criticism for the characters' calmness. Nevertheless, producer and composer Akira Yamaoka said that the characters were, to him, "a little weak". The graphics of the game environments were praised as detailed. According to Bethany Massimilla of GameSpot: "The game looks its best in corroded, bloody, gritty environments, like the damp, steel halls of the water prison or the subterranean subway layers that, at one point in the game, are walled in living, moving flesh". The character and monster designs received praise as well-done. Reviewers generally commended the audio as contributing to the horror of the game, although 1UP wrote that it was sub-par for the series. 
The gameplay's departures from those of previous installments in the series drew mixed reactions. GameZone enjoyed the changes, writing that they were needed to keep the series fresh. The decision to place the only save point and storage area for items in the titular room, with no option to discard unwanted items, was generally criticised, with reviewers finding it inconvenient to have to return there. The puzzles also drew mixed reactions. Kristan Reed of Eurogamer expressed disappointment with the degree to which the game had been geared as a combat game with an absence of standard Silent Hill puzzles, while GameSpy's Bryn Williams worried that the puzzles' obscurity and "non-lateral" nature might discourage more casual players. IGN disliked the replacement of logic-based puzzles in favour of obtaining various items, and was also displeased by the lack of boss fights. Another source of criticism was the repetition of the first four environments during the second half of the game. Metacritic shows a lower average rating of 67 out of 100 for the PC version, indicating "mixed or average reviews". IGN's Perry complained about "the blurriest textures we've seen in years and some serious graphical glitches" and "extremely low mouse sensitivity" inhibiting gameplay. GameSpot's review praised the graphics as having "been optimized well for the PC" but acknowledged that "keyboard and mouse controls just don't fare that well in an environment of constantly shifting perspective views that can make navigation frustrating". Silent Hill 4 was a nominee for GameSpot's 2004 "Best Adventure Game" award, which ultimately went to Myst IV: Revelation.
4,703,033
Concerto delle donne
1,171,567,073
Group of professional female singers in the late Renaissance court of Ferrara, Italy
[ "1580 establishments in Italy", "1597 disestablishments in Europe", "Baroque music", "History of Ferrara", "Italian classical music groups", "Musical groups established in the 16th century", "Renaissance music", "Women in classical music" ]
The concerto delle donne (lit. 'consort of ladies'; also concerto di donne or concerto delle (or di) dame) was a group of professional female singers in the late Italian Renaissance, primarily in the court of Ferrara, Italy. Renowned for their technical and artistic virtuosity, the ensemble was founded by Alfonso II, Duke of Ferrara, in 1580 and was active until the court was dissolved in 1597. Giacomo Vincenti, a music publisher, praised the women as "virtuose giovani" (young virtuosas), echoing the sentiments of contemporaneous diarists and commentators. The origins of the ensemble lay in an amateur group of high-placed courtiers who performed for each other within the context of the Duke's informal musica secreta (lit. 'secret music') in the 1570s. The ensemble evolved into an all-female group of professional musicians, the concerto delle donne, who performed formal concerts for members of the inner circle of the court and important visitors. Their signature style of florid, highly ornamented singing brought prestige to Ferrara and inspired composers of the time. The concerto delle donne revolutionized the role of women in professional music, and continued the tradition of the Este court as a musical center. Word of the ladies' ensemble spread across Italy, inspiring imitations in the powerful courts of the Medici and Orsini. The founding of the concerto delle donne was the most important event in secular Italian music in the late sixteenth century; the musical innovations established in the court were important in the development of the madrigal, and eventually the seconda pratica. ## History ### Formation At the court in Ferrara, the Duke Alfonso II d'Este formed a group of mostly female singers by at least 1577. They performed within the context of the Duke's ongoing musica secreta (lit. 'secret music'), a regular series of chamber music concerts under the Duke's artistic control performed for a private audience. Although it is uncertain whether the group's members were amateurs or professional musicians, they were noblewomen and would have attended court regardless. These singers included sisters Lucrezia and Isabella Bendidio, as well as Leonora Sanvitale and Vittoria Bentivoglio. The professional bass singer Giulio Cesare Brancaccio also joined the ensemble. The Duke formally established the concerto delle donne (lit. 'consort of ladies'; also concerto di donne or concerto delle (or di) dame) in 1580. He did not announce the creation of a professional, all-female ensemble; instead, the group infiltrated and gradually dominated the musica secreta concerts. This new ensemble was created by the Duke in part to amuse his young new wife, Margherita Gonzaga d'Este, who was musically inclined herself, and in part to help the Duke achieve his artistic goals for the court. Margherita's influence on the church through her brother-in-law, the bishop Luigi d'Este, allowed the concerto to use church assets such as the San Vito convent outside of Ferrara. The first recorded performance by the professional ladies was on 20 November 1580; Brancaccio joined the new group the next month. By the 1581 carnival season, they were performing together regularly. This new "consort of ladies" was viewed as an extraordinary and novel phenomenon; most witnesses did not connect the concerto delle donne with the earlier group of ladies from the 1570s.
However, modern musicologists now view the earlier group as a crucial part of the creation and development of the social and vocal genre of the concerto delle donne. The culture at the Italian courts of that time had a political dimension, as families aimed to present their greatness by non-violent means. ### Roster and duties The most prominent member of the new ensemble was Laura Peverara, who was joined by Livia d'Arco and Anna Guarini, daughter of the prolific poet Giovanni Battista Guarini. The latter wrote poems for many of the madrigals which were set for the ensemble, and wrote texts for the balletto delle donne dances. The well-known singer Tarquinia Molza was involved with the group, but modern scholars disagree on whether she sang with them or served solely as an advisor and instructor. Whether Molza ever performed with them or not, she was ousted from any role in the group after her affair with the composer Giaches de Wert came to light in 1589. After the dismissal of Brancaccio for insubordination in 1583, no more permanent male members of the musica secreta were hired; however, the ensemble occasionally sang with male singers. The composer Luzzasco Luzzaschi directed and wrote music to showcase the ensemble, and accompanied them on the harpsichord. The composer and lutenist Ippolito Fiorini was the maestro di cappella, in charge of the entire court's musical activities. In addition to his duties to the overall court, Fiorini accompanied the concerto on the lute. The singers of the concerto delle donne were officially ladies-in-waiting of the Duchess Margherita, but were hired primarily as singers. Peverara's musical abilities prompted the Duke to specifically ask the Duchess to bring Peverara from Mantua as part of her retinue. The new singers played instruments, including the lute, harp, and viol, but focused their energies on developing vocal virtuosity. This skill became highly prized in the mid-sixteenth century, beginning with basses like Brancaccio, but by the end of the century virtuosic bass singing went out of style, and higher voices came into vogue. The ladies' musical duties included performing with the duchess' balletto delle donne, a group of female dancers who frequently crossdressed. Despite their upper-class background, the singers would not have been welcomed into the court's inner circle had they not been such skilled performers. D'Arco belonged to the nobility, but only to a minor family. Peverara was the daughter of a wealthy merchant, and Molza came from a prominent family of artists. The women performed up to six hours a day, either singing their own florid repertoire from memory, sight-reading from partbooks, or participating in the balletti as singers and dancers. Thomasin LaMay posits that the women of the concerti delle donne provided sexual favors for members of the court, but there is no evidence for this, and the circumstances of their marriages and dowries argue against this interpretation. The women were paid salaries and received other benefits, such as dowries and apartments in the ducal palace. Peverara received 300 scudi a year and lodging in the ducal palace for herself, her husband, and her mother – as well as a dowry of 10,000 scudi upon her marriage. Despite having married three times in the hopes of producing an heir, Alfonso II died in 1597 without issue, legitimate or otherwise.
His cousin Cesare inherited the Duchy, but the city of Ferrara, which was legally a Papal fief, was annexed to the Papal States in 1598 through a combination of "firm diplomacy and unscrupulous pressure" by Pope Clement VIII. The Este court had to abandon Ferrara in disarray and its music establishment was disbanded. While the existence of the concerto delle donne was widely known, its detailed history was largely lost, dispersed among archival records, until the beginning of the 20th century, when the Italian literature critic Angelo Solerti drew attention to Ferrara's 16th century court culture. ## Music The greatest musical innovation of the concerto delle donne was its departure from one voice singing diminutions above an instrumental accompaniment to two or three highly ornamented voices singing varying diminutions at once. Such ornaments were meticulously notated by the composers, leaving a detailed record of the concerto delle donne's performance practice. Many Italian composers wrote music either inspired by the concerto delle donne or specifically for them. The upper voices were written to display the skill of the singers; oftentimes lower static voices accompanied them in contrast. Such works are characterized by a high tessitura, a virtuosic and florid style, and a wide range. Lodovico Agostini's third book of madrigals was perhaps the first publication fully dedicated to the new singing style. Agostini dedicated songs to Guarini, Peverara, and Luzzaschi. Gesualdo wrote music for the group in 1594 while visiting Ferrara to marry the Duke's niece Leonora d'Este; much of Gesualdo's music for the group does not survive. De Wert's Seventh Book of Madrigals à 5 and Marenzio's First Book à 6 were the first true musical monuments to the new concerto delle donne. Monteverdi's Canzonette a tre voci was probably influenced by the "Ladies of Ferrara". Peverara was singularly lauded for her skill in this genre of accompanied solo singing. Some madrigals in the two-book Madrigaletti et napolitane by Giovanni de Macque were written with the concerto delle donne in mind, due to their technically demanding content. Works written for the concerto delle donne were not limited to music: Torquato Tasso and G.B. Guarini wrote poems dedicated to the ladies in the concerto, some of which were later set by composers. Tasso wrote over seventy-five poems to Peverara alone. Luzzaschi's book of madrigals for one, two, and three sopranos with keyboard accompaniment, published in 1601 as the Madrigali per cantare e sonare, comprises works written throughout the 1580s. In 1584, Alessandro Striggio, responding to requests from Francesco I de' Medici, Grand Duke of Tuscany, described the ladies and composed pieces imitating their style so that Francesco could start his own concerto delle donne. Striggio mentioned an ornamented four-voice madrigal for three sopranos and a dialogue with imitative diminutions for two sopranos. He added that he had forgotten the intabulation for the madrigal in Mantua, and noted that the skilled singer Giulio Caccini could play the bass part on either lute or harpsichord. Baldini's first publication for the Duke was Il lauro secco (1582), which was followed by Il lauro verde (1583), both containing music by the leading composers of Rome and Northern Italy.
Music in honor of the concerto was printed as far away as Venice, with Paolo Virchi's First Book à 5, published by Giacomo Vincenti and Ricciardo Amadino, containing the madrigal which begins SeGU'ARINAscer LAURA e prenda LARCO / Amor soave e dolce / Ch'ogni cor duro MOLCE. This capitalization is in the original, clearly spelling out the equivalent of the names Anna Guarini, Laura Peverara, Livia d'Arco, and Tarquinia Molza. With the obvious exception of Brancaccio, all the singers in the concerto were female sopranos. There is no evidence that the ensemble used falsettists. This fact is surprising, considering that castrati were shortly to become the biggest stars of a new art form, opera. In 1607, Monteverdi's Orfeo featured four castrato roles out of a cast of nine, showing the new dominance of this vocal type. It also contrasts with Margherita's father's court, where Guglielmo Gonzaga actively sought out eunuchs. The ladies were thoroughly coached and rehearsed in their work, down to all hand gestures and facial movements. Polyphonic arrangements called for the women to sing diminutions (melodic divisions of longer notes) and other ornaments in consort. Diminutions were traditionally improvised in performance. However, to coordinate their voices, they transcribed and rehearsed the music in advance, transforming these improvisations into highly developed musical forms that composers would emulate. The singers may have used the more traditional practice in their solo repertoire, performing ornaments extemporaneously. Specific ornaments used by the concerto delle donne, mentioned in a source from 1581, were such popular sixteenth-century devices as passaggi (division of a long note into many shorter notes, usually stepwise), cadenze (decoration of the penultimate note, sometimes quite elaborate), and tirate (rapid scales). Accenti (connection of two longer notes, using dotted rhythms), a staple of early Baroque music, are absent from the list. In 1592, Caccini claimed that Alfonso II asked him to teach his ladies the new accenti and passaggi styles. ### Styles There are two separate styles of madrigals written for and inspired by the concerto delle donne. The first is the "luxuriant" style of the 1580s. The second is music in the style of the seconda pratica, written in the 1590s. Luzzaschi wrote music in both of these styles. The style of the earlier period, as exemplified in the works of Luzzaschi, involves the use of madrigal texts written by poets within the Ferrarese sphere, such as Tasso and G.B. Guarini. These poems tend to be short and witty with single sections. Musically, Luzzaschi's works are highly sectionalized and based on melodic themes, rather than harmonic structures. Luzzaschi lessens the sectionalizing effect of his compositional techniques by weakening cadences. His tendency to reiterate melodies in different voices, including the bass voice, leads to tonal creations which are sometimes bewildering. These aspects make Luzzaschi's music much more polyphonic than Monteverdi's later compositions, and thus more conservative; however, Luzzaschi's use of jarring melodic leaps and harmonic dissonance is individualistic. These dissonances, which contrast sharply with the careful treatment of dissonance during most of the 16th century, are closely connected with the ornamented polyphonic madrigals of the concerto delle donne. In Giovanni Artusi's Socratic dialogue, the character defending Monteverdi connects haphazard treatment of dissonance with ornamental singing.
### Performance The concerto delle donne transformed the musica secreta. In the past, members of the audience would perform, and performers would become audience members. During the ascendancy of the concerto delle donne the roles within the musica secreta became fixed, as did the roster of those who performed for the Duke's pleasure every night. The performances had a restricted audience; only selected dignitaries and a few courtiers saw the concerto delle donne; one such dignitary may have been the Russian ambassador Istoma Shevrigin, in 1581. The elite, hand-selected audience members favored with admission to performances by the concerto delle donne demanded diversions and entertainment beyond the pleasures of beautiful music alone. During the concerts, members of the concerto's audience would sometimes play cards. Orazio Urbani, ambassador of the Grand Duke of Tuscany, having waited several years to see the concerto, complained that he was forced not only to play cards, distracting him from the performance, but also to simultaneously admire and praise the women's music to their patron Alfonso. After at least one concert, to continue the entertainment, a dwarf couple danced. Alfonso was not as interested in these peripheral entertainments, and in one instance excused himself from the party to go sit under a tree to listen to the ladies, and follow along with the madrigal texts and musical scores, including embellishments, which were made available to listeners. ## Influence While they were neither the first nor only female musicians in Ferrara, the concerto delle donne was a revolutionary musical establishment that helped effect a shift in women's role in music; its success took women from obscurity to "the apex of the profession". Women were openly brought to court to train as professional musicians, and by 1600, a woman could have a viable career as a musician, independent of her husband or father. New women's ensembles inspired by the concerto delle donne resulted in more positions for women as professional singers and more music for them to perform. The concerto delle donne contested the viewpoint of some contemporaries that women were unfit to achieve noteworthy deeds. Although Alfonso did not publicize the composed music and the court was dissolved in 1597, the musical style which was inspired by the concerto delle donne spread throughout Europe, and remained prominent for almost fifty years. The concerto delle donne was so influential that other courts developed similar concerti and it became a cliché of northern Italian courts; having one was a sign of prestige. It heavily influenced the development of the madrigal and eventually the seconda pratica. The group brought Alfonso and his court international prestige, as the ladies' reputation spread throughout Italy and southern Germany. It functioned as a powerful tool of propaganda, projecting an image of strength and affluence. Having seen the concerto delle donne in Ferrara, Caccini created a rival group made up of his family and a pupil. This ensemble was sponsored by the Medici, and traveled as far abroad as Paris to perform for Marie de' Medici. Francesca Caccini had much success composing and singing in the style of the concerto delle donne. Rival groups were planned in Florence by the Medici, in Rome by the Orsini, and in Mantua by the Gonzaga. There was even a rival group in Ferrara based in the Castello Estense, the very palace where the concerto delle donne performed.
This group was formed by Alfonso's sister Lucrezia d'Este, Duchess of Urbino. She had lived at the Este court since 1576, and shortly after Margherita's marriage to Alfonso in 1579, Alfonso and his henchmen killed Lucrezia's lover. Lucrezia was unhappy about being replaced as the matron of the house by Margherita, and upset by the murder of her lover, leading to her desire to be separate from the rest of her family during her evening entertainments. The success of the concerto delle donne also led to the increased professionalization of court music. Barbara Strozzi was among the last composers and performers in this style, which by the mid-seventeenth century was considered archaic. At least one instrument used by the concerto delle donne, the harp L'Arpa di Laura in the Galleria Estense art gallery, has become famous.
11,879,599
Ba Cụt
1,165,209,066
Vietnamese military leader (c. 1923–1956)
[ "1923 births", "1956 deaths", "Executed Vietnamese people", "Hòa Hảo", "People executed by South Vietnam", "People executed by Vietnam by decapitation", "People executed by guillotine", "People from An Giang Province", "Vietnamese Buddhists", "Vietnamese military personnel" ]
Lê Quang Vinh (c. 1923 – 13 July 1956), popularly known as Ba Cụt, was a Vietnamese military commander of the Hòa Hảo religious sect, which operated from the Mekong Delta and controlled various parts of southern Vietnam during the 1940s and early 1950s. Ba Cụt and his forces fought the Vietnamese National Army (VNA), the Việt Minh, and the Cao Đài religious movement from 1943 until his capture in 1956. Known for his idiosyncrasies, he was regarded as an erratic and cruel leader who fought with little ideological purpose. His sobriquet came from the self-amputation of his left index finger (although it was erroneously reported that it was his middle finger, the "third cut finger"). He later swore not to cut his hair until the communist Việt Minh were defeated. Ba Cụt frequently made alliances with various Vietnamese factions and the French. He invariably accepted the material support offered in return for his cooperation, and then broke the agreement—nevertheless, the French made deals with him on five occasions. The French position was weak because their military forces had been depleted by World War II, and they had great difficulty in re-establishing control over French Indochina, which had been left with a power vacuum after the defeat of Japan. In mid-1955, the tide turned against the various sects, as Prime Minister Ngô Đình Diệm of the State of Vietnam and his VNA began to consolidate their grip on the south. Ba Cụt and his allies were driven into the jungle, and their position was threatened by government offensives. After almost a year of fighting, Ba Cụt was captured. He was sentenced to death and publicly beheaded in Cần Thơ. ## Early life and background Ba Cụt was born circa 1923 in Long Xuyên, a regional town in the Mekong Delta, in the far south of Vietnam. He was orphaned at an early age and adopted by a local peasant family. Ba Cụt was illiterate and was known from childhood as a temperamental and fiery person. The family's rice paddies were confiscated by a prominent landlord, the father of Nguyễn Ngọc Thơ. This instilled in him a lifelong and fanatical hatred of landowners. Thơ rose to become a leading politician in the 1950s and played a key role in Ba Cụt's eventual capture and execution. An aura of mystery surrounded Ba Cụt during his life, and foreign journalists incorrectly reported that he had severed his finger as part of a vow to defeat the French. As Ba Cụt became more fanatical in his religious beliefs and spent increasing amounts of time with local religious men, his father demanded that he work more in the family's rice fields. A defiant Ba Cụt severed his index finger, which was necessary for work in the rice paddies. Vietnam was a tumultuous place during Ba Cụt's youth, particularly in the Mekong Delta. In 1939, Huỳnh Phú Sổ founded the Hòa Hảo religious movement, and within a year had gained more than 100,000 followers. He drew adherents for two reasons: the prophecies he made about the outbreak of World War II and the conquest of South-East Asia by Japan, which proved to be correct; and his work as a mystical healer—his patients claimed to have been miraculously cured of all manner of serious illnesses after seeing him, when Western medicine had failed. Sổ's cult-like appeal greatly alarmed the French colonial authorities. During World War II, Imperial Japan invaded and seized control of Vietnam from France; its defeat and withdrawal at the end of the war in 1945 left a power vacuum in the country.
The Hòa Hảo formed their own army and administration during the war, and started a de facto state in their Mekong Delta stronghold. They came into conflict with the Cao Đài, another new religious movement, which also boasted a private army and controlled a nearby region of southern Vietnam around Tây Ninh. Meanwhile, in Saigon, the Bình Xuyên organised crime syndicate ruled much of the city through its gangster militia. These three southern forces vied for control of southern Vietnam with the main protagonists: the French, who were attempting to re-establish colonial control across the entire nation; and the communist-dominated Việt Minh, who sought Vietnamese independence. The Hòa Hảo initially engaged in large-scale clashes with the Việt Minh in 1945, but by mid-1946 the two groups had agreed to stop fighting each other and fight the French instead. However, in June 1946, Sổ became estranged from his military leaders and started the Dân Xã (Social Democratic Party). Because of his charisma, the Việt Minh saw Sổ as a threat and assassinated him, leaving the Hòa Hảo leaderless and causing Sổ's military leaders to go their separate ways. The split caused an increase in violence as the various Hòa Hảo factions engaged in conflicts among themselves. At the time, the many groups vying for power—including their respective factions—engaged in alliances of convenience that were frequently broken. Historian David Elliott wrote: "[T]he most important eventual cause of the French decline was the inherently unstable nature of the political alliances they had devised ... [T]he history of the French relations with the Hoa Hao sect is a telling illustration of the pitfalls of short-term political deals between forces whose long-term interests conflict." ## Career Ba Cụt joined the Hòa Hảo militia when it was formed in 1943–44, and became a commander within a year. He was feared by his enemies, and was described as "a sort of lean Rasputin" who claimed to be immortal. According to historian and writer Bernard Fall, "the hapless farmers who were under the rule of the maniacal Ba Cut fared worse [than those under other military leaders], for the latter [Ba Cụt] was given to fits of incredible cruelty and had no sense of public duty." American journalist Joseph Alsop described Ba Cụt as "war-drunk". Ba Cụt was famous for inventing a torture contraption that drilled a steel nail through the victim's ear, a device he used to extort villagers and wealthy landlords to fund his forces. He was said to have arranged "temporary marriages" of his men with village girls. Ba Cụt raised a large amount of funds for the Hòa Hảo and himself personally by charging traders and landlords high prices to stop pirates in the local area. The severed heads of the pirates were subsequently impaled on stakes and put on public display. In 1947, he led his own faction of the sect after its various military leaders pursued their own policies towards the French and Hồ Chí Minh's Việt Minh in the wake of Sổ's death. At the time, France was in a ruinous financial state following World War II and was experiencing great difficulty in its attempts to re-establish control over its colonies. Ba Cụt had only 1,000 men in five battalions at the time, fewer than 5% of Hòa Hảo forces, whereas Trần Văn Soái had 15,000 men. The French tried to maintain their hold with a divide and conquer strategy towards the Hòa Hảo. They coaxed Soái into joining with them and recognised him as the leader of the Hòa Hảo. 
In 1948, Ba Cụt rallied to the French and Soái, but broke away again soon after, relocating to Đồng Tháp Province and resuming his military activities against the French. In 1950, Ba Cụt was involved in a battle with another Hòa Hảo leader, Nguyễn Giác Ngộ. He was defeated and driven from the district of Chợ Mới in February, provoking Soái to attack Ngo. Ba Cụt then moved to Thốt Nốt and began attacking the civilians and the French forces there. The French saw the disagreements as an opportunity to divide the Hòa Hảo and gain an anti-Việt Minh ally, and offered material aid, which Ba Cụt accepted. Ba Cụt repeatedly made treaties with the French colonial forces to fight the Việt Minh in return for arms and money, but he broke his end of the bargain and sometimes fought the Cao Đài instead of the communists. He made five such deals with the French, but he abandoned his military responsibilities each time. It was said that Ba Cụt sometimes broke away with the encouragement of Soái, who was still allied to the French, but nevertheless is believed to have given Ba Cụt weapons to fight the French. The French continued to furnish him with supplies despite his disloyalty and unreliability because they lacked the personnel to patrol all of Vietnam but had spare equipment. Some historians have claimed Ba Cụt's anti-French activities were not taken seriously as he was able to pass through French checkpoints without incident. There are also reports that he was accompanied by French intelligence agents during periods when he was nominally opposed to the French. The other Hòa Hảo commanders generally had the same general outlook as Ba Cụt; they were stridently opposed to the Việt Minh due to Sổ's assassination, and sometimes fought alongside and received supplies from the French, but at times they lapsed into apathy and refused to attack. The most notable instance of Ba Cụt's abandoning the fight against the Việt Minh came in mid-1953. At that time, his forces had been helping to defend the regional Mekong Delta town of Mỹ Tho, but the French decided to transfer more of the military power to their more mainstream allies, the Vietnamese National Army (VNA). As the French tried to undermine his position, tensions with Ba Cụt increased. On 25 June, the Hòa Hảo leader ordered his men to evacuate their French-supplied bases; they took their weapons with them and razed the camps. Ba Cụt then withdrew his forces from a string of military posts in the Plain of Reeds and retreated to Châu Đốc in the extreme south of the country. As a result, the French-aligned presence in the Mekong Delta was severely dented and the Việt Minh made substantial gains in the area. Eventually, the French defeat at Điện Biên Phủ in May 1954 signaled the end of French Indochina. When the Geneva Conference in July 1954 ended the First Indochina War, it handed North Vietnam to Hồ Chí Minh's Việt Minh, and the south to the State of Vietnam. To reunify the country, national elections were scheduled for 1956, following which the French would withdraw from Indochina. The partition of Vietnam angered Ba Cụt and he vowed not to cut his hair until the nation was reunified. Having fought against the Việt Minh since 1947, Ba Cụt's principal criticism of Prime Minister Ngô Đình Diệm's State of Vietnam government stemmed from his belief that Diệm had been too passive in rejecting the partition, and that half of the country should not have been yielded to the communists. 
In mid-1954, General Nguyễn Văn Hinh, the head of the State of Vietnam's VNA, announced that he did not respect the leadership of Prime Minister Diệm, and vowed to overthrow him. The coup never materialised and Hinh was forced into exile, but not before appointing Ba Cụt to the rank of colonel in the VNA in an attempt to undermine Diệm, as the Hòa Hảo warlord was openly contemptuous of the prime minister. In August, Ba Cụt and his 3,000 men broke from the VNA and left their Thốt Nốt base for the jungle, and fought against those who had briefly been their comrades; this put him at odds with most Hòa Hảo leaders, who accepted government payments to integrate their forces into the VNA. Operation Ecaille, the initial military offensive by the VNA against Ba Cụt, was a failure, possibly because the details of the planned attack on his forces were leaked to him by Soái, a Hòa Hảo member of the National Defence Committee. During the transition period between the signing of the Geneva Accords and the planned reunification elections, South Vietnam remained in chaos as the VNA tried to subdue the remaining autonomous factions of the Hòa Hảo, Cao Đài, and Bình Xuyên militias. In early 1955, Ba Cụt was wounded in a disputed incident during a battle with the Cao Đài forces of Trình Minh Thế, which followed a dispute over control of the That Son region. Thế claimed to have tried to initiate peace talks with Ba Cụt but received no reply, so he decided to capture his rival, sending some of his militant disciples to infiltrate Ba Cụt's forces and seize the Hòa Hảo leader. When they located Ba Cụt and surrounded him, he refused to surrender and instead tried to shoot his way out. Ba Cụt was severely wounded by a bullet that penetrated his chest. It seemed that he would die, but a French Air Force helicopter flew in and airlifted him to a colonial hospital. He recovered, and in the interim the fighting stopped. Another account claims the two military leaders had been on good terms and had exchanged diplomatic missions, but that the skirmish was caused by one of Ba Cụt's aides addressing the envoy in an abrasive and rude manner, and that the injuries were minor. Yet another account holds that the reaction by Thế's envoy was premeditated and that the claim that the firing was in response to rudeness was merely a cover for an assassination attempt. According to this theory, Thế, whose units were then being integrated into Diệm's VNA, had given orders to target Ba Cụt. This was allegedly done on the orders of CIA agent Edward Lansdale, who was trying to help secure Diệm in power at the time. Lansdale had reportedly failed in an earlier attempt to bribe Ba Cụt to cease his activities. By this time, with France preparing to withdraw from Indochina, senior French officers had begun to undermine Diệm's leadership and his attempts to stabilise South Vietnam. The VNA later implicated the French in the organisation of weapons air drops to Ba Cụt, prompting a protest from Diệm's government. Diệm complained to a French general, alleging that Ba Cụt's men were using French equipment that was of higher quality than that given to the VNA. The Hòa Hảo accused Diệm of treachery in his negotiations with various groups. They charged that the prime minister had integrated Thế's forces into the VNA in return for allowing them to attack Ba Cụt with the aid of the VNA, and that this part of the deal had been kept secret. 
They warned that other Hòa Hảo leaders who had stopped fighting could join Ba Cụt, and appealed to Diệm's U.S. sponsors. In response, Ba Cụt ambushed a VNA unit in Long Mỹ, killing three officers and injuring some thirty men. ## War with Diệm In 1955, Diệm tried to integrate the remaining Hòa Hảo armies into the VNA. Ba Cụt was one of four Hòa Hảo military leaders who refused the government offer on 23 April, and continued to operate autonomously. At one stage, the Cao Đài, Hòa Hảo and Bình Xuyên formed an alliance called the United Front, in an attempt to pressure Diệm into handing over power; Ba Cụt was named senior military commander. However, this had little meaning as the various units remained autonomous of each other; the United Front was more a showpiece than a means of facilitating coordinated action, and did little to strengthen the military threat to Diệm. The leaders were suspicious of one another and often sent subordinates to meetings. Initially, American and French representatives in Vietnam hoped that Diệm would take up a ceremonial role and allow the sect leaders—including Ba Cụt—to hold government positions. However, Diệm refused to share power and launched a sudden offensive against Ba Cụt in Thốt Nốt on 12 March, shelling the area heavily. The battle was inconclusive and each side blamed the other for causing instability and disrupting the situation. Diệm then attacked the Bình Xuyên's Saigon headquarters in late April, quickly crushing them. During the fighting, the Hòa Hảo attempted to help the Bình Xuyên by attacking towns and government forces in their Mekong Delta heartland. Ba Cụt's men, who had also been angered by the recent arrest of some colleagues, blockaded the Mekong and Bassac rivers and laid siege to various towns, including Sa Đéc, Long Xuyên and Châu Đốc, stifling the regional economy. The Hòa Hảo shut down several important regional roads and stopped the flow of agricultural produce from the nation's most fertile region into the capital, causing food prices to rise by 50% as meat and vegetables became scarce. Ba Cụt then attacked a battalion of VNA troops south of Sa Đéc. Soon after, his men retreated to a Hòa Hảo citadel on the banks of the Bassac. After reinforcing their base, the Hòa Hảo proceeded to fire mortars across the water into the city of Cần Thơ, which stood on the opposite side of the river. During this period, the United Front publicly accused Diệm of trying to bribe Ba Cụt with 100 million piasters, to which the Hòa Hảo responded with a series of attacks on outposts and the blasting of bridges. With the Bình Xuyên vanquished, Diệm turned his attention to conquering the Hòa Hảo. As a result, a battle between government troops led by General Dương Văn Minh and Ba Cụt's men commenced in Cần Thơ on 5 June. Five Hòa Hảo battalions surrendered immediately; Ba Cụt and the three remaining leaders had fled to the Cambodian border by the end of the month. Having surrendered his forces, Ngộ excoriated Soái and Ba Cụt, claiming that their activities were inconsistent with Hòa Hảo religious practices, and accused them of fighting alongside the communists. The soldiers of the three other leaders eventually surrendered, but Ba Cụt's men continued to the end, claiming loyalty to the Emperor Bảo Đại. Diệm responded by replacing the officers of Bảo Đại's personal regiments with his own men and used the royal units to attack Ba Cụt's rebels near Hà Tiên and Rạch Giá, outnumbering the Hòa Hảo by at least a factor of five. 
Knowing that they could not defeat the government in open conventional warfare, Ba Cụt's forces destroyed their own bases so that the VNA could not use their abandoned resources, and retreated into the jungle. Ba Cụt's 3,000 men spent the rest of 1955 evading 20,000 VNA troops who had been deployed to quell them, even though a bounty of one million piasters had been put on Ba Cụt's head. He scattered trails of money in the jungle, hoping to distract his pursuers, but to no avail. The communists claimed in a history written decades later that Ba Cụt had tried to forge an alliance with them, but that talks broke down a few months later. Despite his weak military situation, Ba Cụt sought to disrupt the staging of a fraudulent referendum that Diệm had scheduled to depose Bảo Đại as head of state. Ba Cụt distributed a pamphlet condemning Diệm as an American puppet, asserting that the prime minister was going to "Catholicize" the country; the referendum was partly funded by the U.S. government and various Roman Catholic organisations. Diệm had strong support from American Roman Catholic politicians and the powerful Cardinal Francis Spellman, while his own elder brother, Pierre Martin Ngô Đình Thục, was the Archbishop of Huế. Ba Cụt presciently noted that the referendum was a means "for Diem to gather the people from all towns and force them to demonstrate one goal: to depose Bao Dai and proclaim the puppet Diem as the chief-of-state of Vietnam." On the day of the poll, Ba Cụt's men prevented voting in the border regions which they controlled, and ventured out of the jungles to attack polling stations in Cần Thơ. Despite that disruption, Diệm was fraudulently credited with more than 90% support in Hòa Hảo-controlled territory, and a near unanimous turnout was recorded in the area. These results were replicated across the nation, and Diệm deposed Bảo Đại. Eventually, Ba Cụt was surrounded, and sought to make a peace deal with the Diệm government to avoid being taken prisoner. Ba Cụt sent a message to Nguyễn Ngọc Thơ, the public official who oversaw the civilian side of the campaign against the Hòa Hảo, asking for negotiations so that his men could be integrated into mainstream society and the nation's armed forces. Thơ agreed to meet Ba Cụt alone in the jungle, and despite fears that the meeting was a Hòa Hảo trap, he was not ambushed. However, Ba Cụt began asking for additional concessions and the meeting ended in a stalemate. According to historian Hue-Tam Ho Tai, Ba Cụt's lifelong antipathy towards Thơ's family influenced his behaviour during his last stand. Ba Cụt was arrested by a patrol on 13 April 1956, and his remaining forces were defeated in battle. Contemporary political commentators based in France and Vietnam saw his capture as the death knell for domestic military opposition to President Diệm, while U.S. Embassy official Daniel Anderson speculated that the defeat of "the most able and spectacular leader" of the sects would lead to a collapse in non-communist armed opposition. ## Trial and execution Initially, American commentators and observers thought that Diệm might try a conciliatory approach and integrate Ba Cụt into the mainstream to increase the appeal of his government, rather than punish the Hòa Hảo leader. They felt that Ba Cụt had a high level of military skill and popular appeal that could be used in favour of the government, citing his colourful "Robin Hood" image as an attraction for the rural populace. U.S. 
officials were also worried that a harsh punishment such as the death penalty could provoke an anti-government backlash, and that it could be exploited by other opposition groups. However, Diệm saw Ba Cụt's conduct as contrary to Vietnamese values of struggle and self-sacrifice and felt that strong measures were required. Diệm's government put Ba Cụt on trial for treason, under Article 146 of the Military Code of the Republic of Vietnam. Diệm spoke out, accusing Ba Cụt of having rallied to and defected from the central government four times between 1945 and 1954, and claiming that at his peak in mid-1954 Ba Cụt had commanded 3,500 troops armed with 3,200 firearms. Ba Cụt was also accused of collaborating with the communists. The government submitted that the charge of treason was established by a series of attacks on VNA personnel, officers and vehicles from July 1954 until Ba Cụt's capture. The government prosecutor sought the death penalty and tendered petitions signed by residents of the Mekong Delta and southwestern Vietnam calling for the military destruction of Ba Cụt's militants. However, according to the historian Jessica Chapman, these petitions were organised by the government and heavily publicised in the Diệm-controlled media, and were not representative of public opinion. During the proceedings, Ba Cụt theatrically removed his shirt so that the public gallery could see how many scars he had suffered while fighting the communists. This, according to him, demonstrated his devotion to Vietnamese nationalism. He challenged any other man to show as many scars. However, the Diệmist judge was unimpressed. Ba Cụt was found guilty of arson and multiple murders and sentenced to death on 11 June. An appeal was dismissed on 27 June. On 4 July, Ba Cụt was also found guilty in a military court and sentenced to death "with degradation and confiscation of his property". It then fell to Diệm to consider a plea for clemency. Diệm rejected the plea and ordered the Justice Minister to put the orders for execution in place. On the very same day, a Hòa Hảo lawyer lodged an appeal against all of the verdicts to the Supreme Appeals Court in Saigon, but the submissions were rejected in a matter of hours. The Hòa Hảo reacted strongly, denouncing the legal verdicts as "shameful and unjust". The Dân Xã issued a statement describing the verdict and death penalty as motivated by spite and unsupported by evidence. Ba Cụt's defence counsel said the trial set a bad precedent for South Vietnam's fledgling legal system and questioned the integrity of the process. He claimed that VNA troops had engaged in mass rape and plunder of local civilians in their final push against Ba Cụt, and accused the Diệm regime of double standards in not investigating and prosecuting these alleged incidents. He claimed that South Vietnam had "no democracy and no freedom" and "only shamelessness and foolishness", and said that members of the Hòa Hảo would continue to resist the Saigon administration politically and militarily. In addition, Diệm's adviser, Colonel Edward Lansdale from the CIA, was one of many who protested against the decision. Lansdale felt that the execution would tarnish Diệm—who had proclaimed the Republic of Vietnam (commonly known as South Vietnam) and declared himself President—and antagonise Ba Cụt's followers. Ngô Đình Nhu, Diệm's younger brother and chief adviser, ruled out a reprieve as the army, particularly Minh, opposed any clemency. 
Some sections of the southern public, however, were sympathetic to Ba Cụt, who was compared to a character from the Wild West. Ba Cụt was publicly guillotined at 5:40 am on 13 July 1956, in a cemetery in Cần Thơ. A crowd numbering in the hundreds, including members of Diệm's National Assembly, Minh, regional officials and both domestic and overseas journalists, witnessed the beheading. Anderson believed the guillotine was used, rather than the firing squad normal for military executions, to emphasise that Ba Cụt's actions were being portrayed as common crimes rather than as political opposition. Chapman said that the dual military and civilian trials indicated that Diệm viewed any opposition activities not only as politically unacceptable but also as crimes related to bad character. Ba Cụt's body was later cut into small pieces, which were then buried separately. Some followers, led by a hardcore deputy named Bảy Đớm, retreated to a small area beside the Cambodian border, where they vowed not to rest until Ba Cụt was avenged. Many of his followers later joined the Việt Cộng—the movement that succeeded the Việt Minh their leader had fought—and took up arms against Diệm.
1,348,409
Percy Chapman
1,158,327,331
English cricketer
[ "1900 births", "1961 deaths", "A. E. R. Gilligan's XI cricketers", "Alumni of Pembroke College, Cambridge", "Berkshire cricketers", "Cambridge University cricketers", "Cricketers from Reading, Berkshire", "England Test cricket captains", "England Test cricketers", "English cricketers", "English cricketers of 1919 to 1945", "Free Foresters cricketers", "Gentlemen cricketers", "Kent cricket captains", "Kent cricketers", "L. H. Tennyson's XI cricket team", "Marylebone Cricket Club Australian Touring Team cricketers", "Marylebone Cricket Club South African Touring Team cricketers", "Marylebone Cricket Club cricketers", "Minor Counties cricketers", "North v South cricketers", "People educated at Uppingham School", "Wisden Cricketers of the Year" ]
Arthur Percy Frank Chapman (3 September 1900 – 16 September 1961) was an English cricketer who captained the England cricket team between 1926 and 1931. A left-handed batsman, he played 26 Test matches for England, captaining the side in 17 of those games. Chapman was appointed captain for the final, decisive Test of the 1926 series against Australia; under his captaincy, England defeated Australia to win the Ashes for the first time since 1912. An amateur cricketer, Chapman played Minor Counties cricket for Berkshire and first-class cricket for Cambridge University and Kent. Never a reliable batsman, Chapman nevertheless had a respectable batting record. He could score runs very quickly and was popular with spectators. As a fielder, contemporaries rated him extremely highly. Although opinions were divided on his tactical ability as a captain, most critics accepted he was an inspirational leader. Born in Reading, Berkshire and educated at Uppingham School, Chapman established a reputation as a talented school cricketer and was named one of Wisden's schoolboy Cricketers of the Year in 1919. He went to Pembroke College, Cambridge and represented the University cricket team with great success; his fame reached a peak when he scored centuries against Oxford University and in the Gentlemen v Players match within the space of a week. Chapman made his Test debut in 1924, although he had yet to play County Cricket. Having qualified for Kent, he was the surprise choice to take over from Arthur Carr as England captain in 1926. He achieved victory in his first nine matches in charge but lost two and drew six of his remaining games. Perceived tactical deficiencies and possibly growing concerns over his heavy drinking meant that Chapman was dropped from the team for the fifth Test against Australia in 1930. He captained England on one final tour in 1930–31, after which he never played another Test. After he assumed the Kent captaincy in 1931, his career and physique declined until he resigned the position in 1936; he retired altogether in 1939, by which time he was drinking heavily. Chapman's fame as a cricketer made him a popular public figure; he and his wife, whom he married in 1925, were well known figures in fashionable society and their appearances were followed closely in the press. Outside of cricket, he worked for a brewery. In his later years, Chapman increasingly suffered from the effects of alcoholism and was often seen drunk in public. He and his wife divorced in 1942; he spent his final years, mainly alone, suffering from depression, arthritis and a continued dependence on alcohol. Following a fall at his home and a subsequent operation, Chapman died in 1961, aged 61. ## Early life Chapman was born on 3 September 1900 in Reading, Berkshire, the son of Frank Chapman, a schoolteacher, and his wife Bertha Finch. Chapman's father encouraged him to play cricket and coached him personally. Chapman was first educated at his father's preparatory school, Fritham House, and by the age of eight was in the school's first eleven. In September 1910, he joined Oakham School and scored his first century, dominating the cricket and football teams. From 1914 to 1918, he attended Uppingham School. Although his academic performance was undistinguished, he soon established a cricketing reputation. By 1916, he was in the Uppingham first team; he achieved second place in the school's batting averages, bringing him to the attention of the wider public. 
Chapman improved his record in 1917, scoring 668 runs at an average of 111.33; he hit two fifties, two centuries and a double century in his last five innings. In 1918, Chapman scored 472 runs at 52.44 and took 15 wickets; the following year, he captained the team, scored 637 runs at an average of 70.77 and took 40 wickets. As a consequence of his achievements, he was chosen as one of the Cricketers of the Year for 1919 in Wisden Cricketers' Almanack. In both 1918 and 1919 he was selected for prestigious school representative matches at Lord's Cricket Ground; although his weak defensive play drew comment, he was regarded as one of the most promising cricketers of his generation when he left Uppingham in 1919. ## University cricket In 1919, Chapman entered Pembroke College, Cambridge. He failed in two trial games, organised prior to the 1920 cricket season to inform the selection of the Cambridge team, and despite his reputation, was omitted from the University's opening first-class match against Essex. But on the day of the match, a player withdrew from the Cambridge team and Chapman replaced him. Making his first-class debut on 15 May 1920, he scored 118 in a rapid innings and kept his place in the team for the remainder of the season. After a century and two fifties, he was selected for the University Match against Oxford. Chapman scored 27 in this final game of the university season to aggregate 613 runs at an average of 40.86, second in the Cambridge batting averages. Unusually for someone in their first year of University cricket, he was subsequently selected for the prestigious Gentlemen v Players match at Lord's. Although not particularly successful with the bat, critics singled him out for his effective fielding. During August, he played second-class Minor Counties cricket for Berkshire as an amateur and headed the team's batting averages; he later appeared in three end-of-season first-class games at the Scarborough Festival where he scored 101 in a Gentlemen and Players game against a bowling attack containing three internationals. In all first-class matches in 1920, Chapman scored 873 runs at 39.68. In 1921, Chapman averaged over 50 for the University and scored three centuries, although his growing reputation meant some critics felt he had underachieved. He once again played in the University match against Oxford, and for the Gentlemen against the Players, and impressed commentators. Some critics suggested he, along with other promising University players, should play for England; the Test side were in the middle of a series against Australia which was lost 3–0, in the course of which an unusually large number of players were selected. Chapman once more appeared for Berkshire in August, scoring 468 runs and taking 19 wickets. At the end of the season, he was selected by Archie MacLaren in a match at Eastbourne, playing for an all-amateur non-representative England team against the undefeated Australian touring team. In a match which became famous in later years, MacLaren's team became the first to defeat the tourists, although Chapman was not successful personally. Chapman finished the season with 954 runs at 39.75. That winter, The Cricketer magazine named Chapman as a young cricketer of the year. However, at the beginning of the 1922 season, his form was so poor that critics suggested leaving him out of the University Match. He had scored 300 runs from 14 innings, but retained his place partially on the strength of his fielding. 
After Cambridge batted very slowly on the first day, Chapman attacked the bowling on the second morning to score 102 not out. Cambridge won easily, concluding Chapman's cricket at the university, but his innings impressed critics to the extent that he was again selected for the Gentlemen v Players match at Lord's. There, he scored 160 and shared century partnerships with Arthur Carr and Frank Mann. Chapman earned praise for his aggression and his stroke-play on the off side. The Times described it as "one of the great innings in the history of the game". Shortly after this, Sydney Pardon wrote in The Times: "In the cricket field the most interesting figure at the moment is, beyond all comparison, Mr. A. P. F. Chapman. A fortnight ago we were all lamenting his ill-success this season and wondering whether he would ever do justice to his great gifts and fulfil the hopes entertained of him in 1920. Most effectually he has put his critics to shame ... he is in such a position that if an England eleven had to meet Australia next week he would be picked at once with acclamation." Prior to this, only R. E. Foster had scored centuries in both the University Match and the Gentlemen v Players match in the same year. Chapman ended his season by scoring 805 runs and taking 19 wickets for Berkshire, and playing in festival games. He aggregated 607 runs at 33.72 in first-class matches for the season. Chapman was popular at Cambridge and enjoyed his time there. He took part in a variety of social engagements and became involved in other sports. These included fives, tennis, rugby union, golf and football. He captained Pembroke College at rugby and was close to playing for the full university side. Chapman continued to play rugby for Berkshire Wanderers until he was nearly 30 years old. Also for Pembroke, he played as goalkeeper in the football team and might have played for the university at hockey had he taken the sport seriously. In later years, he also displayed proficiency at tennis, in which critics thought he could have reached a high standard if motivated to do so, and golf. ## Cricket career in the mid-1920s ### MCC tour to Australia and New Zealand During the English winter of 1922–23, the Marylebone Cricket Club (MCC) selected a team to tour Australia and New Zealand. This side, captained by Archie MacLaren and composed mainly of amateurs, was not particularly strong and contained several players chosen for their social standing rather than cricketing ability. The team played four first-class games in Australia against state teams; the first was drawn and the others were lost. After scores of 75 and 58 against Western Australia, Chapman played consecutive innings of 53, 73 and 69 against South Australia and Victoria, followed by 100 in the most eagerly awaited match of the tour against a strong New South Wales side. The press and public praised his attacking batting and his fielding, although Frank Iredale, a former Test cricketer, noticed some flaws in his technique. When the team moved on to New Zealand, after an uncertain start Chapman scored 533 runs at an average of 48.45, including two centuries. The tourists returned to Australia for the last leg of the tour; Chapman scored 91 against New South Wales and 134 in 142 minutes against South Australia. In all the Australian games, he totalled 782 runs at 65.16; in all the matches on tour he had 1,315 runs at an average of 57.15. 
### Qualifying for Kent When Chapman returned to England, he began to work for a brewery based in Kent, H & G Symonds; his residence in that county allowed him to qualify for Kent County Cricket Club. There were few opportunities for Chapman to appear in first-class cricket until he qualified. His cricket was mainly restricted to club level in 1923, with some further games for Berkshire. In addition, he played 12 first-class games for a variety of teams; he was selected for the Gentlemen v Players matches at Lord's and The Oval, scoring 83 in the latter game, and played in two trial matches for players on the verge of England selection, although no Tests were played that year. In total, he scored 615 first-class runs at 29.28. The focus of attention during the 1924 season was selection of a team to contest the Ashes during a Test-playing tour of Australia the following winter. Critics regarded Chapman as a certainty for the team. Continuing to play as an amateur, he made his first appearance for Kent in a non-Championship match, as he was still qualifying, and was very successful in early season club matches. That summer, England played South Africa in a Test series and Chapman was selected for a trial game before the first Test. He scored 64 not out and 43 for "The Rest", and following the withdrawal of a batsman owing to injury before the first Test, Chapman made his Test debut against South Africa on 14 June. He became one of the few cricketers to represent England while playing for a minor county rather than a team playing in the County Championship. Chapman batted once and scored eight runs; he drew praise from Wisden for an "amazing" catch on the last day as South Africa were heavily beaten. He retained his place for the second Test but did not bat: only four English batsmen were needed in the game which the home side won by an innings. Although selected for the third game, Chapman did not play owing to a motorbike accident. He was not seriously hurt but missed the remainder of the Test series and the Gentlemen v Players game at Lord's. Upon recovering, he returned to play for Berkshire without much success and played several festival games at the end of the season. By this stage, he had already been selected to tour Australia. In the final match of the season, he was selected for "The Rest" to play the County Champions, Yorkshire. He scored 74 in 50 minutes and hit three sixes, two of them from consecutive deliveries from Wilfred Rhodes. This was his highest score of the season, in which he made 561 first-class runs at 31.16. ### Second tour to Australia The MCC team to Australia was led by Arthur Gilligan. In the opening matches, Chapman was cheered by the crowds who remembered his achievements on the last tour, but failed to make any significant scores. His first big innings came against Victoria; he made 72 runs out of 111 scored while he was batting and played a large part in a win for the MCC. Against Queensland in the following match, he scored 80 in 70 minutes and then hit 93 against a representative Australian XI. He was selected for the first four Tests of the five-match series. Batting aggressively, he made several substantial scores but only once passed fifty— in the third Test, he scored 58, his first Test half century. During the same Test, Gilligan strained a muscle while bowling and had to leave the field; Chapman took over as captain. England lost the first three matches, giving Australia an insurmountable lead in the series, but won the fourth. 
Chapman was left out of the side for the final Test. In the series, he scored 185 runs at an average of 30.83, and critics were divided as to his ability and effectiveness. The former Australian captain Monty Noble believed Chapman could be a good batsman if he curbed his aggression, but The Cricketer considered his technique to be faulty. Wisden did not judge Chapman a complete failure and noted that he "made useful scores at times". In all first-class games, Chapman scored 625 runs at 34.72. Although Chapman had a mixed time on the cricket field, the tour was a success for him socially. Now qualified to play county cricket for Kent, Chapman played only four times in the County Championship in 1925, preferring to establish himself in his new career in the brewery trade. As he was not sufficiently wealthy to play cricket full-time as an amateur, his business commitments frequently restricted his appearances on the cricket field. During his limited first-class appearances in 1925, he scored 207 runs at 25.87 and Wisden said that he "did nothing out of the common". ## England captain ### Ashes series of 1926 By the beginning of the 1926 season, Chapman was no longer the star of English cricket. Although still respected for his earlier achievements, he had a modest record in Test and first-class cricket. During the season, the Australians toured England for another Ashes series. Chapman did not play any early season games and his first match for Kent was against the touring side. He scored 51, his first first-class fifty since January 1925. A week later, he scored 159 in the County Championship, bringing him back into contention for an England place, then scored 89 in a Test trial match played against the Australians. Chapman's appearances for Kent were sporadic for the rest of the season, but he scored 629 runs in his nine County Championship games at an average of 57.18 to lead the Kent averages. He also scored a century for the Gentlemen against the Players at Lord's. Early in the season, Arthur Carr was named as England captain for the start of the series; Carr was a popular choice and the only other serious contender at the time was Percy Fender. Chapman played in two of the three trial matches and was chosen for the first Test but did not bat in a match ruined by rain. The second Test was drawn, but Chapman scored fifty. Australia dominated most of the third Test but England saved the game; Chapman scored 15 and 42 not out in the match. However, Carr's tactical approach during the match was heavily criticised and he dropped a crucial catch on the first morning. Chapman was omitted from the side for the fourth Test, but fielded as a substitute when Carr became ill during the game. As the first four matches of the series were drawn, the final Test, played at The Oval, was decisive. Aware that England had beaten Australia only once in 19 matches, the selectors made several changes to the team; Chapman, at the time fourth in the national batting averages, replaced Carr as captain. This decision was controversial; the press favoured Carr, particularly as Chapman was young, unproven as a captain and not fully established in the team. When the match began on 14 August, Chapman won the toss and decided that England should bat first. When it was his turn to bat, he was given a good reception by the crowd. During his innings, Wisden noted, Chapman "hit out in vigorous fashion", but once he was dismissed for 49, the remaining batsmen were out quickly, leaving England with a disappointing total of 280. 
Australia replied with 302. On a pitch affected by rain, England then scored 436, mainly because of a large partnership between opening batsmen Jack Hobbs and Herbert Sutcliffe. Australia needed to score 415 to win, which was unlikely given the condition of the pitch. The team were bowled out for 125, and at least one of Chapman's tactical decisions resulted in Australia losing a wicket. Wisden reported that "not a catch was missed nor was a run given away, the whole England side rising gallantly to the occasion. Naturally a scene of tremendous enthusiasm occurred at the end, the crowd swarming in thousands in front of the pavilion, and loudly cheering the players, both English and Australian." The correspondent also commented "Chapman ... despite lack of experience in leading a first-class team in the field, turned out a very happy nomination for the post of captain, the young amateur, for the most part, managing his bowling with excellent judgement, and in two or three things he did, showing distinct imagination." Throughout the match, Chapman chose to follow his own tactics rather than rely on the veteran players in the team for advice. In the series, he scored 175 runs at 58.33. ### Aftermath and success Following the match, Chapman was lauded as a cricketing hero, and among those who sent congratulatory messages were George V and Prime Minister Stanley Baldwin. In all first-class matches in the season, he scored 1,381 runs at an average of 51.14, the first time he had passed four figures in a season. In his history of the England cricket captaincy, Alan Gibson notes that the controversy over Chapman's appointment was soon forgotten following his success. He writes: "English cricket had a new hero who looked the part ... Every selector was a champion!" In its summary of the 1927 season, Wisden named him as Kent's best batsman and noted an improvement in his defensive technique. Against Lancashire, who retained the County Championship, he scored 260 in three hours' batting, the highest score of his career. The Lancashire bowling attack included former Australian Test bowler Ted McDonald, regarded as the fastest bowler in the world at the time and feared by most county batsmen. Many critics praised Chapman's innings as one of the best ever played. He was selected to lead the Gentlemen against the Players at Lord's for the first time, and led representative sides in two of the three Test trials held that season; the press judged his captaincy to be good. He totalled 1,387 runs in first-class games at an average of 66.04, the highest aggregate and average of his career. The Kent captaincy became available at the end of the season, but Chapman was not appointed; according to Chapman's biographer, David Lemmon, he was probably approached but was unable to dedicate the necessary time to the position. Chapman was unavailable for the Test series in South Africa in the winter of 1927–28, but was a certainty to lead the MCC team to Australia in 1928–29. The selectors wished him to play more regularly, so he played more often in 1928 than any other season. He began in good form, but was never as effective as in 1927. Although his captaincy continued to be highly regarded, there were concerns in the press over his increasing weight, although these were offset by his impressive fielding in that season's Tests. He captained England to a 3–0 series win over West Indies, who were playing their first Test matches, and scored one fifty. In total, he scored 967 first-class runs at 37.19. 
As expected, Chapman was named as captain for the Australian tour. The MCC touring team was regarded as a strong one by commentators; the only controversy was the omission of Frank Woolley, which was not fully explained. Rumours in later years said that Chapman was responsible for leaving Woolley out because he was jealous of his county teammate, but Lemmon regards this as unlikely. ### Tour of Australia 1928–29 According to Douglas Jardine's biographer, Christopher Douglas, "[Chapman] hardly put a foot wrong during the tour and, even though he gave Australia their biggest hiding to date, he was and probably remains ... one of the most popular English captains to tour Australia." From the opening games, England followed a strategy of accumulating large totals. For the first Test, to strengthen the team's batting, Chapman and the tour selection committee chose only three specialist bowlers; as the Tests were "timeless"—played to a finish with no time limit—he believed batting to be the key to victory. England batted first and scored 521; Chapman scored 50, but critics believed he should have batted more cautiously. When Australia began their innings, he held a catch from Bill Woodfull in the gully which several observers rated as among the best they had seen. Sydney Southerton, writing of the English fielding, said: "The high note was struck by Chapman himself at Brisbane when, with a catch that will be historic, he dismissed Woodfull ... It is my opinion that catch had a pronounced effect on the course of events in the three subsequent Tests ... [Chapman's fielding] exercised a most restraining influence on the Australian batsmen." Australia were bowled out for 122; Chapman did not ask Australia to follow on but batted again, to the crowd's displeasure, and his batsmen relentlessly built up the England lead. When Chapman became the first captain to declare an innings closed in a timeless Test match, Australia needed 742 to win. On a rain-affected pitch, Australia were bowled out for 66; England's win by 675 runs remains, as of 2016, the largest margin of victory by runs in Tests. Chapman's team won the second Test comfortably after scoring 636 in their first innings, the highest team total in Tests at that time. In the third Test, England began the fourth innings requiring 332 to win on a rain-damaged pitch, a task critics believed impossible. A large opening partnership from Hobbs and Sutcliffe gave England a chance, and Hobbs sent a message to the England dressing room suggesting a tactical change in the batting order. But the team could not find Chapman, who, according to Percy Fender (in attendance as a journalist), spent most of his time socialising with guests in the Ladies' Stand. Consequently, the team followed Hobbs' plan without the approval of the captain. England's batsmen took the total to within 14 of victory when the fourth wicket fell. Chapman came in and batted in an unusual way; after attempting some big shots, he played ultra-defensively, possibly in an attempt to allow Patsy Hendren to reach fifty runs before England won. Hendren was out soon after, and Chapman then tried to hit a six and was caught. The batsmen continued to play recklessly and a further wicket fell to a run out. Douglas describes the end of the match: "Meanwhile, [England batsman George Geary] was quite unruffled by the sudden upsets. He wound up for the next delivery and thumped it through mid-on for 4, bellowing, 'Dammit, we've done 'em!' It was an appropriate way for a side under Chapman to win the Ashes." 
England's victory in the third Test ensured the Ashes were retained, and the team also won the fourth Test to take a 4–0 lead in the series. Up to this time, Chapman had enjoyed a harmonious relationship with the Australian crowds. However, in the match against Victoria which followed the fourth Test, the crowd barracked the MCC team when Chapman brought on Harold Larwood, a fast bowler, to bowl against Bert Ironmonger, the number eleven, a tactic regarded as unsporting. As the team returned to the pavilion, Chapman was insulted by members of the crowd in the midst of a minor scuffle. Possibly influenced by these events, he withdrew from the final Test; illness and his poor form may also have been factors. According to Lemmon, it was suggested in later years that Chapman did not play owing to his heavy drinking. In his absence, Australia won the fifth Test. After the fifth day of play and having played both his innings, Jardine left to catch a boat to India, for reasons which are unclear, and Chapman acted as his substitute in the field. Douglas notes that it looked like England "were trying to pull a fast one by picking their strongest batting side (which meant dropping Chapman) without weakening the fielding (since Chapman was Jardine's substitute)." The Australians agreed to the substitution on the condition that Chapman did not field near the batsmen. In the Tests, Chapman scored 165 runs at 23.57, and in all first-class matches he reached 533 runs and averaged 33.31. Southerton summarised his performance: "Chapman himself began well in batting but in the later matches was too prone to lash out at the off ball and, as the tour progressed, the Australian bowlers discovered his weakness." On his captaincy, Southerton wrote: "Chapman captained the side uncommonly well, improving out of all knowledge as the tour progressed." Socially, Chapman enjoyed the tour; he attended many functions and events; Bill Ferguson, the team scorer, only saw him annoyed once on the tour: when his accustomed drink was not waiting for him at a lunch interval. ### Ashes series of 1930 Following the end of the 1928–29 tour, Chapman did not return to England until July, midway through the cricket season; Jack White and Arthur Carr captained England in his absence. Chapman resumed playing for Kent shortly after his return home but appeared in only seven matches, with a top-score of 28. His season was curtailed when he fell awkwardly while fielding in a match against Sussex at the beginning of August. He also missed the two MCC tours that winter to New Zealand and West Indies, neither of which involved a full-strength team. In 1930, Australia toured England once more. Before the Test series, Chapman was not a unanimous choice among press correspondents; several critics believed he should not be in the team on account of his rapidly increasing weight—former England captain Pelham Warner suggested he needed to lose at least two stone—and concern over his poor batting form. However, Chapman began the season well, impressing commentators with his batting, fielding and captaincy, and was named as England captain for the first Test match. In the first innings, he scored 52 in 65 minutes, and England won the match by 93 runs on the fourth day. The Wisden correspondent wrote: "Chapman, with his resources limited, managed his bowling well and himself fielded in dazzling fashion." This was Chapman's sixth successive victory over Australia and he had won all nine of the Tests in which he was captain. 
However, it was to be his last Test victory. England lost the second Test by seven wickets, and Gibson describes the match as the "turning point in Chapman's fortunes". Wisden observed: "Briefly, the Englishmen lost a match, which, with a little discretion on the last day, they could probably have saved." England scored 425 in their first innings, but Donald Bradman hit 254 runs and Australia reached 729 for six declared. When Chapman came in to bat in the second innings, England still trailed by 163 runs and had lost four wickets—a fifth fell soon after. He attacked the bowling immediately, and shared a large partnership with Gubby Allen. When the latter was out, Chapman began to score even faster. He took England into the lead, hitting out at almost every delivery to reach his only Test century after 140 minutes' batting. Wisden commented: "It was about this time that, with a little care and thoughtfulness, England might have saved the game ... So far from devoting their energies to defence they continued hitting away, adding another 113 runs in an hour and a quarter afterwards but losing their last five wickets." Chapman was finally dismissed for 121, after batting for 155 minutes and striking 12 fours and 4 sixes. England were all out for 375, leaving Australia needing to score 72 runs to win. Although Chapman held a difficult catch from Bradman which was praised by commentators, Australia won comfortably. Chapman's century made him the first batsman to score centuries at Lord's in the University match, in the Gentlemen v Players game and for England in a Test match; only Martin Donnelly later performed a similar feat, though his Test century was scored for New Zealand. As the Gentlemen v Players match ceased in 1962, the feat will never be repeated. In the immediate aftermath of the game, Chapman was praised for his batting; the team and selectors, rather than Chapman, were blamed for the defeat. However, his captaincy and tactics were later criticised, by Pelham Warner among others. In particular, his placement of fielders and his refusal to play defensively were questioned. Gibson notes that historians regard this match as a turning point in Test matches; afterwards, captains became more concerned to avoid defeat rather than follow Chapman's policy of playing entertaining, attacking cricket whatever the result. Chapman's unwillingness to play for a draw was in later years held up as "the last sporting gesture by an England captain". In the third Test, Bradman made the highest individual score in a Test match by scoring 334 out of Australia's 566. Assisted by rain that shortened the available playing time, England drew the match. Chapman scored 45 in his only innings. The fourth Test match was also badly affected by rain which brought about another draw. Chapman now faced further criticism of his captaincy. His field placings were again queried; Warner noted that Chapman's tactics were poor and that he was slow to react to the opposition. According to cricket writer Leo McKinstry, the selectors lost faith in Chapman on account of his inconsistent, risky batting and his increased tactical shortcomings. However, McKinstry also writes that the selectors and other influential members of the cricketing establishment were privately concerned by Chapman's heavy drinking which they felt was affecting his leadership. There were also rumours that he was drunk during some sessions of the fourth Test. 
Following an extended meeting of the selectors, Chapman was left out of the side and replaced as captain by Bob Wyatt. The press were united in attacking the decision, praising Chapman's batting and captaincy while denigrating Wyatt's lack of experience. Gibson observes: "In 1930, despite the occasional criticisms, Chapman's position did not seem in any danger. He was still the popular, boyish, debonair hero. He had been having his most successful series with the bat, and as a close fieldsman England still did not contain his equal. He could not seriously be blamed because the English bowlers could not get Bradman out (though this was perhaps more apparent in retrospect than at the time). Wyatt, though nothing was known against him ... was a figure markedly lacking in glamour." In the final Test, Bradman scored another century and England lost the match and series, although Wyatt played a substantial innings, and Wisden conceded Chapman could have made little difference except as a fielder. The two men remained friends during and after the controversy. In comparing circumstances of Chapman's appointment with those of his replacement by Wyatt, Gibson writes: "In 1926, England won: in 1930, England lost. That is why the echoes took so long to die down and why the selectors remained villains." He concludes that, even though Wyatt did relatively well, "It does seem, after all these years, an odd decision to have taken." In the series, Chapman scored 259 runs at 43.16. In all first-class cricket, he passed four figures for the final time, reaching 1,027 runs at an average of 29.34. ### South Africa tour 1930–31 Already chosen as tour captain before the final 1930 Ashes Test, Chapman led an MCC team to a 1–0 series defeat in South Africa the following winter. Several first-choice players were not selected and the team suffered from injuries and illness. Chapman was popular with the crowds but made a poor start to the tour with the bat until he scored more substantially in the lead-up to the Test series. England lost the opening match of the series by 28 runs and the other four were drawn. Needing to win the final match to level the series, England were frustrated when the start of the match was delayed. Chapman won the toss and chose to bowl on a damp pitch which would have favoured his bowlers. However, the umpires discovered the bails were the wrong size and would not start the game until new ones could be made; in the 20 minutes which were lost, the pitch dried out and England lost much of the advantage of bowling first. Chapman made an official protest before leading his team onto the field. In the series, he scored 75 runs at 10.71, and 471 runs at 27.70 in all first-class games. Wisden observed that "without finding his full powers as a punishing hitter, Chapman occasionally batted well". Socially, the tour was more successful. Chapman was accompanied by his wife, and his parents joined the tour for a time. He took part in many social events and visited several whiskey firms which were associated with his employers in England. Chapman played no further Test cricket; in 26 Tests, he scored 925 runs at an average of 28.90 and held 32 catches. He captained England in 17 matches, winning nine and losing two with the others drawn. Under him the team achieved seven consecutive victories, equalling the English record, which was not surpassed until 2004. His nine victories came in his first nine games as captain. 
## Later career ### Kent captain Although Chapman lost the England captaincy after the South African tour, he became official captain of Kent in 1931, having previously captained the side occasionally. Wisden commented that Chapman "exercised an invigorating influence" on the side. Before Chapman assumed the Kent captaincy, the county team was sharply divided along social lines and the amateur leadership was aloof from and often dismissive of the professional players. Members of the team felt that he improved the atmosphere within the side and made the game enjoyable. Critics and players thought that he was past his best by the time he became captain, and already affected by alcoholism, but Chapman was successful as leader. His fielding remained influential. However, his batting form was poor: in 1931, he scored 662 runs at an average of 18.38. Sections of the press thought he should remain England captain, but he was replaced as Test captain by Jardine, who was not a popular choice; the selectors chose Jardine to exercise more discipline on the team than Chapman had done. At the end of the season, Chapman toured Jamaica in a team captained by Lord Tennyson and scored 203 runs in first-class matches at 33.83. Chapman began the 1932 season in good form and appeared fitter than he had for many seasons. There were further calls in the press for him to captain England. Jardine's captaincy in 1931 left critics unimpressed and C. Stewart Caine, the editor of Wisden, wrote that "the impression appears to be widely entertained that Chapman, were he in [batting] form, would again be given charge of the [England] team." Christopher Douglas believes that the difference between Jardine and Chapman in captaincy style made it harder for the press to accept Jardine. He writes: "Chapman's was just the kind of daredevil approach that is remembered with affection and, even though it was barely a year since he had lost the leadership, his reign was being regarded through rose-coloured specs." However, it is unlikely that the selectors ever considered returning to him. During the season, Chapman scored 951 runs, averaged 29.71, and led Kent to third place in the County Championship for the second year in succession. ### Decline In 1933, he scored 834 runs but his average fell to 21.94 and he never again averaged over 23 in any season in which he played regularly. Owing to his increasing weight and lack of physical fitness, he found batting much harder. As his physique declined, he was unable to produce the same batting feats he had managed previously. In the field, although still catching effectively, his inability to chase the ball meant he fielded closer to the batsmen; he also took fewer catches. In both 1934 and 1935, he averaged around 22 with the bat and scored under 800 runs. In 1935, he scored his final first-class century, against Somerset, having not reached the landmark since 1931. Teammates and observers noticed that in the final years of his career, Chapman frequently left the field during matches, and they suspected he was drinking in the pavilion. Chapman played infrequently in 1936, and the captaincy was shared between him and two others. He was reluctant to bat, to the extent of dropping down the batting order to avoid doing so, and his friends believed that his nerve had gone. At the end of the season, he announced that business commitments forced him to give up the captaincy. 
Over the following three seasons, Chapman played for Kent in three more matches: against the New Zealand touring side in 1937 and in two Championship games in 1938. He also captained a non-representative England XI in a festival game against the New Zealanders in 1937, batting at number ten in the order and scoring 61. His remaining first-class matches were low-profile games against Oxford and Cambridge Universities; he played 13 games in his final three seasons. In his last first-class game, in 1939, he captained MCC against Oxford, scoring 12 and 0. In all first-class cricket, Chapman scored 16,309 runs in 394 matches at an average of 31.97, and held 356 catches. By the time his career ended, his weight had increased even further, and Lemmon believes that he had become an embarrassment to other cricketers. Subsequently, Chapman faded away without much comment. ## Technique and critical judgements Writer Neville Cardus described Chapman as "the schoolboy's dream of the perfect captain of an England cricket eleven. He was tall, slim, always youthful, and pink and chubby of face. His left-handed batting mingled brilliance and grace ... His cricket was romantic in its vaunting energy but classic in shape." While batting, Chapman always tried to attack the bowling; although this meant he made mistakes which resulted in his dismissal, it meant that he could change the course of a game in a short time. Cricket writer R. C. Robertson-Glasgow described him as: "Tall, strong, and lithe, he was a left-handed hitter with orthodox defence, much of which was rendered unnecessary by a vast reach, and an ability to drive good-length balls over the head of mid-off, bowler, and mid-on. His cover-driving, too, was immensely strong." Gibson notes that Chapman's career batting figures were good, but that critics believed that, with his talent, he should have scored more runs. Gibson writes: "When Chapman was going well, he looked quite as good as Woolley [his Kent and England team-mate] at the other end, and in the mid-1920s there was no other English left-hander, possibly no other England batsman at all except Hobbs, of whom that could be said." His increased weight in the 1930s robbed him of confidence and slowed him down to the point where his batting declined. When batting, Chapman usually wore the Quidnuncs cap. Commentators claimed that Chapman was not a subtle captain and lacked tactical astuteness. Even so, his record is better than most others who led England during Chapman's career. Pelham Warner believed that Chapman started well, but that in the later stages of 1930, his tactical sense markedly deteriorated. On the other hand, several of Chapman's contemporaries believed him to be one of the best captains. Arthur Gilligan, one of Chapman's predecessors, considered him to be a model for the role, and Bert Oldfield, who played against Chapman as Australia's wicket-keeper, thought that Chapman possessed an "aptitude" for leadership. Chapman's teams were usually harmonious and his sympathetic handling of his players often brought out the best in them. Writing in 1943, Robertson-Glasgow said: "He knew his men as perhaps no other captain of modern times has known them." Cricket writer E. W. Swanton believes that Chapman's cavalier reputation was misleading in assessing his effectiveness, and that "underlying the boyish facade was both a shrewd cricket brain and the good sense to ask advice from those of greater experience." 
Robertson-Glasgow described Chapman as among the greatest fielders of all time, and The Times observed that "at his best he had been one of the finest fielders ever to play for England". In his earlier years, he fielded in the deep but when he played for Kent and England, he was positioned closer to the batsmen—usually at gully or silly point. The Cricketer commented that his "capacious hands made him a brilliant close-to-the-wicket fielder, and some of his catches were miraculous". In his youth, Chapman bowled quite regularly, but his negative experience bowling for Berkshire lessened his enthusiasm, and he did not take it seriously. ## Personal life ### Marriage and fame During May 1921, Chapman met Gertrude ("Beet" or "Beety") Lowry, the sister of Tom Lowry, a cricketer from New Zealand who played for Cambridge and Somerset and went on to captain his country. The couple met again when Chapman toured New Zealand in 1922–23, and became engaged. At the end of the 1924–25 Australia tour, they married and returned to England together. The wedding was widely reported and until the end of the decade the couple were heavily involved in social events. They were popular guests at functions, and became notable figures in the fashionable society of the upper classes. In 1923, Chapman joined a Kent brewery, H & G Symonds. His wife believed that his choice of a career working in the alcohol trade made his life difficult and contributed to his heavy drinking. The social duties associated with his job also contributed to his increased weight and failing fitness in the later part of his cricket career. Further problems arose through his fame; as he wanted to keep people happy, he drank frequently and attended many social functions. Cricket writer Ivo Tennant believes that Chapman's "taste for conviviality was his undoing". He always appeared happy, but Gibson observes "that is the way some men disguise their unhappiness", and Lemmon suggests that Chapman was seeking acceptance and felt lonely at heart. According to Lemmon, by the end of the Second World War, Chapman was largely living in the past: "mentally he was still in the happy days of University cricket." ### Later struggle E. W. Swanton observes that "from the war onwards [Chapman's] life went into a sad eclipse." In 1942, Chapman was divorced from his wife; according to Lemmon, "Beet had stood much, but there is a point for all relationships beyond which one must not go". She returned to live in New Zealand in 1946. After 1946, Chapman shared a house with the steward of West Hill Golf Club, Bernard Benson, and his health continued to deteriorate. He was frequently observed to be drunk in public, although his appearance and manners remained impeccable; the cricket establishment ignored him, regarding him as an embarrassment, particularly on the occasions he watched matches at Lord's. By the end of his life, he was unable to attend any cricket matches. In addition to his alcoholism, Chapman became increasingly isolated, suffering from loneliness and depression. By the 1950s, he had developed arthritis, probably as a result of his sporting activities. On one occasion in 1955, Chapman was invited to a dinner organised by Kent; he was later discovered in the car park on the bumper of a car in a distressed state and had to be assisted back inside. In September 1961, Chapman fractured his knee when he fell at his home. He was taken to hospital for an operation but died on 16 September 1961.
The newspapers reported that he had been ill for a long time; his former wife later commented that "he must have died a very sad man". Tributes focused on his successes as a cricketer and his appealing personality. Summing up Chapman's life, Gibson writes: "But just as a good end can redeem a sad life, so a good life can redeem a sad end, and he had known his hours, his years of glory." Swanton concluded his obituary of Chapman in 1961: "The elderly and the middle-aged will recall him rather in his handsome sunlit youth, the epitome of all that was gay and fine in the game of cricket."
3,744,098
Hannah Montana
1,173,702,872
American teen sitcom
[ "2000s American musical comedy television series", "2000s American teen sitcoms", "2006 American television series debuts", "2010s American musical comedy television series", "2010s American teen sitcoms", "2011 American television series endings", "American musical television series", "Disney Channel original programming", "Disney controversies", "English-language television shows", "Hannah Montana", "Television series about fictional musicians", "Television series about teenagers", "Television series by It's a Laugh Productions", "Television shows involved in plagiarism controversies", "Television shows set in Malibu, California" ]
Hannah Montana (titled Hannah Montana Forever for the fourth and final season) is an American teen sitcom created by Michael Poryes, Rich Correll and Barry O'Brien that aired on Disney Channel for four seasons between March 2006 and January 2011. The series centers on Miley Stewart (Miley Cyrus), a teenage girl living a double life as famous pop singer Hannah Montana, an alter ego she adopted so she could maintain her anonymity and live a normal life as a typical teenager. Episodes deal with Miley's everyday struggles to cope with the social and personal issues of adolescence while maintaining the added complexities of her secret identity, which she sustains by wearing a blonde wig. Miley has strong relationships with her brother Jackson (Jason Earles) and father Robby Ray (Billy Ray Cyrus), as well as her best friends Lilly Truscott (Emily Osment) and Oliver Oken (Mitchel Musso), who become aware of her secret. Overarching themes include a focus on family and friendships as well as the importance of music and discovering one's identity. The Walt Disney Company commissioned the series after the success of Disney Channel's previous music-based franchises, such as the made-for-television film High School Musical (2006). Hannah Montana was produced by It's a Laugh Productions in association with Poryes's production company, and premiered on Disney Channel on March 24, 2006. A concert film, Hannah Montana & Miley Cyrus: Best of Both Worlds Concert, in which Miley Cyrus performs as Hannah Montana and herself, was released in 2008. The following year, the feature film Hannah Montana: The Movie was released. The series concluded on January 16, 2011, as a result of Cyrus's growing popularity and music career, and her desire to move into more mature acting roles. Hannah Montana is one of Disney Channel's most commercially successful franchises. It received consistently high viewership in the United States on cable television and influenced the development of merchandise, soundtrack albums, and concert tours; however, television critics disliked the writing and depiction of gender roles and stereotypes. Hannah Montana helped launch Cyrus's musical career and established her as a teen idol; after Cyrus began developing an increasingly provocative public image, commentators criticized Hannah Montana as having a negative influence on its audience. The series was nominated for four Primetime Emmy Awards for Outstanding Children's Program between 2007 and 2010; Cyrus won a Young Artist Award for Best Performance in a TV Series, Leading Young Actress in 2008. ## Premise ### Story and characters Miley Stewart is a fourteen-year-old middle school student who appears to live a normal life but has a secret identity, pop singer Hannah Montana, an alias she chose so she could have a private life away from the public spotlight. To conceal her true identity, she wears a blonde wig when she appears as Hannah. Miley's father, Robby Ray Stewart, was a famous country music singer before retiring after his wife's death to focus on raising his two children: Miley and her older brother Jackson. At the start of the series, the family have moved from Tennessee to Malibu, California, to allow Miley to develop her musical career; Robby Ray works as her manager. As her schoolmates idolize Hannah Montana, Miley is often tempted to reveal her secret and assume a celebrity status at school. 
In the pilot episode, Miley's best friend Lilly Truscott uncovers the truth about her alter ego and throughout the first season, Lilly adopts the alias Lola Luftnagle to help protect Miley's secret. Miley later reveals her secret to close friend Oliver Oken, leaving him and Lilly as the only schoolmates she trusts with the secret; he adopts the alias Mike Stanley III. Jackson works for Rico Suave at a local beach food stand; he and Rico often feature in the show's subplots. Miley and her friends begin attending high school at the start of the second season, and in the following season, Lilly and Oliver develop a romantic relationship. In the third season finale, Miley relocates her horse Blue Jeans to California after she feels homesick for Tennessee. The horse is uncomfortable after being moved, and Miley contemplates permanently returning to her hometown. The Stewart family compromise and move out of their house in Malibu to a nearby ranch. In the final season, Miley is faced with extra difficulties in maintaining her double life, which affect her capacity to attend college with Lilly. She must decide between continuing being Hannah Montana and divulging her secret. Ultimately, she reveals her true identity to the world and before leaving for college has to deal with the effects of this decision. She merges her celebrity persona with her former private identity, and Miley Stewart enters adulthood with a newfound celebrity status. ### Themes The central conflict of the series is the disconnect between the public and private lives of Miley Stewart, and the lengths to which she must go to secure her life as a normal teenager and protect her relationships with her friends. She values her core identity as "just Miley" and endeavors to protect her sense of self. This is made evident in the pilot when she fears her friends might not treat her the same way if they become aware of her celebrity status; Miley's friendships and social opportunities at school are important to her. Jacques Steinberg of The New York Times said the series suggests celebrity status should not be confused with real life and that happiness comes as a result of staying true to one's self. In the Celebrity Studies journal, Melanie Kennedy states Miley must learn to remain as her "authentic self" while still being a celebrity; Tyler Bickford of Women's Studies Quarterly observes that lyrics in the theme song "celebrate authenticity" while also accentuating the benefits of a celebrity lifestyle. Morgan Genevieve Blue of Feminist Media Studies distinguished Hannah Montana from other programs about secret identities because of the public nature of Miley's alter ego. Series creator Michael Poryes said his goal was not to focus on the gimmick but to write about characters and relationships, exploring the real issues Miley faces and how they would be affected by her celebrity lifestyle. While Miley discloses her secret to her close friends, she largely continues to hide her identity because the loss of the anonymity would, to her, represent a loss of her youth. When she reveals her true identity to the world, it is a symbolic representation of the end of her childhood. The final episodes reflect Miley's struggle to say goodbye to her alter ego. According to Kennedy, Hannah Montana parallels the idea of "becoming a celebrity" with "growing up female" and teaches young women the perceived importance of investing in celebrity culture. This intensifies and normalizes the desire of young people to become famous. 
Bickford said the series discusses themes of publicness and consumerism. Friendship is an important theme of the series, which is evident in the relationship between Miley and her best friend Lilly. When Miley tells Lilly about her hidden persona in the pilot episode, Lilly promises not to divulge the secret to anyone. Bickford described these relationships as the "emotionally fraught", "intensely valued" core of the series, reflecting the way best-friendship is an important element of childhood. ## Production ### Development In the early 2000s, The Walt Disney Company found success with its pay television network Disney Channel, which had a pattern of original programming for a preadolescent audience that featured music. The girl group The Cheetah Girls was made popular by the eponymous television film and found commercial success outside the movie, and Hilary Duff's music was used to cross-promote the series Lizzie McGuire. Disney sponsored concerts featuring music from the network and used their talent to build on the brands; Gary Marsh, the president of Disney Channels Worldwide, cited Lizzie McGuire as its "first success". The network believed the new series Hannah Montana could be marketed in a similar manner. Disney Channel had also found success with musical episodes of its earlier comedy series Even Stevens and That's So Raven. Hollywood.com said the show could build on the success of Disney's television film High School Musical (2006), which also includes music. The sitcom premiered two months after High School Musical. The concept of Hannah Montana was originally labeled "cast contingent", meaning the series would not progress until the central roles were appropriately cast. The project was publicly announced in 2004; casting advertisements for the filming of a pilot were published in January 2005. Disney Channel officially greenlit Hannah Montana as a new, half-hour sitcom in August 2005. Twenty episodes were initially ordered for the first season and six extra episodes were later added to the commission. The series was developed by Poryes, who had previously co-created and produced That's So Raven for Disney Channel. Poryes created the show with Rich Correll and Barry O'Brien, and Steven Peterman joined Poryes as an executive producer. Disney selected the pilot for Hannah Montana to progress to a series against a potential spin-off of Lizzie McGuire, which the network also considered during the 2004–05 pilot season. The full main cast were attached to the project in August and filming for the remainder of the first season was scheduled to begin in November 2005. It's a Laugh Productions produced the program in association with the network. Former president of Disney Channels Worldwide Rich Ross stated the concept of the series conforms to the typical Disney Channel formula: "an ordinary person in an extraordinary situation". The series is primarily aimed at a preadolescent female audience; however, its framework as a family sitcom allows it to have a wider appeal. ### Casting The program and its primary cast were announced in August 2005; Miley Cyrus would be portraying the central character of Miley Stewart. After receiving the script from her agents, Miley Cyrus, aged eleven at the time, auditioned against over 1,000 applicants for the lead role, originally named Chloe Stewart. She was rejected for being too young to play the character; Marsh cited her lack of professional experience. Cyrus persistently sent the producers more audition tapes.
After six months of further casting searches, Marsh asked Cyrus, aged twelve, to audition again, and she received the role. Poryes later stated Marsh was responsible for selecting Cyrus over other "safe" choices who were more in-line with the producers' original vision. After Cyrus was cast, the character's name was changed to Miley Stewart in an attempt to limit confusion about the show's characters and premise. Network executives cited her confidence, comic timing, and "husky" singing voice as reasons for her casting on the series. In 2006, Time commented that Disney typically selected actors who had the potential to become popular celebrity figures and that Cyrus would likely experience the same process. Cyrus's father, Billy Ray Cyrus, joined the cast as Miley's father Robby Ray Stewart; he was only asked to audition after his daughter had received the role. Peterman praised the pair's "natural chemistry". Billy Ray Cyrus was initially apprehensive about being cast in the series—he did not want to "screw up Miley's show" and suggested a "real actor" be cast instead—but later accepted the role. The series also stars Emily Osment as Lilly Truscott, Mitchel Musso as Oliver Oken, and Jason Earles as Miley's older brother Jackson Stewart. Moisés Arias appears as Rico Suave in a supporting role throughout the first season; he was promoted to the main cast for the show's second season. The network dropped Musso's character Oliver to a recurring role in the fourth season because he had been cast in Pair of Kings, which was developed for the sister channel Disney XD. Guest stars including Vicki Lawrence, Jesse McCartney, and the Jonas Brothers appear throughout the series. Brooke Shields portrays Miley's deceased mother in dream sequences, through which she typically offers advice. Singer Dolly Parton, Cyrus's real life godmother, had a recurring role as Miley's godmother, Aunt Dolly. Parton stated Cyrus persuaded executives to write her into the series and credited her role for gaining her a following among young people. The final season includes guest roles from musicians Sheryl Crow and Iyaz; actors Christine Taylor, Ray Liotta, and Angus T. Jones; and television personalities Phil McGraw, Jay Leno, and Kelly Ripa. ### Music Hannah Montana includes original music; Disney released albums of songs from the series. Miley Cyrus performs as Hannah Montana and sings the show's theme song, "The Best of Both Worlds". By April 2006, a soundtrack was scheduled for release in the latter half of the year; this would be followed by a studio album by Cyrus the following year. The soundtrack album Hannah Montana was released in October 2006; many of the songs' lyrics allude to the show's premise and Miley Stewart's secret identity. Songwriter Matthew Gerrard intended to encompass the show's premise in the lyrics of the songs. Jeannie Lurie, another key songwriter, explained that it was important for their team to capture the character's voice and feelings within each song's lyrics. The soundtrack albums Hannah Montana 2: Meet Miley Cyrus (2007), Hannah Montana 3 (2009), and Hannah Montana Forever (2010) were released to coincide with their respective seasons. The lyrical themes later became more mature, and reflected storylines from the series such as romantic relationships. The show's music includes elements of teen pop, pop rock, and country pop genres. 
Steve Vincent, an executive of Disney Channel music, had previously worked on The Cheetah Girls and High School Musical, and helped to develop the sound of the projects. Vincent drew inspiration from country pop artists Shania Twain and Carrie Underwood, as well as pop artists such as Kelly Clarkson, to establish Hannah Montana's musical style. The music makes prominent use of acoustic guitars, synthesizers, and backing vocals. "Ready, Set, Don't Go", a song Billy Ray Cyrus wrote when Miley was cast, was used in the program. Guest stars, such as singer-songwriter David Archuleta, also contributed to songs on the series. ### Filming Hannah Montana was recorded in front of a live studio audience at Sunset Bronson Studios on Thursdays and Fridays. Cyrus was required to attend school on set, while Osment attended an external prep school. While filming the pilot, Cyrus performed a concert as Hannah Montana at Glendale Centre Theatre to acquire footage for the show. Production of the second season began in Los Angeles, California, in November 2006, and concluded in September 2007. In April 2008, the program was renewed for a third season, which had commenced production by August. By this time, Disney had optioned the program for a fourth season. That December, the network ordered another six episodes, extending the third season to 30 episodes. Filming for the third season concluded in mid-2009. The series also filmed episodes which aired as part of network crossover specials. The first special, That's So Suite Life of Hannah Montana, aired on July 28, 2006, as a crossover featuring That's So Raven and The Suite Life of Zack & Cody. The second special, Wizards on Deck with Hannah Montana, aired on July 17, 2009, and contained episodes of Wizards of Waverly Place and The Suite Life on Deck. ### Conclusion and impact on Cyrus Hannah Montana was renewed for a fourth season on June 1, 2009. The new set of episodes has a new setting; the Stewart family move out of their Malibu home to a nearby ranch. Billy Ray Cyrus stated this would be the final season and that Miley Cyrus hoped there would be a conclusion to the show's story. Production for the season began in January 2010, when Disney confirmed the program would be officially concluding. The series finale was scheduled to air in early 2011. As the final season was filmed, Cyrus said she wanted to move on from the series, stating, "I can't base my career off of the six-year-olds". She became increasingly uncomfortable wearing the extravagant, colorful costumes associated with Hannah and stated she had "grown out of it". In 2019, Cyrus said at the time she felt she had matured beyond working on the series and dressing up as Hannah Montana. The final season premiered on July 11, 2010. ## Episodes ## Reception ### Critical reception Bickford said Hannah Montana helped Disney return to a level of commercial success that had been absent since its musical films of the 1990s, and built on the success of the network's programs Lizzie McGuire and That's So Raven. He explained that Hannah Montana adopted a business model of combining celebrity acts with film, television, and popular music for a pre-adolescent audience and compared this model to 1990s teen pop artists such as Britney Spears and NSYNC, who were also marketed to children. Heather Phares of AllMusic described the melodies of the featured songs as strong and Cyrus's vocals as charismatic. 
Ruthann Mayes-Elma said in a journal article that Hannah Montana is a wholesome, "bubble-gum" television show, and that the use of Miley's catchphrase "sweet nibblets" in the place of profanity in the scripts helped solidify the show's family-friendly appeal. The A.V. Club's Marah Eakin found fault with the writing of Hannah Montana, criticizing its "oppressive" laugh track, and its use of stereotypes. The series has been examined for its depiction of gender roles and stereotypes. Blue said the series establishes stereotypical femininity as part of girlhood. She explained that the primary female characters, Miley and her alter ego Hannah, are positioned as post-feminist subjects in a way that confines their representation to notions of celebrity and consumerism. Bickford interpreted the theme song "The Best of Both Worlds" as an expression of Miley's choice between her contradictory identities, saying the choice is "as simple as choosing a pair of shoes" and that the character is privileged because she has multiple shoes and identities. Blue noted the contradiction of Miley's "normal life" being directly influenced by her celebrity status in ways such as financial security and a spacious home; she suggested Miley supports the family financially. Mayes-Elma criticized the portrayal of Miley as an "airhead" rather than as a "strong, agentic girl", and Blue said Lilly is depicted as a tomboy who does not uphold the femininity Miley represents. In the book The Queer Fantasies of the American Family Sitcom, Tison Pugh analyzed the subtle sexualization present within the characters of Hannah Montana, such as Jackson's girlfriend Siena, who works as a bikini model. In a journal article, Shirley Steinberg cites Miley as a character who maintains chastity but wears objectifying clothing. Mayes-Elma said guest stars such as the Jonas Brothers were incorporated by Disney to encourage the viewership of young teenage girls. Pugh stated that the program obscures the divergence between fiction and reality, due to the character of Miley Stewart sharing similarities with Miley Cyrus, Robby Ray Stewart being difficult to distinguish from Billy Ray Cyrus, and guest actors such as Parton and the Jonas Brothers playing fictional versions of themselves. Kennedy added that featuring celebrity guests, such as Leno and his real talk show The Tonight Show with Jay Leno, contributed to Miley being placed in the "real world" and thus becoming easily confusable with Cyrus. Pugh explained that displaying Miley as an authentic and likable character was a key marketing strategy, which led to Cyrus becoming closely associated with the Hannah Montana branding; Mayes-Elma explicated that Disney was selling Cyrus—a then-sixteen-year-old girl—to consumers as a "form of pop cultural prostitution". Blue also took note of the intersection between the world of the fictional characters and that of Cyrus. ### U.S. television ratings The series premiere of Hannah Montana was aired on March 24, 2006, as a lead-in to a rerun of High School Musical, and received 5.4 million viewers. This was the highest-rated premiere episode in the history of Disney Channel as of 2006. By April 2006, Hannah Montana had an average of more than 3.5 million viewers for each episode, many of whom were aged between six and fourteen. The show's most-viewed episode, "Me and Mr. Jonas and Mr. Jonas and Mr. Jonas", was aired on August 17, 2007, as a lead-out to the premiere of High School Musical 2 and was viewed by 10.7 million people.
| Season | Episodes | First aired | Last aired | Premiere viewers (millions) | Finale viewers (millions) | Average viewers (millions) |
| --- | --- | --- | --- | --- | --- | --- |
| 2 | 29 | April 23, 2007 | October 12, 2008 | 3.5 | 4.4 | 4.74 |
| 3 | 30 | November 2, 2008 | March 14, 2010 | 5.5 | 7.6 | 4.75 |
| 4 | 13 | July 11, 2010 | January 16, 2011 | 5.7 | 6.2 | 5.05 |

### Awards and nominations ## Controversies ### Cyrus's public image In 2008, Marsh commented on the importance of Cyrus maintaining a wholesome public image while starring on the network. He said, "for Miley Cyrus to be a 'good girl' is now a business decision for her". Cyrus, however, continued to develop an increasingly provocative image as Hannah Montana progressed and the series received criticism for appearing to be a negative influence on its younger audience. Pugh writes that the series acted as a natural appendage to Cyrus's "controversial transition into a sexual provocateur". Cyrus performed a pole dance the following year during her act at the Teen Choice Awards, later defending it as "right for the song and that performance", while Disney representatives did not comment. Her suggestive persona continued with the music video for "Can't Be Tamed" in 2010. The following year, Cyrus was listed as the worst celebrity influence in a JSYK poll voted on by children, following the leakage of a video showing her smoking the psychoactive plant Salvia divinorum at the age of eighteen. In the journal Tobacco Control, Cyrus's high-risk actions were described as a "turning point" for how fans perceived her behavior. Cyrus's public image continued to become more provocative and sexualized following the conclusion of the series. After a controversial performance at the 2013 MTV Video Music Awards, Melissa Henson of the Parents Television Council said parents would no longer feel comfortable allowing their children to watch Hannah Montana due to Cyrus's sexualized stage persona. Billy Ray Cyrus blamed the program for damaging his family and causing Miley's unpredictable behavior. Miley Cyrus expressed her annoyance at her history with the program in 2013, stating she wanted to suppress her previous music and re-establish her career as a mature artist. By 2019, while Cyrus believed many had viewed her as a "Disney mascot" rather than as a person during her time working for the company, she said she was proud of her work on the series. She said she would like to play the character of Hannah Montana again. Cyrus explained in 2021 that she found it difficult to separate herself from the persona of Hannah Montana. ### Revised episode A second-season episode titled "No Sugar, Sugar" was planned to air in the United States on November 2, 2008, but was removed from the schedule after complaints about its subject matter. The episode, in which Oliver is diagnosed with type 1 diabetes, was previewed online; viewers said it presented inaccurate information about the disorder. Some viewers said there was a risk of uninformed children following the episode's health information, while others commended the episode's themes of acceptance and support for diabetics.
The network revised the episode after consulting diabetes research-funding organization JDRF and filming new scenes; an updated version of the episode titled "Uptight (Oliver's Alright)" was aired during the program's third season on September 20, 2009. ### Lawsuits Television writer Buddy Sheffield alleged he pitched the concept for a television series titled Rock and Roland to Disney Channel in 2001; it would have focused on a junior-high school student who leads a secret double life as a rock star. The initial proposal was unsuccessful, and in August 2007, Sheffield filed a lawsuit against the network based on the similarities between his pitch and Hannah Montana. The lawsuit said Sheffield was owed millions of dollars in damages. A trial was scheduled to begin in August 2008, but the case was resolved privately beforehand. In April 2010, Correll and O'Brien filed a lawsuit against Disney Channel for \$5 million over profits from the program. The pair alleged they were denied their share of profits based on requirements for creators from the Writers Guild of America West. Correll, who also directed a number of episodes, further alleged he was unfairly terminated by Disney in response to giving testimony within the arbitration. By 2016, it was reported the arbitrator found \$18 million in under-reported amounts, but the franchise was still operating at a \$24 million deficit so no compensation was owed. The pair took their case to open court and claimed they were prejudiced by their arbitrator; in 2018, however, the request to overturn the ruling was refused. Poryes had filed a similar lawsuit in October 2008, but this was ultimately settled. ## Other media ### Films In 2008, Walt Disney Pictures released a concert film, Hannah Montana and Miley Cyrus: Best of Both Worlds Concert, as a three-dimensional film for a limited theatrical run. The film consists of footage of Cyrus performing as herself and as Hannah Montana at a concert during the 2007–2008 Best of Both Worlds Tour. It earned a gross of \$70.6 million worldwide. A soundtrack album of the live performances, Best of Both Worlds Concert, was released in April 2008. In 2007, Cyrus reported that plans to adapt the television series into a theatrical feature film had commenced, and that she would like to film it in her hometown, Nashville, Tennessee; production began in Los Angeles and Nashville in April 2008. Hannah Montana: The Movie was originally scheduled for release on May 1, 2009, but its release was moved up to April 10 that year. The film, directed by Peter Chelsom, follows Miley as the popularity of Hannah Montana begins to take control of her life. It grossed \$169.2 million worldwide. A soundtrack album, Hannah Montana: The Movie, was released in March 2009. ### Merchandising In December 2006, Disney released its first line of merchandise linked to Hannah Montana, which included clothing, jewelry, toys, and dolls; the line of clothing duplicated outfits Hannah wears in the series. A line of video games was also developed; the first, Hannah Montana, was released on the Nintendo DS on October 5, 2006. By February 2008, the Hannah Montana franchise had become so profitable that Disney convened an "80-person, all-platform international meeting" to discuss its future. Disney's 2008 annual report to shareholders listed the brand as one of the leading contributors to growth across the company. MSNBC estimated the Hannah Montana franchise was worth \$1 billion by the end of 2008.
The program was a commercially successful franchise for Disney Channel. ### Potential spin-off In 2011, Billy Ray Cyrus said he wanted to produce a prequel series. Hollywood Life reported in 2020 that a potential prequel about Miley Stewart's rise to fame as a pop singer, with another child actor playing the character, was being discussed for Disney+. Billy Ray Cyrus again expressed his interest in being involved, while reports said Miley Cyrus would not be.
1,185,772
Vampire: The Masquerade – Redemption
1,170,866,667
2000 video game
[ "2000 video games", "Activision games", "Classic Mac OS games", "Dark fantasy role-playing video games", "Dark fantasy video games", "Fantasy video games set in the Middle Ages", "Gothic video games", "MacSoft games", "Multiplayer and single-player video games", "NStigate Games games", "Role-playing video games", "Vampire: The Masquerade video games", "Video games developed in the United States", "Video games scored by Kevin Manthei", "Video games set in Austria", "Video games set in London", "Video games set in New York City", "Video games set in the Czech Republic", "Windows games" ]
Vampire: The Masquerade – Redemption is a 2000 role-playing video game developed by Nihilistic Software and published by Activision. The game is based on White Wolf Publishing's tabletop role-playing game Vampire: The Masquerade, a part of the larger World of Darkness series. It follows Christof Romuald, a 12th-century French crusader who is killed and revived as a vampire. The game depicts Christof's centuries-long journey from the Dark Ages of 12th century Prague and Vienna to late-20th century London and New York City in search of his humanity and his kidnapped love, the nun Anezka. Redemption is presented in the first- and third-person perspectives. The player controls Christof and up to three allies through a linear structure, providing the player with missions to progress through a set narrative. Certain actions committed by Christof throughout the game can raise or lower his humanity, affecting which of the game's three endings the player receives. As a vampire, Christof is imbued with a variety of abilities and powers that can be used to combat or avoid enemies and obstacles. Use of these abilities drains Christof's supply of blood which can be replenished by drinking from enemies or innocents. It includes multiplayer gameplay called "Storyteller", which allows one player to create a narrative for a group of players with the ability to modify the game dynamically in reaction to the players' actions. Founded in March 1998, Nihilistic's twelve-man team began development of Redemption the following month as their first game. It took the team two years to complete on a budget of US\$1.8 million. The team relied on eight outside contractors to provide elements that the team could not supply, such as music and artwork. The game's development was difficult: late changes to software forced the developers to abandon completed code and assets; a focus on high-quality graphics and sound meant that the game ran poorly on some computer systems; and the original scope of the game exceeded the game's schedule and budget, forcing the team to cancel planned features. Redemption was released for Microsoft Windows in June 2000, with a Mac OS version following in November 2001. The game received a mixed critical response; reviewers praised its graphics and its multiplayer functionality but were polarized by the quality of the story and combat. It received the 1999 Game Critics Awards for Best Role-Playing game. It was successful enough to merit the production of the indirect sequel Vampire: The Masquerade – Bloodlines (2004), which takes place in the same fictional universe. ## Gameplay Vampire: The Masquerade – Redemption is a role-playing video game (RPG) presented primarily from the third-person perspective; the playable character is shown on the screen while an optional first-person mode used to view the character's immediate environment is available. The camera can be freely rotated around the character and positioned above it to give a greater overview of the immediate area. The game follows a linear, mission-based structure. Interaction is achieved by using a mouse to click on an enemy or environmental object to attack it or to activate it. Interaction is context based; clicking on an enemy initiates combat, while clicking on a door causes it to open or close. The playable character can lead a group of three additional allies into battle, controlling their actions to attack a single enemy or to use specific powers. Characters can be set to one of three modes: defensive, neutral, or offensive. 
In defensive mode, the character remains distant from battles, while offensive mode sends the character directly into battle. The main character and active allies are represented by portraits on screen that reflect their current physical or emotional state, showing sadness, anger, feeding, or the presence of injuries or staking—having been stabbed through the heart and rendered immobile. The player can access various long-range and melee weapons, including swords, shields, bows, guns, stakes, and holy water. Some weapons have a secondary, more powerful attack; for example, a sword can be spun to decapitate a foe. Because they are vampires, allies and enemies are susceptible to damage from sunlight. Disciplines (vampiric powers) are used to supplement physical attacks. Each discipline can be upgraded, becoming a more powerful version of itself; alternatively, other in-game benefits can be gained. The game features disciplines that allow the player to enhance the character's physical abilities such as speed, strength, or durability. Disciplines can also allow the player to mesmerize an enemy or a potential feeding victim, render the character invisible to escape detection, turn the character into mist, summon serpents to attack enemies, heal, revive their allies, and teleport to a haven. Each discipline can be upgraded up to five times, affecting the abilities' durations, the scale of the damage or their effect, and the cost of using it. The characters' health and disciplines are reliant on blood, which can only be replenished by feeding on the living—including other party members—or finding blood containers such as bottles and plasma bags. Drinking an innocent to death and other negative actions reduce the player's humanity, increasing the likelihood of entering a frenzy when injured or low on blood, during which the character indiscriminately attacks friend and foe. Completing objectives and defeating enemies are rewarded with experience points, which are used to unlock or upgrade existing disciplines and improve each character's statistics, such as strength or agility. Weapons, armor, and other accessories can be purchased or upgraded using money or valuable items, which are collected throughout the game. The character's inventory is grid-based; objects occupy an allotted amount of space, requiring the management of the storage space available. A belt allows some items to be selected for immediate use during gameplay, such as healing items, without the need to access them in the main inventory. The first version of the game allows progress to be saved only in the main character's haven or safehouse; it automatically saves other data at specific points. An update to the game enabled players to save their in-game data at any point in the in-game narrative. Redemption features an online multiplayer component which allows players to engage in scenarios together. One player assumes the role of the Storyteller, guiding other players through a scenario using the Storyteller interface. The interface allows the Storyteller to create or modify scenarios by placing items, monsters, and characters across the map. Character statistics, such as experience points, abilities, and disciplines, can also be modified. Finally, the Storyteller can assume the role of any character at any given time. These functions allow the Storyteller to dynamically manipulate the play environment while the other players traverse it.
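The grid-based inventory described above is a common space-management pattern in RPGs. The short Java sketch below is purely illustrative—the class and method names are invented, and it is not code from the game—showing how an engine might test whether an item that occupies a block of cells fits at a given position before placing it.

```java
// Illustrative sketch of a grid-based inventory check, in the spirit of the
// space-management mechanic described above. Names are hypothetical, not from the game.
public class GridInventory {
    private final boolean[][] occupied; // true when a cell is already filled

    public GridInventory(int rows, int cols) {
        occupied = new boolean[rows][cols];
    }

    /** Returns true if an item of the given size fits with its top-left corner at (row, col). */
    public boolean fits(int row, int col, int itemRows, int itemCols) {
        if (row + itemRows > occupied.length || col + itemCols > occupied[0].length) {
            return false; // item would extend past the edge of the grid
        }
        for (int r = row; r < row + itemRows; r++) {
            for (int c = col; c < col + itemCols; c++) {
                if (occupied[r][c]) {
                    return false; // overlaps an item already stored
                }
            }
        }
        return true;
    }

    /** Marks the item's cells as occupied; callers are expected to check fits() first. */
    public void place(int row, int col, int itemRows, int itemCols) {
        for (int r = row; r < row + itemRows; r++) {
            for (int c = col; c < col + itemCols; c++) {
                occupied[r][c] = true;
            }
        }
    }
}
```

Because larger items consume more cells in such a scheme, the player has to weigh which weapons and supplies are worth the space they occupy.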
## Synopsis ### Setting The events depicted in Vampire: The Masquerade – Redemption occur in two time periods: 12th century Prague and Vienna, and late-20th century London and New York City. The game is set in the World of Darkness; it depicts a world in which vampires, werewolves, demons, and other creatures influence human history. The vampires are divided into seven Clans of the Camarilla—the vampire government—each with distinctive traits and abilities. The Toreadors are the closest to humanity—they have a passion for culture; the Ventrue are noble, powerful leaders; the Brujah are idealists who excel at fighting; the Malkavians are either cursed with insanity or blessed with insight; the Gangrel are loners in synchronization with their animalistic nature; the Tremere are secretive, untrustworthy, and wield blood magic; and the monstrous Nosferatu are condemned to remain hidden in the shadows. Redemption also features the Cappadocian clan; the Society of Leopold—modern-day vampire hunters; the Assamite clan of assassin vampires; the Setite clan; the Tzimisce clan; the Giovanni clan; and the Sabbat—vampires who revel in their nature, embracing the beast within. The main character of Redemption is French crusader Christof Romuald, a once-proud, religious church knight who is transformed into a Brujah vampire. With his religious faith destroyed, Christof is forced to reassess his understanding of good and evil as he acclimates to his new life. Christof's anchor to humanity is the nun Anezka, a human with a pure soul who loves Christof even after his transformation. As a member of the Brujah under Ecaterina the Wise, Christof allies with Wilhem Streicher, the Gangrel Erik, and the Cappadocian Serena during his journeys through 12th century Prague. Other characters in this era include the slaver Count Orsi, the Tremere Etrius, and the Ventrue Prince Brandl. Christof continues his quest into the late-20th century, where he allies with the Brujah Pink, the enslaved Toreador Lily, and the Nosferatu Samuel. Other characters include the 300-year-old human leader of the Society of Leopold, Leo Allatius—who has unnaturally extended his lifespan by consuming vampire blood—and the Setite leader Lucretia. During his journey, Christof comes into conflict with Vukodlak, a powerful Tzimisce vampire intent on usurping the clans' ancestors and taking their power for himself. Trapped in a mystical sleep by those who oppose his plot, Vukodlak commands his followers to help resurrect him. ### Plot In 1141 in Prague, crusader Christof Romuald is wounded in battle. He recovers in a church, where he is cared for by a nun called Anezka. The pair instantly fall in love but are restrained by their commitments to God. Christof enters a nearby silver mine to kill a monstrous Tzimisce vampire who is tormenting the city. Christof's victory is noted by the local vampires, one of whom, Ecaterina the Wise, turns him into a vampire to prevent another clan from taking him. Initially defiant, Christof agrees to accompany Ecaterina's servant Wilhem on a mission to master his new vampiric abilities. Afterward, he meets with Anezka and refuses to taint her with his cursed state. At Ecaterina's haven, the Brujah tell Christof about an impending war between the Tremere and Tzimisce clans that will devastate humans caught up in it. Wilhem and Christof gain the favor of the local Jews and Cappadocians, who devote their member Serena to the Brujah cause.
The Ventrue Prince Brandl tells the group that in Vienna, the Tremere are abducting humans to turn them into ghouls—servitors addicted and empowered by vampire blood. The group infiltrates the Tremere chantry in Prague and stops the Gangrel Erik from being turned into a Gargoyle; he joins them. Christof learns that Anezka, seeking Christof's redemption, has visited the Tremere and Tzimisce clans, and the Vienna Tremere stronghold, Haus de Hexe. The group travels there, where the Tremere leader Etrius turns Erik into a Gargoyle, forcing Christof to kill him. Etrius reveals that the Tzimisce abducted Anezka. Returning to Prague, Christof finds the Tzimisce in nearby Vyšehrad Castle have been revealed to the humans, who have launched an assault on the structure. Christof, Wilhem, and Serena infiltrate the castle and find that the powerful, slumbering Vukodlak has enslaved Anezka as a ghoul. Anezka rejects Christof and prepares to revive Vukodlak, but the outside assault collapses the castle upon them. In 1999, the Society of Leopold excavates the site of Vyšehrad Castle; they recover Christof's body and take it to London, where he is awoken by a female voice. He learns that the events at Vyšehrad and the resulting human uprising divided the vampires into two sects: the Camarilla who seek to hide from humanity and the Sabbat who seek to regain dominion over it. The Society's excavation also enables Vukodlak's followers to recover Vyšehrad. After escaping, Christof meets Pink, who agrees to help him. They learn that the Setite clan has been shipping Vyšehrad contraband to New York City and infiltrate a Setite brothel to gain information. They kill the Setite leader Lucretia and recruit Lily, an enslaved prostitute. Christof, Pink, and Lily travel to New York City aboard a contraband ship, rescue the Nosferatu Samuel from the Sabbat, and infiltrate a warehouse storing the Vyšehrad contraband. There they encounter Wilhem, who is now a Sabbat under Ecaterina following the collapse of their group. Wilhem reveals that Pink is an assassin working for Vukodlak. Pink escapes and Wilhem rejoins Christof, hoping to reclaim the humanity he has sacrificed during the previous 800 years. Together, Christof, Wilhem, Lily, and Samuel discover that Vukodlak is hidden beneath a church within his Cathedral of Flesh and that Anezka is still in his servitude. In the cathedral they find that Vukodlak has awoken; he tries to influence Christof by offering him Anezka, then revealing that she is completely dependent on Vukodlak's blood and will die without him. Christof refuses and Vukodlak drops the group into tunnels beneath the cathedral. Christof finds the Wall of Memories, which holds Anezka's memories of the intervening centuries, showing she continued to hope as Vukodlak found new ways to defile and torment her. She eventually sacrificed her innocence to gain Vukodlak's trust, using her position to delay his resurrection over hundreds of years until, with no options left, she prayed for Christof's return. The group returns to the cathedral and battles Vukodlak. The ending of Redemption varies depending upon the quantity of humanity Christof has retained during the game. If the quantity is great, Christof kills Vukodlak, reconciles with Anezka, and turns her into a vampire, sparing her from death. If his humanity is moderate, he surrenders to Vukodlak and becomes a ghoul; Vukodlak betrays Christof and forces him to murder Anezka. A lesser quantity of humanity results in Christof killing Vukodlak by drinking his blood.
Greatly empowered, Christof forsakes his humanity, murders Anezka, and revels in his new power. ## Development The development of Vampire: The Masquerade – Redemption began at Nihilistic Software in April 1998, shortly after the developer's founding in March that year. Its development was publicly announced in March 1999. Intending to move away from the first-person games the team members had worked on with previous companies, Nihilistic prepared a design and story for a futuristic RPG with similar themes and gothic aesthetics to those of the Vampire: The Masquerade series. After publisher Activision approached the team about using the White Wolf license, they adapted parts of their original design to fit the Vampire series, which became the basis of Redemption's design. Endorsement by Id Software founder John Carmack helped Nihilistic decide to work with Activision. The Nihilistic team developed Redemption over twenty-four months; the team expanded to twelve members by the end of development. The development team included Nihilistic President and CEO Ray Gesko, lead programmer Rob Huebner, world designer Steve Tietze, level designer Steve Thoms, lead artist Maarten Kraaijvanger, artist Yujin Kiem, art technician Anthony Chiang, and programmers Yves Borckmans and Ingar Shu. Activision provided a budget of US\$1.8 million; the amount was intentionally kept low to make the project manageable for Nihilistic and reduce the risk to Activision, which was relatively inexperienced with RPGs at the time. Nihilistic's management was committed to the entire team working in a one-room environment with no walls, doors, or offices, believing this would force the individual groups to communicate and allow each department to respond to queries immediately, saving hours or days of development time. Redemption's story was developed with input from White Wolf; it was co-written by Daniel Greenberg, a writer for the source pen-and-paper RPG. The small size of the team led to Nihilistic relying on eight external contractors to provide elements the team could not supply. Nick Peck was chosen to provide sound effects, ambient loops, and additional voice recordings based on his previous work on Grim Fandango (1998). Kevin Manthei provided the musical score for the game's 12th century sections, while a duo called Youth Engine provided the modern-day sections' score. Some artwork was outsourced; Peter Chan (Day of the Tentacle (1993) and Grim Fandango) developed concept art to establish the look of the game's environments, and Patrick Lambert developed character concepts and full-color drawings for the modelers and animators to use. Huebner considered that the most important external relationship was with a small start-up company called Oholoko, which produced cinematic movies for the game's story elements and endings. Nihilistic met with various computer animation firms but their prices were too high for the project budget. Redemption was officially released to manufacturing on May 30, 2000. The game features 300,000 lines of code, with a further 66,000 lines of Java for scripts. In January 2000, it was announced that Nihilistic was seeking a studio to port Redemption to the Sega Dreamcast video game console; however, this version was never released. In February 2001, after the release of the PC version, it was announced that MacSoft was developing a Mac OS version of the game.
### Technology Nihilistic initially looked at existing game engines such as the Quake engine and Unreal Engine, but decided those engines, which were primarily designed for first-person shooters, would not be sufficient for its point-and-click driven RPG and decided to create its own engine for development of Redemption. This was the NOD engine, which the developers could customize for the game's 3D perspective and role-playing mechanics. The team also considered that developing its own engine would allow it to freely reuse code for future projects or to license the engine for profit. NOD was prototyped using the Glide application programming interface (API) because the team believed it would be more stable during the engine's development, intending that once the engine was more complete, it would be moved to a more general API designed to support a wide range of hardware such as Direct3D. However, once a basic engine was in place in Glide, the programmers turned their attention to gameplay and functionality. By June 1999, Redemption was still running in Glide, which at that point lacked some of the basic features the team needed to demonstrate at that year's Electronic Entertainment Expo. When the team eventually switched to Direct3D, it was forced to abandon some custom code it had built to compensate for Glide's limitations such as texture and graphic management, which required the re-exporting of hundreds of levels and models for the new software. The late API switch also limited the time available to test the game's compatibility on a wide range of hardware. The team focused on building the game for hardware accelerated systems to avoid the limitations of supporting a wider range of systems, which had restricted the development of the company founders' previous game, Star Wars Jedi Knight: Dark Forces II (1997). The programmers suggested using 3D Studio Max for art and level design, which would save money by allowing the company to license a single piece of software, but the lead artists successfully lobbied against this plan, believing that allowing the respective teams to choose the software would allow them to work most efficiently. Huebner said this saved the project more time than any other decision made during development. The level designers chose QERadiant to take advantage of their previous experience using the software while working on Id Software's Quake series. Id allowed Nihilistic to license QERadiant and modify it to create a customized tool for its 3D environments. Because QERadiant was a finished, functional tool, it allowed the level designers to begin developing levels from the project's start and then export them into the NOD engine, rather than waiting for up to six months for Nihilistic to develop a custom tool or learning a new 3D level editor. In twenty-four months, the three level designers built over 100 in-game environments for Redemption. They obtained blueprints and sketches of buildings from medieval Prague and Vienna to better represent that period and locations. The four-person art team led by Kraaijvanger used Alias Wavefront Maya to create 3D art. Nihilistic's management wanted Kraaijvanger to use a less expensive tool but relented when the cost was found to be lower than had been thought. Throughout the project, the art team built over 1,500 3D models. At the start of development, Nihilistic wanted to support editing of the game by the user-community, having seen the benefits to the community while working on other games. 
Staff who had worked on Jedi Knight remembered the experience of creating a new, customized programming language called COG that gave the programmers the results they wanted but cost time and significant project resources. With Redemption, they wanted to incorporate an existing scripting engine that would more easily enable users to further develop the game instead of developing their own code again, which would consume months of development time. The team tested various languages, but became aware of another studio, Rebel Boat Rocker, which was receiving attention for its use of the Java language. After speaking to that studio's lead programmer Billy Zelsnak, Nihilistic decided to experiment with Java, having little prior knowledge of it. The language successfully integrated into the NOD engine without problems, providing a standardized and freely distributable scripting engine. Several designers were trained to use Java to allow them to build the several hundred scripts required to drive the game's storyline. ### Design The Nihilistic team used their experience adapting an existing property for the Star Wars games to design Redemption. Reasoning that most people would be familiar with vampire tropes, the team wrote the game assuming players would not need an explanation of the genre's common elements, while enabling them to explore White Wolf's additions to the mythos. When translating the pen-and-paper RPG to a video game, the team redesigned some of the disciplines to make them simpler to understand. For example, in the pen-and-paper game, the "Protean" discipline includes the abilities to see in the dark, grow claws, melt into the ground, and change into an animal; in Redemption, however, these were made into individual disciplines to make them instantly accessible, instead of requiring the player to select Protean and then select one of the sub-abilities. Huebner said the team struggled with restraint. From inception, the team had developed its assets for a high-end system to ensure the finished project would have top-of-the-range graphics, and because, if necessary, it could more easily scale the art down than scale it up. However, the art teams were not stopped from producing new assets, resulting in Redemption requiring approximately 1GB of storage space to install. Additionally, textures were made in 32-bit color, models were extremely detailed—featuring between 1,000 and 2,000 triangles each on average—and levels were illuminated with high-resolution light-maps. Because the game was designed for high-end computer systems, it relied on algorithms to scale down the models; combined with the high-detail art assets, Redemption was taxing to run on low- and mid-range systems. Nihilistic had intended to include both 16-bit and 32-bit versions of the game textures, and different sound quality levels to allow players to choose which versions to install, but the CD-ROM format was not spacious enough to accommodate more than one version of the game. The finished product barely fitted onto two CD-ROMs; some sound assets were removed to fit the format. This caused the game to use a large amount of computer resources and limited the ability to port it to more limited console environments. The programmers identified early on that pathfinding—the ability of the variable-sized characters to navigate through the environment—would be a problem. Huebner cited the difficulty of programming characters to navigate an environment in which level designers are free to add stairs, ramps, and other 3D objects.
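In the abstract, once an engine knows which tiles a character can stand on, movement planning reduces to a search over those walkable tiles. The short Java sketch below illustrates only that general technique—its class and method names are invented, and it is not Nihilistic's code—by running a breadth-first search to find the shortest route between two tiles.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative breadth-first search over a grid of walkable tiles; a hypothetical
// helper for the kind of navigation problem described above, not code from the game.
public class TilePathfinder {
    /** Returns the number of steps from (sr, sc) to (tr, tc) over walkable tiles, or -1 if unreachable. */
    public static int shortestSteps(boolean[][] walkable, int sr, int sc, int tr, int tc) {
        int rows = walkable.length, cols = walkable[0].length;
        if (!walkable[sr][sc]) return -1; // cannot start from a blocked tile
        int[][] dist = new int[rows][cols];
        for (int[] row : dist) java.util.Arrays.fill(row, -1);
        Queue<int[]> queue = new ArrayDeque<>();
        dist[sr][sc] = 0;
        queue.add(new int[] { sr, sc });
        int[][] moves = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } }; // four-way movement
        while (!queue.isEmpty()) {
            int[] cur = queue.remove();
            if (cur[0] == tr && cur[1] == tc) return dist[tr][tc];
            for (int[] m : moves) {
                int nr = cur[0] + m[0], nc = cur[1] + m[1];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                        && walkable[nr][nc] && dist[nr][nc] == -1) {
                    dist[nr][nc] = dist[cur[0]][cur[1]] + 1;
                    queue.add(new int[] { nr, nc });
                }
            }
        }
        return -1; // target not reachable through walkable tiles
    }
}
```

In practice, a shipping engine layers character sizes, movement costs, and full 3D geometry on top of such a search, which is where much of the difficulty described above arises.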
From the outset, the team wanted to make a grand RPG but was restricted by its budget and schedule. Reluctant to cut content such as one of the time periods or the multiplayer component, the team instead postponed the original release date from March 2000 to June of the same year, scaled back the scope of its multiplayer testing, and canceled the planned release of an interactive pre-launch demo. The delay allowed Nihilistic to retain most of the intended design, but the ability to play the entire single-player campaign as a team online had to be removed; the developers compensated by adding two multiplayer scenarios built from levels in the single-player game. Huebner said the team did not plan appropriately for multiplayer when building the Java scripts for the single-player game, which meant the scripts did not work effectively in multiplayer mode.

The multiplayer "Storyteller" mode was conceived early in the development cycle. Departing from typical deathmatch or co-operative multiplayer modes, Storyteller gives one player, the Storyteller, enough control to run a scenario and change in-game events in real time. Much of the underlying technology was simple to implement, requiring only the standard multiplayer components that allow users to connect to one another; the largest task was developing an interface that gave the Storyteller this control without becoming too complex for the average player to understand. The interface had to present lists of objects, characters, and other resources, along with options to manipulate them, and had to be usable mostly with the mouse, reserving the keyboard for less common, more advanced commands. The mode was inspired by the text-based Multi-User Dungeon, a real-time multiplayer virtual world in which high-ranking users can manipulate the environment and dynamically create adventures.
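How Redemption actually encoded Storyteller actions is not described in the sources; the sketch below is a loose illustration of the idea the passage conveys, a single privileged client whose commands, unlike those of ordinary players, are allowed to alter the shared world. The command set, class names, and permission check are all assumptions made for illustration, not the game's multiplayer protocol.

```java
// Illustrative sketch only: one way a privileged "Storyteller" client might
// issue world-editing commands alongside ordinary players. All names and the
// permission model are assumptions, not Redemption's actual implementation.

import java.util.ArrayList;
import java.util.List;

enum StorytellerAction { SPAWN_CHARACTER, GIVE_ITEM, START_SCENE, TRIGGER_EVENT }

// A command sent from a client to the host, naming an action and its target.
record StorytellerCommand(int senderId, StorytellerAction action, String targetId) {}

class MultiplayerSession {
    private final int storytellerId;           // the one privileged player
    private final List<String> worldLog = new ArrayList<>();

    MultiplayerSession(int storytellerId) { this.storytellerId = storytellerId; }

    // The host applies a command only if it came from the Storyteller.
    boolean apply(StorytellerCommand cmd) {
        if (cmd.senderId() != storytellerId) {
            return false;                       // ordinary players cannot edit the world
        }
        worldLog.add(cmd.action() + " -> " + cmd.targetId());
        return true;
    }

    List<String> worldLog() { return worldLog; }
}

public class StorytellerDemo {
    public static void main(String[] args) {
        MultiplayerSession session = new MultiplayerSession(1);
        session.apply(new StorytellerCommand(1, StorytellerAction.SPAWN_CHARACTER, "city_guard"));
        session.apply(new StorytellerCommand(2, StorytellerAction.GIVE_ITEM, "sword")); // ignored
        System.out.println(session.worldLog()); // [SPAWN_CHARACTER -> city_guard]
    }
}
```

The real difficulty described above, exposing such controls through a mostly mouse-driven interface without overwhelming the player, would sit on top of a comparatively simple command layer like this.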
## Release

Vampire: The Masquerade – Redemption was released for Microsoft Windows in North America on June 7, 2000, and in Europe on June 30. It was sold both as a standalone copy and as a Collector's Edition containing the game, a hardbound limited edition of White Wolf's The Book of Nod chronicling the first vampire, a Camarilla pendant, a strategy guide, an alternative game-case cover, and a copy of the game's soundtrack, featuring songs by Type O Negative, Gravity Kills, Ministry, Darling Violetta, Cubanate, Primus, Youth Engine, and Kevin Manthei. Nihilistic also released Embrace, a level editor with access to the game's code that allows users to modify levels and scripts. A Mac OS version was released on November 14, 2001.

During its first week on sale, Redemption was the third best-selling Windows game in the United States, behind The Sims and Who Wants To Be A Millionaire 2nd Edition; sales of the Collector's Edition were tracked separately, and it was the fifth best-selling game that same week. According to the sales-tracking firm PC Data, Redemption had sold approximately 111,193 units across North America by October, earning $4.88 million. Approximately 57,000 units were sold in Germany by March 2001; the game spent four months on Germany's list of the 30 top-selling games, peaking at number 5 in July 2000 before leaving the chart in October. Redemption received a digital release on GOG.com in February 2010.

Redemption was successful enough to merit the 2004 release of an indirect sequel, Vampire: The Masquerade – Bloodlines, which is set in the same fictional universe and was developed by Troika Games. A remake is in development as a total conversion mod for The Elder Scrolls V: Skyrim.

## Reception

The review aggregation website Metacritic gives the game a score of 74 out of 100 based on 22 reviews. Reviewers compared it to other successful RPGs, including Diablo II, Deus Ex, Darkstone, and the Final Fantasy series.

The game's graphics received near-unanimous praise. Game Revolution said its "brilliant" graphics were among "the best in gaming", and Next Generation said they were the best in any PC RPG. Computer Games called it the most attractive PC game of its time, ArsTechnica said it was the best game to look at and watch since The Last Express (1997), and PC Gamer said, "there has never been a more beautifully created RPG". The level design and environments were praised for their "painstaking" detail and brooding, atmospheric aesthetic, and reviewers also commented positively on the game's lighting effects. Conversely, Computer Gaming World (CGW) said that while the game was attractive, the visuals were superficial and failed to emphasize its horror elements; the magazine was also critical of the third-person camera, which it said obscured the area directly in front of the player and did not allow the player to look upwards.

Responses to the story were mixed; some reviewers called it strong with good dialog, while others said it was poor. Game Revolution and CGW called the dialog poor, sophomoric, and often overly verbose; CGW in particular said some speeches became an "agonizingly long filibuster" that only delayed the return of control to the player. Other critics called it one of the richest, most engrossing stories to be found outside films and novels, and more original than most RPGs. Computer Games criticized the linear storyline and said the few dialog choices available to the player had no real impact on the storytelling. CGW said the linear story prevented Redemption from being a true RPG because it lacked interaction with many characters, and that the player's lack of influence on the story made it seem as though they were not building characters but simply moving them to the next story milestone. According to PC Gamer, while the game's linearity was a negative, it kept the narrative tight and compelling.

Opinions of the voice acting were divided. Game Revolution and Computer Games said the acting ranged from adequate to good, while CGW said the voices were inappropriate, with the 12th-century European characters sounding like modern Americans, though the modern era featured better actors.
ArsTechnica said the acting was inconsistent but better than that of Deus Ex. The weather effects, background sound, and moody music were said to blend together well and help immerse the player in the game's world, though CGW said the sound quality was sometimes poor.

Much of the criticism of Redemption focused on technical problems present at release, which undermined the experience or made the game unplayable. Several reviewers criticized the initial inability to save progress at any point, which meant that dying or encountering a technical fault could force players to reload a previous save and repeat up to 30 minutes of gameplay; CGW added that the repetitive gameplay made losing progress a particular downside. Next Generation, which gave the game 3 out of 5, said that Redemption's technical issues left it potentially only a few patches away from being a 5-out-of-5 game. PC Gamer's review even recommended cheats that worked around the technical flaws.

CGW said the in-game combat became a confusing mess once allies were involved, partly because of poor artificial intelligence (AI) that caused them to use their powers liberally and run low on blood as a result. The AI was considered insufficient for the game: pathfinding failures left allies stuck on environmental objects or on each other during combat, and they squandered their costliest abilities on enemies regardless of the threat posed and were poor at staying alive in battle. Enemy AI was similarly criticized for failing to notice the player character in obvious circumstances or to respond to being attacked.

Combat itself was also criticized. Computer Games called the game "little more than a hack-and-slash adventure" and said its focus on combat ran counter to the greater emphasis on political intrigue and social interaction in the source Vampire: The Masquerade tabletop game. ArsTechnica said combat was initially fun but very repetitive, becoming a chore by the later stages of the game, and noted that every enemy dungeon consisted of four levels filled with identical enemies, while Next Generation said the number of enemies and the difficulty of defeating them often meant the playable character would run away or die. Other reviewers also criticized the repetitive combat, which amounted to repeatedly clicking on enemies until they died and fleeing when the playable character was close to death against unending waves of enemies. Disciplines were considered helpful in adding variety to combat, but battles were too fast-paced for the tactical use of a wide range of powers because combat could not be paused to issue orders.

Game Revolution said the multiplayer feature was a revelation and alone worth the cost of the game. Computer Games said it was innovative and might inspire future games. PC Gamer called the multiplayer mode the game's redeeming factor, though it was still marred by bugs. Others found aspects of the multiplayer interface insufficient, such as the inability to store custom dialog, which required the Storyteller to type text in real time during gameplay.

### Accolades

At the 1999 Game Critics Awards, Redemption was named Best RPG ahead of the first-person action RPG Deus Ex.